Tag Archives: Jenkins

Simplify Your Jenkins Builds with AWS CodeBuild

Post Syndicated from Paul Roberts original https://aws.amazon.com/blogs/devops/simplify-your-jenkins-builds-with-aws-codebuild/

Jeff Bezos famously said, “There’s a lot of undifferentiated heavy lifting that stands between your idea and that success.” He went on to say, “…70% of your time, energy, and dollars go into the undifferentiated heavy lifting and only 30% of your energy, time, and dollars gets to go into the core kernel of your idea.”

If you subscribe to this maxim, you should not be spending valuable time focusing on operational issues related to maintaining the Jenkins build infrastructure. Companies such as Riot Games run over 1.25 million builds per year and have written several lengthy blog posts about their experiences designing a complex, custom Docker-powered Jenkins build farm. Dealing with Jenkins slaves at scale is a job in itself, and Riot has engineers dedicated to managing the build infrastructure.

Typical Jenkins Build Farm

 

As with all technology, Jenkins build farm architectures have evolved. Today, instead of manually building your own container infrastructure, there are Jenkins Docker plugins available to help reduce the operational burden of maintaining these environments. There is also a community-contributed Amazon EC2 Container Service (Amazon ECS) plugin that helps remove some of the overhead, but you still need to configure and manage the overall Amazon ECS environment.

There are various ways to create and manage your Jenkins build farm, but there has to be a way that significantly reduces your operational overhead.

Introducing AWS CodeBuild

AWS CodeBuild is a fully managed build service that removes the undifferentiated heavy lifting of provisioning, managing, and scaling your own build servers. With CodeBuild, there is no software to install, patch, or update. CodeBuild scales up automatically to meet the needs of your development teams. In addition, CodeBuild is an on-demand service where you pay as you go. You are charged based only on the number of minutes it takes to complete your build.

One AWS customer, Recruiterbox, helps companies hire simply and predictably through their software platform. Two years ago, they began feeling the operational pain of maintaining their own Jenkins build farms. They briefly considered moving to Amazon ECS, but chose an even easier path forward instead. Recruiterbox transitioned to using Jenkins with CodeBuild and are very happy with the results. You can read more about their journey here.

Solution Overview: Jenkins and CodeBuild

To remove the heavy lifting from managing your Jenkins build farm, AWS has developed a Jenkins AWS CodeBuild plugin. After the plugin has been enabled, a developer can configure a Jenkins project to pick up new commits from their chosen source code repository and automatically run the associated builds. After a successful build, CodeBuild stores the resulting artifact in an S3 bucket that you have configured. If an error is detected, CodeBuild captures the build output and sends it to Amazon CloudWatch Logs. In addition to storing the logs in CloudWatch, Jenkins also captures the error so you do not have to go hunting for log files for your build.
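If you ever want to inspect a build outside of Jenkins, the same status, artifact location, and log links are available from the AWS CLI. A minimal sketch, where the project name and build ID are placeholders for your own values:

# List the most recent build for the CodeBuild project behind your Jenkins job
aws codebuild list-builds-for-project --project-name my-jenkins-project --max-items 1
# Fetch its status, the S3 artifact location, and the CloudWatch Logs deep link
aws codebuild batch-get-builds --ids my-jenkins-project:0123abcd-ef01-2345-6789-0123456789ab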

 

AWS CodeBuild with Jenkins Plugin

 

The following example uses AWS CodeCommit (Git) as the source control management (SCM) system and Amazon S3 for build artifact storage. Logs are stored in CloudWatch. A development pipeline that uses Jenkins with the CodeBuild plugin looks something like this:

 

AWS CodeBuild Diagram

Initial Solution Setup

To keep this blog post succinct, I assume that you are using the following components on AWS already and have applied the appropriate IAM policies:

  • An AWS CodeCommit repo.
  • An Amazon S3 bucket for CodeBuild artifacts.
  • An Amazon SNS topic for text messaging the Jenkins admin password.
  • An IAM user's access key and secret key.
  • A role that has a policy with these permissions (see the CLI sketch after this list). Be sure to edit the ARNs with your region, account, and resource names. Use this role in the AWS CloudFormation template referred to later in this post.
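If you prefer to create that last role from the command line, a rough sketch follows. The role name, policy name, and both policy documents are placeholders; the permissions file would contain the policy linked above with your own ARNs filled in:

# Create the role with your trust policy, then attach the inline permissions policy
aws iam create-role --role-name JenkinsCodeBuildRole --assume-role-policy-document file://trust-policy.json
aws iam put-role-policy --role-name JenkinsCodeBuildRole --policy-name JenkinsCodeBuildPolicy --policy-document file://permissions.json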

 

Jenkins Installation with CodeBuild Plugin Enabled

To make the integration with Jenkins as frictionless as possible, I have created an AWS CloudFormation template here: https://s3.amazonaws.com/proberts-public/jenkins.yaml. Download the template, sign in to the AWS CloudFormation console, and then use the template to create a stack.
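If you would rather create the stack from the command line, the call looks roughly like this. The stack name is an example, the --capabilities flag assumes the template creates IAM resources, and you may need to pass --parameters depending on the template's inputs:

aws cloudformation create-stack \
  --stack-name jenkins-codebuild \
  --template-url https://s3.amazonaws.com/proberts-public/jenkins.yaml \
  --capabilities CAPABILITY_IAM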

 

CloudFormation Inputs

Jenkins Project Configuration

After the stack is complete, log in to Jenkins on the EC2 instance using the user name "admin" and the password sent to your mobile device. Now that you have logged in to Jenkins, you need to create your first project. Start with a Freestyle project and configure the parameters based on your CodeBuild and CodeCommit settings.

 

AWS CodeBuild Plugin Configuration in Jenkins

 

Additional Jenkins AWS CodeBuild Plugin Configuration

 

After you have configured the Jenkins project appropriately, you should be able to check your build status in the Jenkins polling log under your project settings:

 

Jenkins Polling Log

 

Now that Jenkins is polling CodeCommit, you can check the CodeBuild dashboard under your Jenkins project to confirm your build was successful:

Jenkins AWS CodeBuild Dashboard

Wrapping Up

In a matter of minutes, you have provisioned Jenkins with the AWS CodeBuild plugin, which greatly simplifies your build infrastructure management. Now kick back and relax while CodeBuild does all the heavy lifting!


About the Author

Paul Roberts is a Strategic Solutions Architect for Amazon Web Services. When he is not working on Serverless, DevOps, or Artificial Intelligence, he is often found in Lake Tahoe exploring the various mountain ranges with his family.

timeShift(GrafanaBuzz, 1w) Issue 11

Post Syndicated from Blogs on Grafana Labs Blog original https://grafana.com/blog/2017/09/01/timeshiftgrafanabuzz-1w-issue-11/

September is here and summer is officially drawing to a close, but the Grafana team has stayed busy. We're prepping for the upcoming Grafana 4.5 release, have some new and updated plugins, and would like to thank two contributors for fixing a non-obvious bug. Also, the CFP for GrafanaCon EU is open, and we'd like you to speak!


GrafanaCon EU CFP is Open

Have a big idea to share? Have a shorter talk or a demo you’d like to show off?
We’re looking for 40-minute detailed talks, 20-minute general talks and 10-minute lightning talks. We have a perfect slot for any type of content.

I’d Like to Speak at GrafanaCon

Grafana Labs is Hiring!

Do you believe in open source software? Build the future with us, and ship code.

Check out our open positions

From the Blogosphere

Zabbix, Grafana and Python, a Match Made in Heaven: David’s article, published earlier this year, hits on some great points about open source software and how you don’t have to spend much (or any) money to get valuable monitoring for your infrastructure.

The Business of Democratizing Metrics: Our friends over at Packet stopped by the office recently to sit down and chat with the Grafana Labs co-founders. They discussed how Grafana started, how monitoring has evolved, and democratizing metrics.

Visualizing CloudWatch with Grafana: Yuzo put together an article outlining his first experience adding a CloudWatch data source in Grafana, importing his first dashboard, then comparing the graphs between Grafana and CloudWatch.

Monitoring Linux performance with Grafana: Jim wanted to monitor his CentOS home router to get network traffic and disk usage stats, but wanted to try something different than his previous cacti monitoring. This walkthrough shows how he set things up to collect, store and visualize the data.

Visualizing Jenkins Pipeline Results in Grafana: Piotr provides a walkthrough of his setup and configuration to view Jenkins build results for his continuous delivery environment in Grafana.


Grafana Plugins

This week we've added a plugin for the new time series database Sidewinder and updated the Carpet Plot graph panel. If you haven't installed a plugin before, it's easy. For on-premises installations, the grafana-cli tool will do the work for you. If you're using Hosted Grafana, you can install any plugin with one click.
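For an on-premises install, the commands look roughly like this; the plugin ID is a placeholder (check each plugin's page for its exact ID), and the restart command depends on how Grafana was installed:

# Install the plugin, then restart Grafana so it is loaded
grafana-cli plugins install <plugin-id>
sudo service grafana-server restart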

NEW PLUGIN

Sidewinder Data Source – This is a data source plugin for the new Sidewinder database. Sidewinder is an open source, fast time series database designed for real-time analytics. It can be used for a variety of use cases that need storage of metrics data like APM and IoT.

Install Now

UPDATED PLUGIN

Carpet Plot Panel – This plugin received an update, which includes the following features and fixes:

  • New aggregate functions: Min, Max, First, Last
  • Possibility to invert color scheme
  • Possibility to change X axis label format
  • Possibility to hide X and Y axis labels

Update Now


This week’s MVC (Most Valuable Contributor)

This week we want to thank two contributors who worked together to fix a non-obvious bug in the new MySQL data source (a bug with sorting values in the legend).

robinsonjj
Thank you Joe, for tackling this issue and submitting a PR with an initial fix.

pdoan017
pdoan017 took robinsonjj’s contribution and added a new PR to retain the order in which keys are added.

Thank you both for taking the time to troubleshoot and fix the issue. Much appreciated!


Tweet of the Week

We scour Twitter each week to find an interesting/beautiful dashboard and show it off! #monitoringLove

Nice! Combining different panel types on a dashboard can add more context to your data – Looks like a very functional dashboard.


What do you think?

Let us know how we’re doing! Submit a comment on this article below, or post something at our community forum. Help us make these roundups better and better!

Follow us on Twitter, like us on Facebook, and join the Grafana Labs community.

Validating AWS CloudFormation Templates

Post Syndicated from Remek Hetman original https://aws.amazon.com/blogs/devops/validating-aws-cloudformation-templates/

For their continuous integration and continuous deployment (CI/CD) pipelines, many companies use tools like Jenkins, Chef, and AWS CloudFormation. Usually, the process is managed by two or more teams. One team is responsible for designing and developing an application, CloudFormation templates, and so on. The other team is generally responsible for integration and deployment.

One of the challenges that a CI/CD team has is to validate the CloudFormation templates provided by the development team. Validation provides early warning about any incorrect syntax and ensures that the development team follows company policies in terms of security and the resources created by CloudFormation templates.

In this post, I focus on the validation of AWS CloudFormation templates for syntax as well as in the context of business rules.

Scripted validation solution

For CloudFormation syntax validation, one option is to use the AWS CLI to call the validate-template command. For security and resource management, another approach is to run a Jenkins pipeline from an Amazon EC2 instance under an EC2 role that has been granted only the necessary permissions.
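For reference, syntax-only validation from the AWS CLI looks like this; the file and bucket names are placeholders:

# Validate a template stored locally
aws cloudformation validate-template --template-body file://my_cf.json
# Or validate a template stored in Amazon S3
aws cloudformation validate-template --template-url https://s3.amazonaws.com/my_bucket/my_cf.json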

What if you need more control over your CloudFormation templates, such as managing parameters or attributes? What if you have many development teams where permissions to the AWS environment required by one team are either too open or not open enough for another team?

To have more control over the contents of your CloudFormation template, you can use the cf-validator Python script, which shows you how to validate different template aspects. With this script, you can validate:

  • JSON syntax
  • IAM capabilities
  • Root tags
  • Parameters
  • CloudFormation resources
  • Attributes
  • Reference resources

You can download this script from the cf-validator GitHub repo. Use the following command to run the script:

python cf-validator.py

The script takes the following parameters (a complete example invocation follows the parameter list):

  • --cf_path [Required]

    The location of the CloudFormation template in JSON format. Supported location types:

    • File system – Path to the CloudFormation template on the file system
    • Web – URL, for example, https://my-file.com/my_cf.json
    • Amazon S3 – Amazon S3 bucket, for example, s3://my_bucket/my_cf.json
  • --cf_rules [Required]

    The location of the JSON file with the validation rules. This parameter supports the same location types as --cf_path. The next section of this post has more information about defining rules.

  • --cf_res [Optional]

    The location of the JSON file with the defined AWS resources, which need to be confirmed before launching the CloudFormation template. A later section of this post has more information about resource validation.

  • --allow_cap [Optional][yes/no]

    Controls whether you allow the creation of IAM resources by the CloudFormation template, such as policies, roles, or IAM users. The default value is no.

  • --region [Optional]

    The AWS region where the existing resources were created. The default value is us-east-1.
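Putting these together, a complete invocation might look like the following; the S3 locations are placeholders for your own template, rules, and resources files:

python cf-validator.py \
  --cf_path s3://my_bucket/my_cf.json \
  --cf_rules s3://my_bucket/cf_rules.json \
  --cf_res s3://my_bucket/cf_resources.json \
  --allow_cap no \
  --region us-east-1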

Defining rules

All rules are defined in the JSON format file. Rules consist of the following keys:

  • “allow_root_keys”

    Lists allowed root CloudFormation keys. Examples of root keys are Parameters, Resources, Outputs, and so on. An empty list means that any key is allowed.

  • “allow_parameters”

    Lists allowed CloudFormation parameters. For instance, to force each CloudFormation template to use only the set of parameters defined in your pipeline, list them under this key. An empty list means that any parameter is allowed.

  • “allow_resources”

    Lists the AWS resources allowed for creation by a CloudFormation template. The format of the resource is the same as resource types in CloudFormation, but without the "AWS::" prefix. Examples: EC2::Instance, EC2::Volume, and so on. If you allow the creation of all resources from the given group, you can use a wildcard. For instance, if you allow all resources related to CloudFormation, you can add CloudFormation::* to the list instead of typing CloudFormation::Init, CloudFormation::Stack, and so on. An empty list means that all resources are allowed.

  • “require_ref_attributes”

    Lists attributes (per resource) that have to be defined in CloudFormation. The value must be referenced and cannot be hardcoded. For instance, you can require that each EC2 instance must be created from a specific AMI where Image ID has to be a passed-in parameter. An empty list means that you are not requiring specific attributes to be present for a given resource.

  • “allow_additional_attributes”

    Lists additional attributes (per resource) that can be defined and have any value in the CloudFormation template. An empty list means that any additional attribute is allowed. If you specify additional attributes for this key, then any resource attribute defined in a CloudFormation template that is not listed in this key or in the require_ref_attributes key causes validation to fail.

  • “not_allow_attributes”

    Lists attributes (per resource) that are not allowed in the CloudFormation template. This key takes precedence over the require_ref_attributes and allow_additional_attributes keys.

Rule file example

The following is an example of a rule file:

{
  "allow_root_keys" : ["AWSTemplateFormatVersion", "Description", "Parameters", "Conditions", "Resources", "Outputs"],
  "allow_parameters" : [],
  "allow_resources" : [
    "CloudFormation::*",
    "CloudWatch::Alarm",
    "EC2::Instance",
    "EC2::Volume",
    "EC2::VolumeAttachment",
    "ElasticLoadBalancing::LoadBalancer",
    "IAM::Role",
    "IAM::Policy",
    "IAM::InstanceProfile"
  ],
  "require_ref_attributes" :
    {
      "EC2::Instance" : [ "InstanceType", "ImageId", "SecurityGroupIds", "SubnetId", "KeyName", "IamInstanceProfile" ],
      "ElasticLoadBalancing::LoadBalancer" : ["SecurityGroups", "Subnets"]
    },
  "allow_additional_attributes" : {},
  "not_allow_attributes" : {}
}

Validating resources

You can use the --cf_res parameter to validate that the resources you are planning to reference in the CloudFormation template exist and are available. As a value for this parameter, point to the JSON file with defined resources. The format should be as follows:

[
  { "Type" : "SG",
    "ID" : "sg-37c9b448A"
  },
  { "Type" : "AMI",
    "ID" : "ami-e7e523f1"
  },
  { "Type" : "Subnet",
    "ID" : "subnet-034e262e"
  }
]

Summary

At this moment, this CloudFormation template validation script supports only security groups, AMIs, and subnets. But anyone with some knowledge of Python and the boto3 package can add support for additional resource types, as needed.

For more tips, please visit our AWS CloudFormation blog.

Security updates for Friday

Post Syndicated from ris original https://lwn.net/Articles/721302/rss

Security updates have been issued by Arch Linux (jenkins, libtiff, and webkit2gtk), Debian (ghostscript, kernel, and libreoffice), Fedora (dovecot, kernel, and tomcat), Mageia (firefox and tomcat), openSUSE (backintime and ffmpeg), and Ubuntu (ghostscript, libxslt, and nss).

Security updates for Friday

Post Syndicated from jake original https://lwn.net/Articles/720634/rss

Security updates have been issued by CentOS (bind, firefox, java-1.8.0-openjdk, and nss and nss-util), Debian (icedove), Fedora (jenkins-xstream and xstream), Mageia (chromium-browser-stable, flash-player-plugin, gimp, and wireshark), openSUSE (gstreamer-0_10-plugins-base), Oracle (bind, firefox, java-1.8.0-openjdk, and nss and nss-util), Red Hat (firefox and java-1.8.0-openjdk), Scientific Linux (bind, firefox, nss and nss-util, and nss-util), SUSE (xen), and Ubuntu (bind9, curl, freetype, and qemu).

Streamline AMI Maintenance and Patching Using Amazon EC2 Systems Manager | Automation

Post Syndicated from Ana Visneski original https://aws.amazon.com/blogs/aws/streamline-ami-maintenance-and-patching-using-amazon-ec2-systems-manager-automation/

Here to tell you about using Automation to streamline AMI maintenance and patching is Taylor Anderson, a Senior Product Manager with EC2.

-Ana


 

Last December at re:Invent, we launched Amazon EC2 Systems Manager, which helps you automatically collect software inventory, apply OS patches, create system images, and configure Windows and Linux operating systems. These capabilities enable automated configuration and ongoing management of systems at scale, and help maintain software compliance for instances running in Amazon EC2 or on-premises.

One feature within Systems Manager is Automation, which can be used to patch, update agents, or bake applications into an Amazon Machine Image (AMI). With Automation, you can avoid the time and effort associated with manual image updates, and instead build AMIs through a streamlined, repeatable, and auditable process.

Recently, we released the first public document for Automation: AWS-UpdateLinuxAmi. This document allows you to automate patching of Ubuntu, CentOS, RHEL, and Amazon Linux AMIs, as well as automating the installation of additional site-specific packages and configurations.

More importantly, it makes it easy to get started with Automation, eliminating the need to first write an Automation document. AWS-UpdateLinuxAmi can also be used as a template when building your own Automation workflow. Windows users can expect the equivalent document―AWS-UpdateWindowsAmi―in the coming weeks.

AWS-UpdateLinuxAmi automates the following workflow:

  1. Launch a temporary EC2 instance from a source Linux AMI.
  2. Update the instance.
    • Invoke a user-provided, pre-update hook script on the instance (optional).
    • Update any AWS tools and agents on the instance, if present.
    • Update the instance’s distribution packages using the native package manager.
    • Invoke a user-provided post-update hook script on the instance (optional).
  3. Stop the temporary instance.
  4. Create a new AMI from the stopped instance.
  5. Terminate the instance.

Warning: Creation of an AMI from a running instance carries a risk that credentials, secrets, or other confidential information from that instance may be recorded to the new image. Use caution when managing AMIs created by this process.

Configuring roles and permissions for Automation

If you haven’t used Automation before, you need to configure IAM roles and permissions. This includes creating a service role for Automation, assigning a passrole to authorize a user to provide the service role, and creating an instance role to enable instance management under Systems Manager. For more details, see Configuring Access to Automation.

Executing Automation

      1. In the EC2 console, choose Systems Manager Services, Automations.
      2. Choose Run automation document.
      3. Expand Document name and choose AWS-UpdateLinuxAmi.
      4. Choose the latest document version.
      5. For SourceAmiId, enter the ID of the Linux AMI to update.
      6. For InstanceIamRole, enter the name of the instance role you created enabling Systems Manager to manage an instance (that is, it includes the AmazonEC2RoleforSSM managed policy). For more details, see Configuring Access to Automation.
      7. For AutomationAssumeRole, enter the ARN of the service role you created for Automation. For more details, see Configuring Access to Automation.
      8. Choose Run Automation.
      9. Monitor progress in the Automation Steps tab, and view step-level outputs.

After execution is complete, choose Description to view any outputs returned by the workflow. In this example, AWS-UpdateLinuxAmi returns the new AMI ID.

Next, choose Images, AMIs to view your new AMI.

There is no additional charge to use the Automation service, and any resources created by a workflow incur nominal charges. Note that if you terminate AWS-UpdateLinuxAmi before it reaches the "Terminate Instance" step, you must shut down the temporary instance created by the workflow yourself.

A CLI walkthrough of the above steps can be found at Automation CLI Walkthrough: Patch a Linux AMI.
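As a rough sketch of what that CLI invocation looks like, the following starts the same workflow; the AMI ID, instance role name, and service role ARN are placeholders for the values you configured earlier:

aws ssm start-automation-execution \
  --document-name "AWS-UpdateLinuxAmi" \
  --parameters "SourceAmiId=ami-0123456789abcdef0,InstanceIamRole=ManagedInstanceRole,AutomationAssumeRole=arn:aws:iam::123456789012:role/AutomationServiceRole"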

Conclusion

Now that you've successfully run AWS-UpdateLinuxAmi, you may want to create default values for the service and instance roles. You can customize your workflow by creating your own Automation document based on AWS-UpdateLinuxAmi. For more details, see Create an Automation Document. After you've created your document, you can write additional steps and add them to the workflow.

Example steps include:

      • Updating an Auto Scaling group with the new AMI ID (aws:invokeLambdaFunction action type)
      • Creating an encrypted copy of your new AMI (aws:encryptedCopy action type)
      • Validating your new AMI using Run Command with the RunPowerShellScript document (aws:runCommand action type)

Automation also makes a great addition to a CI/CD pipeline for application bake-in, and can be invoked as a CLI build step in Jenkins. For details on these examples, be sure to check out the Automation technical documentation. For updates on Automation, Amazon EC2 Systems Manager, Amazon CloudFormation, AWS Config, AWS OpsWorks and other management services, be sure to follow the all-new Management Tools blog.

 

Replicating and Automating Sync-Ups for a Repository with AWS CodeCommit

Post Syndicated from Cherry Zhou original https://aws.amazon.com/blogs/devops/replicating-and-automating-sync-ups-for-a-repository-with-aws-codecommit/

by Chenwei (Cherry) Zhou, Software Development Engineer


 

Many of our customers have expressed interest in the following scenarios:

  • Backing up or replicating an AWS CodeCommit repository to another AWS region.
  • Automatically backing up repositories currently hosted on other services (for example, GitHub or BitBucket) to AWS CodeCommit.

In this blog post, we’ll show you how to automate the replication of a source repository to a repository in AWS CodeCommit. Your source repository could be another AWS CodeCommit repository, a local repository, or a repository hosted on other Git services.

To replicate your repository, you’ll first need to set up a repository in AWS CodeCommit to use as your backup/replica repository. After replicating the contents in your source repository to the backup repository, we’ll demonstrate how you can set up a scheduled job to periodically sync up your source repository with the backup/replica.

Where do I host this?

You can host your local repository and schedule your task on your own machine or on an Amazon EC2 instance. For an example of how to set up an EC2 instance for access to an AWS CodeCommit repository, including a sample AWS CloudFormation template for launching the instance, see Launch an Amazon EC2 Instance to Access the AWS CodeCommit Repository in the AWS for DevOps Guide.

 

Part 1: Set Up a Replica Repository

In this section, we’ll create an AWS CodeCommit repository and replicate your source repository to it.

  1. If you haven’t already done so, set up for AWS CodeCommit. Then follow the steps to create a CodeCommit repository in the region of your choice. Choose a name that will help you remember that this repository is a replica or backup repository. For example, you could create a repository in the US East (Ohio) region and name it MyReplicaRepo. This is the name and region we’ll use in this post.
  2. Use the git clone --mirror command to clone the source repository, including the directory where you want to create the local repo, to your local computer. You are not cloning the repository you just created in AWS CodeCommit. You are cloning the repository you want to replicate or back up to that AWS CodeCommit repository. For example, to clone a sample application created for AWS demonstration purposes and hosted on GitHub (https://github.com/awslabs/aws-demo-php-simple-app.git) to a local repo in a directory named my-repo-replica:
git clone --mirror https://github.com/awslabs/aws-demo-php-simple-app.git my-repo-replica

IMPORTANT

  • DO NOT use your working directory as the local clone repository. Your work-in-progress commits would also be pushed for backup.
  • DO NOT make local changes to this local repository. It should be used for sync-up operations only.
  • DO NOT manually push any changes to this replica repository. It will cause conflicts later when your scheduled job pushes changes in the source repository. Treat it as a read-only repository, and push all of your development changes to your source repository.
  3. Change directories to the directory where you made the clone:
cd my-repo-replica
  4. Use the git remote add RemoteName RemoteRepositoryURL command to add the AWS CodeCommit repository you created as a remote repository for the local repo. Use an appropriate nickname, such as sync. (Because this is a mirror, the default nickname, origin, will already be in use.) For example, to add your AWS CodeCommit repository MyReplicaRepo as a remote for my-repo-replica with the nickname sync:
git remote add sync ssh://git-codecommit.us-east-2.amazonaws.com/v1/repos/MyReplicaRepo

When you push large repositories, consider using SSH instead of HTTPS. When you push a large change, a large number of changes, or a large repository, long-running HTTPS connections are often terminated prematurely due to networking issues or firewall settings. For more information about setting up AWS CodeCommit for SSH, see For SSH Connections on Linux, macOS, or Unix or For SSH Connections on Windows.

Tip

Use the git remote show command to review the list of remotes set for your local repo.

  5. Run the git push sync --mirror command to push to your replica repository.
  • If you named your remote for the replica repository something else, replace sync with your remote name.
  • The --mirror option specifies that all refs under refs/ (which includes, but is not limited to, refs/heads/, refs/remotes/, and refs/tags/) will be mirrored to the remote repository. If you only want to push branches and commits, but don’t care if you push other references such as tags, you can use the --all option instead.

 

Your replica repository is now ready for sync-up operations. To do a manual sync, run git pull to pull from your original repository, and then run git push sync --mirror to push to the replica repository. Again, do not push any local changes to your replica repository at any time.

 

Part 2: Create a Periodic Sync Job

You can use a number of tools to set up an automated sync job. In this section, we’ll briefly cover four common tools: a cron job (Linux), a task in Windows Task Scheduler (Windows), a launchd instance (macOS), and, for those users who already have a Jenkins server set up, a Freestyle project with build triggers. Feel free to use whatever tools are best for you.

Note

Some hosted repositories offer options for syncing repositories, such as Git hooks, notifications, and other triggers. To learn more about those options, consult the documentation for your source repository system.

 

All of the following approaches rely on commands that pull the latest changes from the source repository to your local clone repo, and then mirror those changes to your AWS CodeCommit repository. They can be summed up as follows:

cd /path/to/your/local/repo
git pull
git push sync --mirror

Where and how you save and schedule these commands depends on your operating system and tool(s). We’ve included just a few options/examples from a variety of approaches.

 

In Linux:

  1. At the terminal, run the crontab -e command to edit your crontab file in your default editor.
  2. Add a line for a new cron job that will change directories to your local clone repo, pull from your source repository, and mirror any changes to your AWS CodeCommit repository on the schedule you specify. For example, to run a daily job at 2:45 A.M. for a local repo named my-repo-replica in the ~/tmp directory where you nicknamed your remote (the AWS CodeCommit repository) sync, your new line might look like this:
45 2 * * * cd ~/tmp/my-repo-replica && git pull && git push sync --mirror
  3. Save the crontab file and exit your editor.

 

In Windows:

  1. Create a batch file that contains the command to change directories to your local clone repo, pull from your source repository, and mirror any changes up to your AWS CodeCommit repository. For example, if you created your local repo my-repo-replica in a c:\temp directory, and you nicknamed your remote (the AWS CodeCommit repository) sync, your file might look like this:
cd /d c:\temp\my-repo-replica
git pull
git push sync --mirror
  2. Save the batch file with a name like my-repo-backup.bat.
  3. Open Task Scheduler. (Not sure how? The simplest way is to open a command line and run taskschd.msc.)
  4. In Actions, choose Create Basic Task, and then follow the steps in the wizard.

 

In macOS:

  1. Create a shell script that contains the command to change directories to your local clone repo, pull from your source repository, and mirror any changes up to your AWS CodeCommit repository. For example, if you created your local repo my-repo-replica in a ~/Documents directory, and you nicknamed your remote (the AWS CodeCommit repository) sync, your file might look like this:
cd ~/Documents/my-repo-replica
git pull
git push sync --mirror
  2. Save the shell script with a name like my-repo-backup.sh.
  3. Create a launchd property list file that runs the shell script on the schedule you specify. For example, if you stored my-repo-backup.sh in ~/Documents, to run the script daily at 2:45 A.M., your plist file might look like this:
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
<dict>
    <key>Label</key>
    <string>com.example.codecommit.backup</string>
    <key>ProgramArguments</key>
    <array>
        <string>~/Documents/my-repo-backup.sh</string>
    </array>
    <key>StartCalendarInterval</key>
    <dict>
        <key>Minute</key>
        <integer>45</integer>
        <key>Hour</key>
        <integer>2</integer>
    </dict>
</dict>
</plist>
  4. Save your plist file in the ~/Library/LaunchAgents, /Library/LaunchAgents, or /Library/LaunchDaemons folder, depending on the definition you want for the job.
  5. Run the launchctl command to load your job. For example, if you want to load a plist file named codecommit.sync.plist in ~/Library/LaunchAgents, your command might look like this:
launchctl load ~/Library/LaunchAgents/codecommit.sync.plist

 

For Jenkins:

  1. Open Jenkins.
  2. Create a new job as a Freestyle project.

codecommit_replicate_new_project

  3. In the Build Triggers section, select Build periodically, and set up a schedule for the task. Jenkins uses cron expressions to run periodic tasks. For more information, see the Jenkins documentation for the syntax of cron.

If you are replicating a GitHub or BitBucket repository, you can also set the task to build when the Git hook is triggered.

The following example builds once a day between midnight and 1 A.M.

codecommit_replicate_build_triggers

  4. In the Build section, add a build step and choose Execute Windows batch command or Execute Shell. Then write a script and implement the Git operations:
cd /path/to/your/local/repo
git pull
git push sync --mirror

Note: Jenkins may require the full path for Git.
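For example, on a Linux build agent an Execute Shell step might look like the following; the clone path and the Git location are placeholders for your environment:

cd /var/lib/jenkins/my-repo-replica
/usr/bin/git pull
/usr/bin/git push sync --mirror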

The following example is a Windows batch command file, with the full path for Git on the host.

codecommit_replicate_build

  5. Save the configuration for the task.

 

Your AWS CodeCommit replica repository will now be automatically updated with any changes to your source repository as scheduled.

We hope you've enjoyed this blog post. If you have questions or suggestions for future blog posts, please leave them in the comments below or visit our user forum!

 

Security updates for Monday

Post Syndicated from ris original https://lwn.net/Articles/716356/rss

Security updates have been issued by Arch Linux (curl), CentOS (ipa, kernel, and qemu-kvm), Debian (munin, ruby-zip, and zabbix), Fedora (bind99, gtk-vnc, jenkins, jenkins-remoting, kdelibs, kf5-kio, libcacard, libICE, libXdmcp, and vim), openSUSE (php5), Oracle (kernel), Red Hat (ansible and openshift-ansible and rpm-ostree and rpm-ostree-client), and Ubuntu (munin).

Continuous Deployment to Amazon ECS using AWS CodePipeline, AWS CodeBuild, Amazon ECR, and AWS CloudFormation

Post Syndicated from Chris Barclay original https://aws.amazon.com/blogs/compute/continuous-deployment-to-amazon-ecs-using-aws-codepipeline-aws-codebuild-amazon-ecr-and-aws-cloudformation/

Thanks to my colleague John Pignata for a great blog on how to create a continuous deployment pipeline to Amazon ECS.

Delivering new iterations of software at a high velocity is a competitive advantage in today’s business environment. The speed at which organizations can deliver innovations to customers and adapt to changing markets is increasingly a pivotal attribute that can make the difference between success and failure.

AWS provides a set of flexible services designed to enable organizations to embrace the combination of cultural philosophies, practices, and tools called DevOps that increases an organization’s ability to deliver applications and services at high velocity.

In this post, I explore the DevOps practice called continuous deployment and outline a reference architecture to implement an automated deployment pipeline for applications delivered as Docker containers onto Amazon ECS using AWS CodePipeline, AWS CodeBuild, and AWS CloudFormation.

What is continuous deployment?

Agility is often cited as a key advantage of cloud computing over the traditional delivery of IT resources. Instead of waiting weeks or months for other departments to provision a new server, developers can create new instances with a click or API call and start using them within minutes. This newfound speed and autonomy frees developers to experiment and deliver new products and features to their customers as quickly as possible.

On top of the cloud, teams are embracing DevOps practices in order to achieve a faster time-to-market, better code quality, and more reliable releases of their products and services. Continuous deployment is a DevOps practice in which new software revisions are automatically built, tested, packaged, and released to production.

Continuous deployment enables developers to ship features and fixes through an entirely automated software release process. Instead of batching up large releases over a period of weeks or months and conducting deployments manually, developers can use automation to deliver versions of their applications many times a day as new software revisions are ready for users. In the same way cloud computing abbreviates the delivery time of resources, continuous deployment reduces the release cycle of new software to your users from weeks or months to minutes.

Embracing this speed and agility has many benefits including:

  • New features and bug fixes are released to users quickly; code sitting in a source code repository does not deliver business value or benefit your customers. By releasing new software revisions as close to immediately as possible, customers start benefiting from your work more quickly and teams can get more focused feedback.
  • Change sets are smaller; large change sets create challenges in pinpointing root causes of issues, bugs, and other regressions. By releasing smaller change sets more frequently, teams can more easily attribute and correct introduced issues.
  • Automated deployment encourages best practices; as any change committed to your source code repository can be deployed immediately via automation, teams have to ensure that changes are well-tested and that their production environments are closely monitored.

How does continuous deployment work?

Continuous deployment is conducted by an automated pipeline that coordinates the activities related to software release and provides visibility into the process. During the process, a releasable artifact is built, tested, packaged, and deployed into a production environment. The releasable artifact might be an executable file, a package of script files, a container, or some other component that ultimately must be delivered to production.

AWS CodePipeline is a continuous delivery and deployment service that coordinates the building, testing, and deployment of your code each time there is a new software revision. CodePipeline provides visible, central orchestration for taking a code change and moving it through a workflow and ultimately into the hands of your users. The pipeline defines stages to retrieve code from a source code repository, build the source code into a releasable artifact, test that artifact, and deliver it to production while ensuring that these stages happen in order and are halted if a failure occurs.

While CodePipeline powers the delivery pipeline and orchestrates the process, it does not have facilities for building or testing the software itself. For these stages, CodePipeline integrates with several other tools, including AWS CodeBuild, which is a fully managed build service. CodeBuild compiles source code, runs tests, and produces software packages that are ready to deploy. That makes it ideal for the build and test stages of a continuous deployment pipeline. Out of the box, CodeBuild has native support for many different kinds of build environments, including building Docker containers.

Containers are a powerful mechanism for software delivery, as they allow for a predictable and reproducible environment and provide a high level of confidence that changes tested in one environment can be successfully deployed. AWS provides several services to run and manage Docker container images. Amazon ECS is a highly scalable and high performance container management service that allows you to run applications on a cluster of Amazon EC2 instances. Amazon ECR is a fully managed Docker container registry that makes it easy for developers to store, manage, and deploy Docker container images.

Finally, CodePipeline integrates with several services to facilitate deployment, including AWS Elastic Beanstalk, AWS CodeDeploy, AWS OpsWorks, and your own custom deployment code or process using AWS Lambda or AWS CloudFormation. These deployment actions can be used to power the final step in your pipeline to push the newly built changes live onto your production environment.

Continuous deployment to Amazon ECS

Here’s a reference architecture that puts these components together to deliver a continuous deployment pipeline of Docker applications onto ECS:

This architecture demonstrates how to deploy containers onto ECS, with images stored in ECR, using CodePipeline to build a fully automated continuous deployment pipeline on top of AWS. This approach to continuous deployment is entirely serverless and uses managed services for the orchestration, build, and deployment of your software.

The pipeline created in the reference architecture looks like the following:

In this post, I discuss each stage in this reference architecture. What happens when a developer changes some copy on a landing page and pushes that change into the source code repository?

First, in the Source stage, the pipeline is configured with details for accessing a source code repository system. In the reference architecture, you have a sample application hosted in a GitHub repository. CodePipeline polls this repository and initiates a new pipeline execution for each new commit. In addition to GitHub, CodePipeline also supports source locations such as a Git repository in AWS CodeCommit or a versioned object stored in Amazon S3. Each new build is retrieved from the source code repository, packaged as a zip file, stored on S3, and sent to the next stage of the pipeline.

The Source stage also defines a template artifact stored on Amazon S3. This is the template that defines the deployment environment used by the deployment stage after a successful build of the application.

The Build stage uses CodeBuild to create a new Docker container image based upon the latest source code and pushes it to an ECR repository. CodePipeline also integrates with a number of third-party build systems, such as Jenkins, CloudBees, Solano CI, and TeamCity.
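Inside CodeBuild, the container image work boils down to a handful of Docker and ECR commands. A rough sketch using a current AWS CLI, where the account ID, region, and repository name are placeholders:

# Authenticate to ECR, then build, tag, and push the image
aws ecr get-login-password --region us-east-1 | docker login --username AWS --password-stdin 123456789012.dkr.ecr.us-east-1.amazonaws.com
docker build -t my-app:latest .
docker tag my-app:latest 123456789012.dkr.ecr.us-east-1.amazonaws.com/my-app:latest
docker push 123456789012.dkr.ecr.us-east-1.amazonaws.com/my-app:latest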

Finally, the Deploy stage uses CloudFormation to create a new task definition revision that points to the newly built Docker container image and updates the ECS service to use the new task definition revision. After this is done, ECS initiates a deployment by fetching the new Docker container from ECR and restarting the service.
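The same kind of deployment can be approximated from the command line; a minimal sketch, where the stack name, template file, and parameter name are assumptions rather than the reference architecture's actual values:

aws cloudformation deploy \
  --stack-name my-ecs-service \
  --template-file service.yaml \
  --parameter-overrides ImageUrl=123456789012.dkr.ecr.us-east-1.amazonaws.com/my-app:latest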

After all of the pipeline's stages are green, you can reload the application in a web browser and see the developer's copy changes live in production. This happened automatically without any human intervention.

This pipeline is now in production, listening for new code in the source code repository, and ready to ship any future changes that your team pushes into production. It’s also extensible, meaning that new stages can be added to include additional steps. For example, you could include a test stage to execute unit and acceptance tests to ensure the new code revision is safe to deploy to production. After it’s deployed, a notification step could be added to alert your team via email or a Slack channel that a new version is live, along with the details about the change set deployed to production.

Conclusion

We’re excited to see what kinds of applications you can deliver to your users using this approach and how it affects your product development processes. The cloud unlocks massive advantages in agility, and the ability to implement techniques like continuous deployment unlocks a significant competitive advantage.

You’ll find an AWS CloudFormation template with everything necessary to spin up your own continuous deployment pipeline at the AWS Labs EC2 Container Service – Reference Architecture: Continuous Deployment repo on GitHub. If you have any questions, feedback, or suggestions, please let us know!

Ready-to-Run Solutions: Open Source Software in AWS Marketplace

Post Syndicated from Ana Visneski original https://aws.amazon.com/blogs/aws/ready-to-run-solutions-open-source-software-in-aws-marketplace/

There are lots of exciting things going on in the AWS Marketplace. Here to tell you more about open source software in the marketplace are Matthew Freeman and Luis Daniel Soto.

– Ana


According to industry research, enterprise use of open source software (OSS) is on the rise. More and more corporate-based developers are asking to use available OSS libraries as part of ongoing development efforts at work. These individuals may be using OSS in their own projects (i.e. evenings and weekends), and naturally want to bring to work the tools and techniques that help them elsewhere.

Consequently, development organizations in all sectors are examining the case for using open source software for applications within their own IT infrastructures as well as in the software they sell. In this Overview, we’ll show you why obtaining your open source software through AWS makes sense from a development and fiscal perspective.

Open Source Development Process
Because open source software is generally developed in independent communities of participants, acquiring and managing software versions is usually done through online code repositories. With code coming from disparate sources, it can be challenging to get the code libraries and development tools to work well together. But AWS Marketplace lets you skip this process and directly launch EC2 instances with the OSS you want. AWS Marketplace also has distributions of Linux that you can use as the foundation for your OSS solution.

Preconfigured Stacks Give You an Advantage
While we may take this 1-Click launch ability for granted with commercial software, for OSS, having preconfigured AMIs is a huge advantage. AWS Marketplace gives software companies that produce combinations or “stacks” of the most popular open source software a location from which these stacks can be launched into the AWS cloud. Companies such as TurnKey and Bitnami use their OSS experts to configure and optimize these code stacks so that the software works well together. These companies stay current with new releases of the OSS, and update their stacks accordingly as soon as new versions are available. Some of these companies also offer cloud hosting infrastructures as a paid service to make it even easier to launch and manage cloud-based servers.

As an example, one of the most popular combinations of open source software is the LAMP stack, which consists of a Linux distribution, Apache Web Server, a MySQL database, and the PHP programming library. You can select a generic LAMP stack based on the Linux distribution you prefer, then install your favorite development tools and libraries.

You would then add to it any adjustments to the underlying software that you need or want to make for your application to run as expected. For example, you may want to change the memory allocations for the application, or change the maximum file upload size in the PHP settings.
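Such adjustments usually amount to a couple of lines on the instance. A sketch assuming a Red Hat-style LAMP image; the php.ini path and the web server service name may differ on your distribution:

# Raise the PHP memory limit and maximum upload size, then restart Apache
sudo sed -i 's/^memory_limit.*/memory_limit = 256M/' /etc/php.ini
sudo sed -i 's/^upload_max_filesize.*/upload_max_filesize = 64M/' /etc/php.ini
sudo service httpd restart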

You could select an OSS application stack that contains the LAMP elements plus a single application such as WordPress, Moodle, or Joomla!®. These stacks would be configured by the vendor with optimal settings for that individual application so that it runs smoothly, with sufficient memory and disk allocations based on the application requirements. This is where stack vendors excel in providing added value to the basic software provisioning.

You might instead choose a generic LAMP stack because you need to combine multiple applications on a single server that use common components. For example, WordPress has plugins that allow it to interoperate with Moodle directly. Both applications use Apache Web Server, PHP, and MySQL. You save time by starting with the LAMP stack, and configuring the components individually as needed for WordPress and Moodle to work well together.

These are just 2 real-world examples of how you could use a preconfigured solution from AWS Marketplace and adapt it to your own needs.

OSS in AWS Marketplace
AWS Marketplace is one of the largest sites for obtaining and deploying OSS tools, applications, and servers. Here are some of the other categories in which OSS is available.

  • Application Development and Test Tools. On AWS Marketplace, you can find solutions and CloudFormation templates for EC2 servers configured with application frameworks such as Zend, ColdFusion, Ruby on Rails, and Node.js. You'll also find popular OSS choices for development and testing tools, supporting agile software development with key products such as Jenkins for test automation, Bugzilla for issue tracking, Subversion for source code management, and configuration management tools. Learn more »
  • Infrastructure Software. The successful maintenance and protection of your network is critical to your business success. OSS libraries such as OpenLDAP and OpenVPN make it possible to launch a cloud infrastructure to accompany or entirely replace an on-premises network. From offerings dedicated to handling networking and security processing to security-hardened individual servers, AWS Marketplace has numerous security solutions available to assist you in meeting the security requirements for different workloads. Learn more »
  • Database and Business Intelligence. Including OSS database, data management and open data catalog solutions. Business Intelligence and advanced analytics software can help you make sense of the data coming from transactional systems, sensors, cell phones, and a whole range of Internet-connected devices. Learn more »
  • Business Software. Availability, agility, and flexibility are key to running business applications in the cloud. Companies of all sizes want to simplify infrastructure management, deploy more quickly, lower cost, and increase revenue. Business Software running on Linux provides these key metrics. Learn more »
  • Operating Systems. AWS Marketplace has a wide variety of operating systems from FreeBSD, minimal and security hardened Linux installations to specialized distributions for security and scientific work. Learn more »

How to Get Started with OSS on AWS Marketplace
Begin by identifying the combination of software you want, and enter keywords in the Search box at the top of the AWS Marketplace home screen to find suitable offerings.

Or if you want to browse by category, just click “Shop All Categories” and select from the list.

Once you’ve made your initial search or selection, there are nearly a dozen ways to filter the results until the best candidates remain. For example, you can select your preferred Linux distribution by expanding the All Linux filter to help you find the solutions that run on that distribution. You can also filter for Free Trials, Software Pricing Plans, EC2 Instance Types, AWS Region, Average Rating, and so on.

Click on the title of the listing to see the details of that offering, including pricing, regions, product support, and links to the seller’s website. When you’ve made your selections, and you’re ready to launch the instance, click Continue, and log into your account.

Because you log in, AWS Marketplace can detect the presence of existing security groups, key pairs, and VPC settings. Make adjustments on the Launch on EC2 page, then click Accept Software Terms & Launch with 1-Click, and your instance will launch immediately.

If you prefer, you can do a Manual Launch using the AWS Console with the selection you've made, or start the instance using the API or command line interface (CLI). Either way, your EC2 instance is up and running within minutes.
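A CLI launch follows the usual run-instances pattern; every ID and name below is a placeholder for values from your own account and the listing you chose:

aws ec2 run-instances \
  --image-id ami-0123456789abcdef0 \
  --instance-type t2.micro \
  --key-name my-key-pair \
  --security-group-ids sg-0123456789abcdef0 \
  --subnet-id subnet-0123456789abcdef0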

Flexibility with Pay-As-You-Go Pricing
You pay Amazon EC2 usage costs plus, if applicable, per-hour (or per-month or annual) commercial open source software fees, directly through your AWS account. As a result, using AWS Marketplace is one of the fastest and easiest ways to get your OSS software up and running.

Visit http://aws.amazon.com/mp/oss to learn more about open source software on AWS Marketplace.

Matthew Freeman, Category Development Lead, AWS Marketplace
Luis Daniel Soto, Sr. Category GTM Leader, AWS Marketplace

Security updates for Friday

Post Syndicated from jake original http://lwn.net/Articles/707996/rss

Arch Linux has updated firefox
(two vulnerabilities) and thunderbird (code execution).

CentOS has updated thunderbird (C6; C5: code execution).

Debian-LTS has updated firefox-esr (multiple vulnerabilities), imagemagick (multiple vulnerabilities, many from 2014 and 2015), monit (cross-site request forgery), tomcat6 (multiple vulnerabilities), and tomcat7 (multiple vulnerabilities).

Fedora has updated calamares (F25; F24:
encryption bypass), jenkins (F25: code execution), jenkins-remoting (F25: code execution), moin (F25; F24; F23: cross-site scripting flaws), mujs (F23: multiple vulnerabilities), and zathura-pdf-mupdf (F23: multiple vulnerabilities).

Gentoo has updated davfs2
(privilege escalation from 2013) and gnupg
(flawed random number generation).

openSUSE has updated libtcnative-1-0 (42.2, 42.1: SSL improvements)
and pacemaker (42.2: two vulnerabilities).

Oracle has updated firefox (OL7; OL6; OL5: code execution).

Red Hat has updated firefox (code
execution).

SUSE has updated kernel (SLE11: multiple vulnerabilities, some from 2013 and 2015)
and ImageMagick
(SLE11: multiple vulnerabilities, some from 2014 and 2015).

Ubuntu has updated ghostscript
(multiple vulnerabilities, one from 2013) and oxide-qt (16.10,
16.04, 14.04: multiple vulnerabilities).

Security advisories for Wednesday

Post Syndicated from ris original http://lwn.net/Articles/707695/rss

Arch Linux has updated neovim (code execution).

Debian has updated hdf5 (multiple vulnerabilities).

Fedora has updated drupal7 (F25; F24; F23: multiple vulnerabilities), p7zip (F25: denial of service), teeworlds (F25; F24; F23: code execution), and vagrant (F25; F24; F23: nfs export insertion).

Mageia has updated jenkins-remoting (code execution) and teeworlds (code execution).

Oracle has updated thunderbird (OL7; OL6: multiple vulnerabilities).

SUSE has updated vim (SLE12-SP2; SLE11-SP4: code execution).

AWS Week in Review – October 31, 2016

Post Syndicated from Jeff Barr original https://aws.amazon.com/blogs/aws/aws-week-in-review-october-31-2016/

Over 25 internal and external contributors helped out with pull requests and fresh content this week! Thank you all for your help and your support.

Monday

October 31

Tuesday

November 1

Wednesday

November 2

Thursday

November 3

Friday

November 4

Saturday

November 5

Sunday

November 6

New & Notable Open Source

New Customer Success Stories

  • Apposphere – Using AWS and bitfusion.io from the AWS Marketplace, Apposphere can scale 50 to 60 percent month-over-month while keeping customer satisfaction high. Based in Austin, Texas, the Apposphere mobile app delivers real-time leads from social media channels.
  • CADFEM – CADFEM uses AWS to make complex simulation software more accessible to smaller engineering firms, helping them compete with much larger ones. The firm specializes in simulation software and services for the engineering industry.
  • Mambu – Using AWS, Mambu helped one of its customers launch the United Kingdom’s first cloud-based bank, and the company is now on track for tenfold growth, giving it a competitive edge in the fast-growing fintech sector. Mambu is an all-in-one SaaS banking platform for managing credit and deposit products quickly, simply, and affordably.
  • Okta – Okta uses AWS to get new services into production in days instead of weeks. Okta creates products that use identity information to grant people access to applications on multiple devices at any time, while still enforcing strong security protections.
  • PayPlug – PayPlug is a startup created in 2013 that developed an online payment solution. It differentiates itself by the simplicity of its services and its ease of integration on e-commerce websites.
  • Rent-a-Center – Rent-a-Center is a leading renter of furniture, appliances, and electronics to customers in the United States, Canada, Puerto Rico, and Mexico. Rent-A-Center uses AWS to manage its new e-commerce website, scale to support a 1,000 percent spike in site traffic, and enable a DevOps approach.
  • UK Ministry of Justice – By going all in on the AWS Cloud, the UK Ministry of Justice (MoJ) can use technology to enhance the effectiveness and fairness of the services it provides to British citizens. The MoJ is a ministerial department of the UK government. MoJ had its own on-premises data center, but lacked the ability to change and adapt rapidly to the needs of its citizens. As it created more digital services, MoJ turned to AWS to automate, consolidate, and deliver constituent services.

New SlideShare Presentations

New YouTube Videos

Upcoming Events

Help Wanted

Stay tuned for next week! In the meantime, follow me on Twitter and subscribe to the RSS feed.

Building End-to-End Continuous Delivery and Deployment Pipelines in AWS and TeamCity

Post Syndicated from Balaji Iyer original https://aws.amazon.com/blogs/devops/building-end-to-end-continuous-delivery-and-deployment-pipelines-in-aws-and-teamcity/

By Balaji Iyer, Janisha Anand, and Frank Li

Organizations that transform their applications to cloud-optimized architectures need a seamless, end-to-end continuous delivery and deployment workflow: from source code, to build, to deployment, to software delivery.

Continuous delivery is a DevOps software development practice where code changes are automatically built, tested, and prepared for a release to production. The practice expands on continuous integration by deploying all code changes to a testing environment and/or a production environment after the build stage. When continuous delivery is implemented properly, developers will always have a deployment-ready build artifact that has undergone a standardized test process.

Continuous deployment is the process of deploying application revisions to a production environment automatically, without explicit approval from a developer. This process makes the entire software release process automated. Features are released as soon as they are ready, providing maximum value to customers.

These two techniques enable development teams to deploy software rapidly, repeatedly, and reliably.

In this post, we will build an end-to-end continuous deployment and delivery pipeline using AWS CodePipeline (a fully managed continuous delivery service), AWS CodeDeploy (an automated application deployment service), and TeamCity’s AWS CodePipeline plugin. We will use AWS CloudFormation to setup and configure the end-to-end infrastructure and application stacks. The ­­pipeline pulls source code from an Amazon S3 bucket, an AWS CodeCommit repository, or a GitHub repository. The source code will then be built and tested using TeamCity’s continuous integration server. Then AWS CodeDeploy will deploy the compiled and tested code to Amazon EC2 instances.

Prerequisites

You’ll need an AWS account, an Amazon EC2 key pair, and administrator-level permissions for AWS Identity and Access Management (IAM), AWS CloudFormation, AWS CodeDeploy, AWS CodePipeline, Amazon EC2, and Amazon S3.

Overview

Here are the steps:

  1. Continuous integration server setup using TeamCity.
  2. Continuous deployment using AWS CodeDeploy.
  3. Building a delivery pipeline using AWS CodePipeline.

In less than an hour, you’ll have an end-to-end, fully-automated continuous integration, continuous deployment, and delivery pipeline for your application. Let’s get started!

1. Continuous integration server setup using TeamCity

Choose the Launch Stack button to launch an AWS CloudFormation stack that sets up a TeamCity server. If you’re not already signed in to the AWS Management Console, you will be prompted to enter your AWS credentials. This stack provides an automated way to set up a TeamCity server based on the instructions here. You can download the template used for this setup from here.

The CloudFormation template does the following:

  1. Installs and configures the TeamCity server and its dependencies in Linux.
  2. Installs the AWS CodePipeline plugin for TeamCity.
  3. Installs a sample application with build configurations.
  4. Installs PHP meta-runners required to build the sample application.
  5. Redirects TeamCity port 8111 to 80.

Choose the AWS region where the TeamCity server will be hosted. For this demo, choose US East (N. Virginia).

Select a region

On the Select Template page, choose Next.

On the Specify Details page, do the following:

  1. In Stack name, enter a name for the stack. The name must be unique in the region in which you are creating the stack.
  2. In InstanceType, choose the instance type that best fits your requirements. The default value is t2.medium.

Note: The default instance type exceeds what’s included in the AWS Free Tier. If you use t2.medium, there will be charges to your account. The cost will depend on how long you keep the CloudFormation stack and its resources.

  3. In KeyName, choose the name of your Amazon EC2 key pair.
  4. In SSHLocation, enter the IP address range that can be used to connect through SSH to the EC2 instance. SSH and HTTP access is limited to this IP address range.

Note: You can use checkip.amazonaws.com or whatsmyip.org to find your IP address. Remember to add /32 to a single IP address or, if you are representing a larger IP address space, use the correct CIDR block notation.
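
If you have curl available, one quick way to build that value is to query checkip.amazonaws.com and append /32 yourself. This is just a convenience sketch; the last line shows example output using a documentation address:

$ MY_IP="$(curl -s https://checkip.amazonaws.com)"
$ echo "${MY_IP}/32"
203.0.113.25/32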

Specify Details

Choose Next.

Although it’s optional, on the Options page, type TeamCityServer for the instance name. This is the name used in the CloudFormation template for the stack. It’s a best practice to name your instance, because it makes it easier to identify or modify resources later on.

Choose Next.

On the Review page, choose Create. It will take several minutes for AWS CloudFormation to create the resources for you.

Review

When the stack has been created, you will see a CREATE_COMPLETE message on the Overview tab in the Status column.

Events

You have now successfully created a TeamCity server. To access the server, on the EC2 Instance page, choose Public IP for the TeamCityServer instance.
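
If you prefer the AWS CLI, you can look up that public IP by filtering on the instance Name tag. This sketch assumes you kept the TeamCityServer name entered on the Options page and that your default region matches the stack’s region:

$ aws ec2 describe-instances --filters "Name=tag:Name,Values=TeamCityServer" "Name=instance-state-name,Values=running" --query 'Reservations[].Instances[].PublicIpAddress' --output text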

Public DNS

On the TeamCity First Start page, choose Proceed.

TeamCity First Start

Although an internal database based on the HSQLDB database engine can be used for evaluation, TeamCity strongly recommends that you use an external database as a back-end TeamCity database in a production environment. An external database provides better performance and reliability. For more information, see the TeamCity documentation.

On the Database connection setup page, choose Proceed.

Database connection setup

The TeamCity server will start, which can take several minutes.

TeamCity is starting

Review and accept the TeamCity License Agreement, and then choose Continue.

Next, create an Administrator account. Type a user name and password, and then choose Create Account.

You can navigate to the demo project from Projects in the top-left corner.

Projects

Note: You can create a project from a repository URL (the option used in this demo), or you can connect to your managed Git repositories, such as GitHub or BitBucket. The demo app used in this example can be found here.

We have already created a sample project configuration. Under Build, choose Edit Settings, and then review the settings.

Demo App

Choose Build Step: PHP – PHPUnit.

Build Step

The fields on the Build Step page are already configured.

Build Step

Choose Run to start the build.

Run Test

To review the tests that are run as part of the build, choose Tests.

Build

Build

You can view any build errors by choosing Build log from the same drop-down list.

Now that we have a successful build, we will use AWS CodeDeploy to set up a continuous deployment pipeline.

2. Continuous deployment using AWS CodeDeploy

Choose the Launch Stack button to launch an AWS CloudFormation stack that will use AWS CodeDeploy to set up a sample deployment pipeline. If you’re not already signed in to the AWS Management Console, you will be prompted to enter your AWS credentials.

You can download the master template used for this setup from here. The template nests two CloudFormation templates to execute all dependent stacks cohesively.

  1. Template 1 creates a fleet of up to three EC2 instances (with a base operating system of Windows or Linux), associates an instance profile, and installs the AWS CodeDeploy agent. The CloudFormation template can be downloaded from here.
  2. Template 2 creates an AWS CodeDeploy deployment group and then installs a sample application. The CloudFormation template can be downloaded from here.

Choose the same AWS region you used when you created the TeamCity server (US East (N. Virginia)).

Note: The templates contain Amazon Machine Image (AMI) mappings for us-east-1, us-west-2, eu-west-1, and ap-southeast-2 only.

On the Select Template page, choose Next.

Picture17

On the Specify Details page, in Stack name, type a name for the stack. In the Parameters section, do the following:

  1. In AppName, you can use the default, or you can type a name of your choice. The name must be between 2 and 15 characters long. It can contain lowercase and alphanumeric characters, hyphens (-), and periods (.), but the name must start with an alphanumeric character.
  2. In DeploymentGroupName, you can use the default, or you can type a name of your choice. The name must be between 2 and 25 characters long. It can contain lowercase and alphanumeric characters, hyphens (-), and periods (.), but the name must start with an alphanumeric character.

Picture18

  3. In InstanceType, choose the instance type that best fits the requirements of your application.
  4. In InstanceCount, type the number of EC2 instances (up to three) that will be part of the deployment group.
  5. For Operating System, choose Linux or Windows.
  6. Leave TagKey and TagValue at their defaults. AWS CodeDeploy will use this tag key and value to locate the instances during deployments. For information about Amazon EC2 instance tags, see Working with Tags Using the Console.

Picture19

  7. In S3Bucket and S3Key, type the bucket name and S3 key where the application is located. The default points to a sample application that will be deployed to instances in the deployment group. Based on what you selected in the OperatingSystem field, use the following values.
    Linux:
    S3Bucket: aws-codedeploy
    S3Key: samples/latest/SampleApp_Linux.zip
    Windows:
    S3Bucket: aws-codedeploy
    S3Key: samples/latest/SampleApp_Windows.zip
  8. In KeyName, choose the name of your Amazon EC2 key pair.
  9. In SSHLocation, enter the IP address range that can be used to connect through SSH/RDP to the EC2 instance.

Picture20

Note: You can use checkip.amazonaws.com or whatsmyip.org to find your IP address. Remember to add /32 to a single IP address or, if you are representing a larger IP address space, use the correct CIDR block notation.

Follow the prompts, and then choose Next.

On the Review page, select the I acknowledge that this template might cause AWS CloudFormation to create IAM resources check box. Review the other settings, and then choose Create.

Picture21

It will take several minutes for CloudFormation to create all of the resources on your behalf. The nested stacks will be launched sequentially. You can view progress messages on the Events tab in the AWS CloudFormation console.
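
You can also follow the same progress from the AWS CLI if you prefer. The stack name below is a placeholder; substitute the name you entered on the Specify Details page:

$ aws cloudformation describe-stack-events --stack-name my-codedeploy-demo --query 'StackEvents[].[Timestamp,ResourceStatus,LogicalResourceId]' --output table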

Picture22

You can see the newly created application and deployment groups in the AWS CodeDeploy console.

Picture23

To verify that your application was deployed successfully, navigate to the DNS address of one of the instances.

Picture24

Picture25

Now that we have successfully created a deployment pipeline, let’s integrate it with AWS CodePipeline.

3. Building a delivery pipeline using AWS CodePipeline

We will now create a delivery pipeline in AWS CodePipeline with the TeamCity AWS CodePipeline plugin.

  1. Build a new pipeline with Source and Deploy stages using AWS CodePipeline.
  2. Create a custom action for the TeamCity Build stage.
  3. Create an AWS CodePipeline action trigger in TeamCity.
  4. Create a Build stage in the delivery pipeline for TeamCity.
  5. Publish the build artifact for deployment.

Step 1: Build a new pipeline with Source and Deploy stages using AWS CodePipeline.

In this step, we will create an Amazon S3 bucket to use as the artifact store for this pipeline.

  1. Install and configure the AWS CLI.
  2. Create an Amazon S3 bucket that will host the build artifact. Replace account-number with your AWS account number in the following steps.
    $ aws s3 mb s3://demo-app-build-account-number
  3. Enable bucket versioning:
    $ aws s3api put-bucket-versioning --bucket demo-app-build-account-number --versioning-configuration Status=Enabled
  4. Download the sample build artifact and upload it to the Amazon S3 bucket created in step 2.
  • OSX/Linux:
    $ wget -qO- https://s3.amazonaws.com/teamcity-demo-app/Sample_Linux_App.zip | aws s3 cp - s3://demo-app-build-account-number/Sample_Linux_App.zip
  • Windows:
    $ wget -q https://s3.amazonaws.com/teamcity-demo-app/Sample_Windows_App.zip
    $ aws s3 cp ./Sample_Windows_App.zip s3://demo-app-build-account-number/

Note: You can use AWS CloudFormation to perform these steps in an automated way. When you choose the Launch Stack button, this template will be used. Use the following commands to extract the Amazon S3 bucket name, enable versioning on the bucket, and copy over the sample artifact.

$ export BUCKET_NAME="$(aws cloudformation describe-stacks --stack-name "S3BucketStack" --output text --query 'Stacks[0].Outputs[?OutputKey==`S3BucketName`].OutputValue')"
$ aws s3api put-bucket-versioning --bucket $BUCKET_NAME --versioning-configuration Status=Enabled && wget https://s3.amazonaws.com/teamcity-demo-app/Sample_Linux_App.zip && aws s3 cp ./Sample_Linux_App.zip s3://$BUCKET_NAME

You can create a pipeline by using a CloudFormation stack or the AWS CodePipeline console.

Option 1: Use AWS CloudFormation to create a pipeline

We’re going to create a two-stage pipeline that uses a versioned Amazon S3 bucket and AWS CodeDeploy to release a sample application. (You can use an AWS CodeCommit repository or a GitHub repository as the source provider instead of Amazon S3.)

Choose the Launch Stack button to launch an AWS CloudFormation stack that sets up a new delivery pipeline using the application and deployment group created in an earlier step. If you’re not already signed in to the AWS Management Console, you will be prompted to enter your AWS credentials.

Choose the US East (N. Virginia) region, and then choose Next.

Leave the default options, and then choose Next.

Picture26

On the Options page, choose Next.

Picture27

Select the I acknowledge that AWS CloudFormation might create IAM resources check box, and then choose Create. This will create the delivery pipeline in AWS CodePipeline.

Option 2: Use the AWS CodePipeline console to create a pipeline

On the Create pipeline page, in Pipeline name, type a name for your pipeline, and then choose Next step.
Picture28

Depending on where your source code is located, you can choose Amazon S3, AWS CodeCommit, or GitHub as your Source provider. The pipeline will be triggered automatically upon every check-in to your GitHub or AWS CodeCommit repository or when an artifact is published into the S3 bucket. In this example, we will be accessing the product binaries from an Amazon S3 bucket.

For Amazon S3 location, enter the location of the sample application you uploaded earlier, and then choose Next step.

Picture29

s3://demo-app-build-account-number/Sample_Linux_App.zip (or Sample_Windows_App.zip)

Note: AWS CodePipeline requires a versioned S3 bucket for source artifacts. Enable versioning for the S3 bucket where the source artifacts will be located.
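
You can confirm that versioning is enabled with a quick AWS CLI call. The bucket name shown is the demo value used earlier in this post; the command prints Enabled once versioning is on:

$ aws s3api get-bucket-versioning --bucket demo-app-build-account-number --query 'Status' --output text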

On the Build page, choose No Build. We will update the build provider information later on.

Picture31

For Deployment provider, choose CodeDeploy. For Application name and Deployment group, choose the application and deployment group we created in the deployment pipeline step, and then choose Next step.

Picture32

An IAM role will provide the permissions required for AWS CodePipeline to perform the build actions and service calls. If you already have a role you want to use with the pipeline, choose it on the AWS Service Role page. Otherwise, type a name for your role, and then choose Create role. Review the predefined permissions, and then choose Allow. Then choose Next step.

 

For information about AWS CodePipeline access permissions, see the AWS CodePipeline Access Permissions Reference.

Picture34

Review your pipeline, and then choose Create pipeline.

Picture35

This will trigger AWS CodePipeline to execute the Source and Beta steps. The source artifact will be deployed to the AWS CodeDeploy deployment groups.

Picture36

Now you can access the same DNS address of the AWS CodeDeploy instance to see the updated deployment. You will see the background color has changed to green and the page text has been updated.

Picture37

We have now successfully created a delivery pipeline with two stages and integrated the deployment with AWS CodeDeploy. Now let’s integrate the Build stage with TeamCity.
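
Before moving on, you can optionally check the pipeline and its stage status from the AWS CLI. This is a sketch and assumes the pipeline is named delivery-pipeline, the name used later in this post:

$ aws codepipeline get-pipeline-state --name delivery-pipeline --query 'stageStates[].[stageName,latestExecution.status]' --output table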

Step 2: Create a custom action for TeamCity Build stage

AWS CodePipeline includes a number of actions that help you configure build, test, and deployment resources for your automated release process. TeamCity is not included in the default actions, so we will create a custom action and then include it in our delivery pipeline. TeamCity’s CodePipeline plugin will also create a job worker that will poll AWS CodePipeline for job requests for this custom action, execute the job, and return the status result to AWS CodePipeline.

TeamCity’s custom action type (Build/Test categories) can be integrated with AWS CodePipeline. It’s similar to the Jenkins and Solano CI custom actions.

The TeamCity AWS CodePipeline plugin is already installed on the TeamCity server we set up earlier. To learn more about installing TeamCity plugins, see the TeamCity documentation. We will now create a custom action to integrate TeamCity with AWS CodePipeline using a custom-action JSON file.

Download this file locally: https://github.com/JetBrains/teamcity-aws-codepipeline-plugin/blob/master/custom-action.json

Open a terminal session (Linux, OS X, Unix) or command prompt (Windows) on a computer where you have installed the AWS CLI. For information about setting up the AWS CLI, see here.

Use the AWS CLI to run the aws codepipeline create-custom-action-type command, specifying the name of the JSON file you just downloaded.

For example, to create a build custom action:

$ aws codepipeline create-custom-action-type --cli-input-json file://teamcity-custom-action.json

This should result in an output similar to this:

{
    "actionType": {
        "inputArtifactDetails": {
            "maximumCount": 5,
            "minimumCount": 0
        },
        "actionConfigurationProperties": [
            {
                "description": "The expected URL format is http[s]://host[:port]",
                "required": true,
                "secret": false,
                "key": true,
                "queryable": false,
                "name": "TeamCityServerURL"
            },
            {
                "description": "Corresponding TeamCity build configuration external ID",
                "required": true,
                "secret": false,
                "key": true,
                "queryable": false,
                "name": "BuildConfigurationID"
            },
            {
                "description": "Must be unique, match the corresponding field in the TeamCity build trigger settings, satisfy regular expression pattern: [a-zA-Z0-9_-]+] and have length <= 20",
                "required": true,
                "secret": false,
                "key": true,
                "queryable": true,
                "name": "ActionID"
            }
        ],
        "outputArtifactDetails": {
            "maximumCount": 5,
            "minimumCount": 0
        },
        "id": {
            "category": "Build",
            "owner": "Custom",
            "version": "1",
            "provider": "TeamCity"
        },
        "settings": {
            "entityUrlTemplate": "{Config:TeamCityServerURL}/viewType.html?buildTypeId={Config:BuildConfigurationID}",
            "executionUrlTemplate": "{Config:TeamCityServerURL}/viewLog.html?buildId={ExternalExecutionId}&tab=buildResultsDiv"
        }
    }
}
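
To verify that the Custom/TeamCity action type has been registered in your region, you can list the custom action types with the AWS CLI:

$ aws codepipeline list-action-types --action-owner-filter Custom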

Before you add the custom action to your delivery pipeline, make the following changes to the TeamCity build server. You can access the server by opening the Public IP of the TeamCityServer instance from the EC2 Instance page.

Picture38

In TeamCity, choose Projects. Under Build Configuration Settings, choose Version Control Settings. You need to remove the version control trigger here so that the TeamCity build server will be triggered during the Source stage in AWS CodePipeline. Choose Detach.

Picture39

Step 3: Create a new AWS CodePipeline action trigger in TeamCity

Now add a new AWS CodePipeline trigger in your build configuration. Choose Triggers, and then choose Add new trigger.

Picture40

From the drop-down menu, choose AWS CodePipeline Action.

Picture41

 

In the trigger settings, choose the region in which you created your delivery pipeline, enter your AWS access key credentials, and for Action ID, type a unique name. You will need this ID when you add a TeamCity Build stage to the pipeline.

Picture42

Step 4: Create a new Build stage in the delivery pipeline for TeamCity

Add a stage to the pipeline and name it Build.

Picture43

From the drop-down menu, choose Build. In Action name, type a name for the action. In Build provider, choose TeamCity, and then choose Add action.

Select TeamCity, click Add action

Picture44

For TeamCity Action Configuration, use the following:

TeamCityServerURL:  http://<Public DNS address of the TeamCity build server>[:port]

Picture45

BuildConfigurationID: In your TeamCity project, choose Build. You’ll find this ID (AwsDemoPhpSimpleApp_Build) under Build Configuration Settings.

Picture46

ActionID: In your TeamCity project, choose Build. You’ll find this ID under Build Configuration Settings. Choose Triggers, and then choose AWS CodePipeline Action.

Picture47

Next, choose input and output artifacts for the Build stage, and then choose Add action.

Picture48

We will now publish a new artifact to the Amazon S3 artifact bucket we created earlier, so we can see the deployment of a new app and its progress through the delivery pipeline. The demo app used in this artifact can be found here for Linux or here for Windows.

Download the sample build artifact and upload it to the Amazon S3 bucket created in step 2.

OSX/Linux:

$ wget -qO- https://s3.amazonaws.com/teamcity-demo-app/PhpArtifact.zip | aws s3 cp - s3://demo-app-build-account-number/PhpArtifact.zip

Windows:

$ wget -q https://s3.amazonaws.com/teamcity-demo-app/WindowsArtifact.zip
$ aws s3 cp ./WindowsArtifact.zip s3://demo-app-build-account-number/

From the AWS CodePipeline dashboard, under delivery-pipeline, choose Edit.

Picture49

Edit Source stage by choosing the edit icon on the right.

Amazon S3 location:

Linux: s3://demo-app-build-account-number/PhpArtifact.zip

Windows: s3://demo-app-build-account-number/WindowsArtifact.zip

Under Output artifacts, make sure My App is displayed for Output artifact #1. This will be the input artifact for the Build stage.

Picture50

The output artifact of the Build stage should be the input artifact of the Beta deployment stage (in this case, MyAppBuild).

Picture51

Choose Update, and then choose Save pipeline changes. On the next page, choose Save and continue.

Step 5: Publish the build artifact for deployment

Picture52

Step (a)

In TeamCity, on the Build Steps page, for Runner type, choose Command Line, and then add the following custom script to copy the source artifact to the TeamCity build checkout directory.

Note: This step is required only if your AWS CodePipeline source provider is either AWS CodeCommit or Amazon S3. If your source provider is GitHub, this step is redundant, because the artifact is copied over automatically by the TeamCity AWS CodePipeline plugin.

In Step name, enter a name for the Command Line runner to easily distinguish the context of the step.

Syntax:

$ cp -R %codepipeline.artifact.input.folder%/<CodePipeline-Name>/<build-input-artifact-name>/* %teamcity.build.checkoutDir%
$ unzip *.zip -d %teamcity.build.checkoutDir%
$ rm -rf %teamcity.build.checkoutDir%/*.zip

For Custom script, use the following commands:

cp -R %codepipeline.artifact.input.folder%/delivery-pipeline/MyApp/* %teamcity.build.checkoutDir%
unzip *.zip -d %teamcity.build.checkoutDir%
rm -rf %teamcity.build.checkoutDir%/*.zip

Picture53

Step (b):

For Runner type, choose Command Line, and then add the following custom script to copy the build artifact to the output folder.

For Step name, enter a name for the Command Line runner.

Syntax:

$ mkdir -p %codepipeline.artifact.output.folder%/<CodePipeline-Name>/<build-output-artifact-name>/
$ cp -R %codepipeline.artifact.input.folder%/<CodePipeline-Name>/<build-input-artifact-name>/* %codepipeline.artifact.output.folder%/<CodePipeline-Name>/<build-output-artifact-name>/

For Custom script, use the following commands:

mkdir -p %codepipeline.artifact.output.folder%/delivery-pipeline/MyAppBuild/
cp -R %codepipeline.artifact.input.folder%/delivery-pipeline/MyApp/* %codepipeline.artifact.output.folder%/delivery-pipeline/MyAppBuild/

CopyToOutputFolder

In Build Steps, choose Reorder build steps to ensure that the step that copies the source artifact is executed before the PHP – PHPUnit step.

Picture54

Drag and drop Copy Source Artifact To Build Checkout Directory to make it the first build step, and then choose Apply.

Picture55

Navigate to the AWS CodePipeline console. Choose the delivery pipeline, and then choose Release change. When prompted, choose Release.

Choose Release on the next prompt.

Picture56

The most recent change will run through the pipeline again. It might take a few moments before the status of the run is displayed in the pipeline view.
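
If you prefer the AWS CLI, the equivalent of choosing Release change is to start a pipeline execution directly. This sketch assumes the pipeline is named delivery-pipeline:

$ aws codepipeline start-pipeline-execution --name delivery-pipeline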

Here is what you’d see after AWS CodePipeline runs through all of the stages in the pipeline:

Picture57

Let’s access one of the instances to see the new application deployment on the EC2 Instance page.

Picture58

If your base operating system is Windows, accessing the public DNS address of one of the AWS CodeDeploy instances will result in the following page.

Windows: http://public-dns/

Picture59

If your base operating system is Linux, accessing the public DNS address of one of the AWS CodeDeploy instances will display the following test page, which is the sample application.

Linux: http://public-dns/www/index.php

Picture60

Congratulations! You’ve created an end-to-end deployment and delivery pipeline, from source code, to build, to deployment, in a fully automated way.

Summary

In this post, you learned how to build an end-to-end delivery and deployment pipeline on AWS. Specifically, you learned how to build an end-to-end, fully automated, continuous integration, continuous deployment, and delivery pipeline for your application, at scale, using AWS deployment and management services. You also learned how AWS CodePipeline can be easily extended through the use of custom triggers to integrate other services like TeamCity.

If you have questions or suggestions, please leave a comment below.

AWS Week in Review – September 19, 2016

Post Syndicated from Jeff Barr original https://aws.amazon.com/blogs/aws/aws-week-in-review-september-19-2016/

Eighteen (18) external and internal contributors worked together to create this edition of the AWS Week in Review. If you would like to join the party (with the possibility of a free lunch at re:Invent), please visit the AWS Week in Review on GitHub.

Monday

September 19

Tuesday

September 20

Wednesday

September 21

Thursday

September 22

Friday

September 23

Saturday

September 24

Sunday

September 25

New & Notable Open Source

  • ecs-refarch-cloudformation is a reference architecture for deploying microservices with Amazon ECS, AWS CloudFormation (YAML), and an Application Load Balancer.
  • rclone syncs files and directories to and from S3 and many other cloud storage providers.
  • Syncany is an open source cloud storage and filesharing application.
  • chalice-transmogrify is an AWS Lambda Python Microservice that transforms arbitrary XML/RSS to JSON.
  • amp-validator is a serverless AMP HTML Validator Microservice for AWS Lambda.
  • ecs-pilot is a simple tool for managing AWS ECS.
  • vman is an object version manager for AWS S3 buckets.
  • aws-codedeploy-linux is a demo of how to use CodeDeploy and CodePipeline with AWS.
  • autospotting is a tool for automatically replacing EC2 instances in AWS AutoScaling groups with compatible instances requested on the EC2 Spot Market.
  • shep is a framework for building APIs using AWS API Gateway and Lambda.

New SlideShare Presentations

New Customer Success Stories

  • NetSeer significantly reduces costs, improves the reliability of its real-time ad-bidding cluster, and delivers 100-millisecond response times using AWS. The company offers online solutions that help advertisers and publishers match search queries and web content to relevant ads. NetSeer runs its bidding cluster on AWS, taking advantage of Amazon EC2 Spot Fleet Instances.
  • New York Public Library moved its fractured IT environment, which relied on older technology and legacy computing, to a modernized platform on AWS. The New York Public Library has been a provider of free books, information, ideas, and education for more than 17 million patrons a year. Using Amazon EC2, Elastic Load Balancing, Amazon RDS, and Auto Scaling, NYPL is able to build scalable, repeatable systems quickly at a fraction of the cost.
  • MakerBot uses AWS to understand what its customers need, and to go to market faster with new and innovative products. MakerBot is a desktop 3-D printing company with more than 100 thousand customers using its 3-D printers. MakerBot uses Matillion ETL for Amazon Redshift to process data from a variety of sources in a fast and cost-effective way.
  • University of Maryland, College Park uses the AWS Cloud to create a stable, secure, and modern technical environment for its students and staff while ensuring compliance. The University of Maryland is a public research university located in the city of College Park, Maryland, and is the flagship institution of the University System of Maryland. The university used AWS to migrate all of its data centers to the cloud, and it uses Amazon WorkSpaces to give students access to software anytime, anywhere, and on any device.

Upcoming Events

Help Wanted

Stay tuned for next week! In the meantime, follow me on Twitter and subscribe to the RSS feed.

Authenticating Amazon ECR Repositories for Docker CLI with Credential Helper

Post Syndicated from Daniele Stroppa original https://aws.amazon.com/blogs/compute/authenticating-amazon-ecr-repositories-for-docker-cli-with-credential-helper/

This is a guest post from my colleagues Ryosuke Iwanaga and Prahlad Rao.

————————

Developers building and managing microservices and containerized applications using Docker containers require a secure, scalable repository to store and manage Docker images. To access the repository securely, proper authentication from the Docker client to the repository is important, but re-authenticating or refreshing the authentication token every few hours can quickly become cumbersome.

This post gives a quick overview of Amazon ECR and shows how deploying the Amazon ECR Docker Credential Helper can automate authentication token refreshes on Docker push and pull requests.

Overview of Amazon ECS and Amazon ECR
Amazon ECS is a highly scalable, fast container management service that makes it easy to run and manage Docker containers on a cluster of Amazon EC2 instances and eliminates the need to operate your own cluster management or worry about scaling management infrastructure.

In order to reliably store Docker images on AWS, ECR provides a managed Docker registry service that is secure, scalable, and reliable. ECR is a private Docker repository with resource-based permissions using IAM so that users or EC2 instances can access repositories and images through the Docker CLI to push, pull, and manage images.

Manual ECR authentication with the Docker CLI
Most commonly, developers use Docker CLI to push and pull images or automate as part of a CI/CD workflow. Because Docker CLI does not support standard AWS authentication methods, client authentication must be handled so that ECR knows who is requesting to push or pull an image.

This can be done with a docker login command to authenticate to an ECR registry that provides an authorization token valid for 12 hours. One of the reasons for the 12-hour validity and subsequent necessary token refresh is that the Docker credentials are stored in a plain-text file and can be accessed if the system is compromised, which essentially gives access to the images. Authenticating every 12 hours ensures appropriate token rotation to protect against misuse.

If you’re using the AWS CLI, you can use the simpler get-login command, which retrieves the token, decodes it, and converts it into a docker login command for you. An example for the default registry associated with the account is shown below:


$ aws ecr get-login
docker login -u AWS -p password -e none https://aws_account_id.dkr.ecr.us-east-1.amazonaws.com

To access other account registries, use the --registry-ids <aws_account_id> option.

As you can see, the resulting output is a docker login command that you can use to authenticate your Docker client to your ECR registry. The generated token is valid for 12 hours, which means developers running and managing container images have to re-authenticate every 12 hours manually, or script it to generate a new token, which can be somewhat cumbersome in a CI/CD environment. For example, if you’re using Jenkins to build and push Docker images to ECR, you have to set up your Jenkins instances to re-authenticate to ECR using get-login every 12 hours.

If you want a programmatic approach, you can use GetAuthorizationToken from the AWS SDK to fetch credentials for Docker. GetAuthorizationToken returns an authorization token of a base64-encoded string that can be decoded into username and password with “AWS” as username and temporary token as password.
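
The same flow can be sketched with the AWS CLI: fetch the token, base64-decode it, and take the portion after the AWS: prefix as the password (on macOS, you may need base64 -D instead of base64 --decode):

$ aws ecr get-authorization-token --output text --query 'authorizationData[].authorizationToken' | base64 --decode | cut -d: -f2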

It’s important to note that when executing docker login commands, the command string can be visible to other users on the system in a process list, e.g., ps -e, meaning other users can view the authentication credentials and gain push and pull access to your repositories. To avoid this, you can log in interactively by omitting the -p password option and entering the password only when prompted. Overall, this may add additional overhead in a continuous development environment where developers need to worry about re-authentication every few hours.

Amazon ECR Docker Credential Helper
This is where Amazon ECR Docker Credential Helper makes it easy for developers to use ECR without the need to use docker login or write logic to refresh tokens and provide transparent access to ECR repositories.

Credential Helper helps developers in a continuous development environment to automate the authentication process to ECR repositories without having to regenerate tokens every 12 hours. In addition, Credential Helper also provides token caching under the hood so you don’t have to worry about getting throttled or writing additional logic.

You can access Credential Helper in the amazon-ecr-credential-helper GitHub repository.

Using Credential Helper on Linux/Mac and Windows
The prerequisites include:

  • Docker 1.11 or above installed on your system
  • AWS credentials available in one of the standard locations:
    • ~/.aws/credentials file
    • AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY environment variables
    • IAM role for Amazon EC2

First, build a binary for your client machine. Although you can do this with your own Go environment, we also provide a way to build it inside a Docker container without installing Go yourself. To build with the container, just type make docker in the root directory of the repository. It will run a container based on the Go image and build the binary on a mounted volume. After that, you can find it at ./bin/local/docker-credential-ecr-login.

Note: You need to run this with the local Docker engine as the remote Docker Engine can’t mount your local volume.

You can also build the binary cross compiled:

  • To build a Mac binary, use make docker TARGET_GOOS=darwin
  • To build a Windows binary, use make docker TARGET_GOOS=windows

With these commands, Go builds the binary for the target OS inside the Linux container.
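
Putting those steps together, a typical build session might look like the following. The repository URL is an assumption based on the project name mentioned above; adjust it if the helper lives elsewhere:

$ git clone https://github.com/awslabs/amazon-ecr-credential-helper.git
$ cd amazon-ecr-credential-helper
$ make docker                      # Linux binary at ./bin/local/docker-credential-ecr-login
$ make docker TARGET_GOOS=darwin   # cross-compiled Mac binary
$ make docker TARGET_GOOS=windows  # cross-compiled Windows binary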

The last thing you need to do is create a Docker configuration file for the helper. Put the file under ~/.docker/config.json or C:\Users\bob\.docker\config.json with the following content:


{
    "credsStore": "ecr-login"
}

Now, you can use the docker command to interact with ECR without docker login. When you type docker push/pull YOUR_ECR_IMAGE_ID, Credential Helper is called and communicates with the ECR endpoint to get the Docker credentials. Because it automatically detects the proper region from the image ID, you don’t have to worry about it.
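
With the helper configured, a push or pull looks like any other Docker command and no docker login step is needed. The account ID, region, and repository name below are placeholders:

$ docker pull 123456789012.dkr.ecr.us-east-1.amazonaws.com/my-app:latest
$ docker push 123456789012.dkr.ecr.us-east-1.amazonaws.com/my-app:latest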

Using Credential Helper with Jenkins
One of the common customer deployment patterns with ECS and ECR is integrating with existing CI/CD tools like Jenkins. Using Credential Helper, your Docker CI/CD setup with Jenkins is much simpler and more reliable.
To set up ECR as a Docker image repository for Jenkins and configure Credential Helper:

  • Ensure that your Jenkins instance has the proper AWS credentials to pull from and push to your ECR repository. These can be in the form of environment variables, a shared credential file, or an instance profile.
  • Place the docker-credential-ecr-login binary in one of the directories in $PATH.
  • Write the Docker configuration file under the home directory of the Jenkins user, for example, /var/lib/jenkins/.docker/config.json.
  • Install the Docker Build and Publish plugin and make sure that the jenkins user can contact the Docker daemon. (A minimal setup sketch for these steps follows this list.)
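
Here is a minimal sketch of that host setup on a typical Linux Jenkins installation. The binary path, Jenkins home directory, and jenkins user and group are assumptions that may differ on your system:

$ sudo cp ./bin/local/docker-credential-ecr-login /usr/local/bin/
$ sudo mkdir -p /var/lib/jenkins/.docker
$ echo '{ "credsStore": "ecr-login" }' | sudo tee /var/lib/jenkins/.docker/config.json
$ sudo chown -R jenkins:jenkins /var/lib/jenkins/.docker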

Then, create a project with a build step, as in the following screenshot:

Jenkins Build Step

Now Jenkins can push/pull images to the ECR registry without needing to refresh tokens, just like your previous Docker CLI experience.

Conclusion
The Amazon ECR Docker Credential Helper provides a very efficient way to access ECR repositories. It works transparently, so you no longer need to think about authentication after setup. This tool is hosted on GitHub and we welcome your feedback and pull requests.

If you have any questions or suggestions, please comment below.

AWS Week in Review – August 22, 2016

Post Syndicated from Jeff Barr original https://aws.amazon.com/blogs/aws/aws-week-in-review-august-22-2016/

Here’s the first community-driven edition of the AWS Week in Review. In response to last week’s blog post (AWS Week in Review – Coming Back With Your Help!), 9 other contributors helped to make this post a reality. That’s a great start; let’s see if we can go for 20 this week.


Monday

August 22

Tuesday

August 23

Wednesday

August 24

Thursday

August 25

Friday

August 26

Sunday

August 28

New & Notable Open Source

New SlideShare Presentations

Upcoming Events

Help Wanted

Stay tuned for next week, and please consider helping to make this a community-driven effort!

Jeff;