Since our last System and Organization Control (SOC) audit, our service and compliance teams have been working to increase the number of AWS services in scope, prioritized based on customer requests. Today, we’re happy to report that 11 services are newly SOC compliant, a 21 percent increase in the last six months.
Our latest SOC 1, 2, and 3 reports covering the period from October 1, 2017 to March 31, 2018 are now available. The SOC 1 and 2 reports are available on-demand through AWS Artifact by logging into the AWS Management Console. The SOC 3 report can be downloaded here.
Finally, prospective customers can read our SOC 1 and 2 reports by reaching out to AWS Compliance.
Want more AWS Security news? Follow us on Twitter.
AWS has updated its certifications against ISO 9001, ISO 27001, ISO 27017, and ISO 27018 standards, bringing the total to 67 services now under ISO compliance. We added the following 29 services this cycle:
AWS maintains certifications through extensive audits of its controls to ensure that information security risks that affect the confidentiality, integrity, and availability of company and customer information are appropriately managed.
You can download copies of the AWS ISO certificates that contain AWS’s in-scope services and Regions, and use these certificates to jump-start your own certification efforts:
The Paris Region will benefit from three AWS Direct Connect locations. Telehouse Voltaire is available today. AWS Direct Connect will also become available at Equinix Paris in early 2018, followed by Interxion Paris.
All AWS infrastructure regions around the world are designed, built, and regularly audited to meet the most rigorous compliance standards and to provide high levels of security for all AWS customers. These include ISO 27001, ISO 27017, ISO 27018, SOC 1 (formerly SAS 70), SOC 2 and SOC 3 Security & Availability, PCI DSS Level 1, and many more. This means customers benefit from all the best practices of AWS policies, architecture, and operational processes built to satisfy the needs of even the most security-sensitive customers.
AWS is certified under the EU-US Privacy Shield, and the AWS Data Processing Addendum (DPA) is GDPR-ready and available now to all AWS customers to help them prepare for May 25, 2018 when the GDPR becomes enforceable. The current AWS DPA, as well as the AWS GDPR DPA, allows customers to transfer personal data to countries outside the European Economic Area (EEA) in compliance with European Union (EU) data protection laws. AWS also adheres to the Cloud Infrastructure Service Providers in Europe (CISPE) Code of Conduct. The CISPE Code of Conduct helps customers ensure that AWS is using appropriate data protection standards to protect their data, consistent with the GDPR. In addition, AWS offers a wide range of services and features to help customers meet the requirements of the GDPR, including services for access controls, monitoring, logging, and encryption.
From Our Customers Many AWS customers are preparing to use this new Region. Here’s a small sample:
Societe Generale, one of the largest banks in France and the world, has accelerated their digital transformation while working with AWS. They developed SG Research, an application that makes reports from Societe Generale’s analysts available to corporate customers in order to improve the decision-making process for investments. The new AWS Region will reduce latency between applications running in the cloud and in their French data centers.
SNCF is the national railway company of France. Their mobile app, powered by AWS, delivers real-time traffic information to 14 million riders. Extreme weather, traffic events, holidays, and engineering works can cause usage to peak at hundreds of thousands of users per second. They are planning to use machine learning and big data to add predictive features to the app.
Radio France, the French public radio broadcaster, offers seven national networks, and uses AWS to accelerate its innovation and stay competitive.
Les Restos du Coeur is a French charity that provides assistance to the needy, delivering food packages and helping with their social and economic integration back into French society. Les Restos du Coeur is using AWS for its CRM system to track the assistance given to each of their beneficiaries and the impact this assistance is having on their lives.
AlloResto by JustEat (a leader in the French FoodTech industry) is using AWS to scale during traffic peaks and to accelerate their innovation process.
AWS Consulting and Technology Partners We are already working with a wide variety of consulting, technology, managed service, and Direct Connect partners in France. Here’s a partial list:
AWS in France We have been investing in Europe, with a focus on France, for the last 11 years. We have also been developing documentation and training programs to help our customers to improve their skills and to accelerate their journey to the AWS Cloud.
As part of our commitment to AWS customers in France, we plan to train more than 25,000 people in the coming years, helping them develop highly sought after cloud skills. They will have access to AWS training resources in France via AWS Academy, AWSome days, AWS Educate, and webinars, all delivered in French by AWS Technical Trainers and AWS Certified Trainers.
Use it Today The EU (Paris) Region is open for business now and you can start using it today!
At last year’s AWS re:Invent we launched AWS OpsWorks for Chef Automate, which enabled customers to get their own Chef Automate server, managed by AWS. Building on customer feedback, we’re excited to bring Puppet Enterprise to OpsWorks today.
Puppet Enterprise allows you to automate provisioning, configuring, and managing instances through a puppet-agent deployed on each managed node. You can define a configuration once and apply it to thousands of nodes with automatic rollback and drift detection. AWS OpsWorks for Puppet Enterprise eliminates the need to maintain your own Puppet masters while working seamlessly with your existing Puppet manifests.
OpsWorks for Puppet Enterprise will manage the Puppet master server for you and take care of operational tasks like installation, upgrades, and backups. It also simplifies node registration and offers a useful starter kit for bootstrapping your nodes. More details below.
Creating a Managed Puppet Master
Creating a Puppet master in OpsWorks is simple. First navigate to the OpsWorks console Puppet section and click “Create Puppet Enterprise Server”.
On this first part of the setup you configure the region and EC2 instance type for your Puppet master. A c4.large can support up to 450 nodes while a c4.2xlarge can support 1600+ nodes. Your Puppet Enterprise server will be provisioned with the newest version of Amazon Linux (2017.09) and the most current version of Puppet Enterprise (2017.3.2).
On the next screen of the setup you can optionally configure an SSH key to connect to your Puppet master. This is useful if you’ll be making any major customizations, but it’s a good general practice to interact with Puppet through the client tools rather than directly on the instance itself.
Also on this page, you can set up an r10k repo to pull dynamic configurations.
In the advanced settings page you can select the usual deployment options around VPCs, security groups, IAM roles, and instance profiles. If you choose to have OpsWorks create the instance security group for you, do note that it will be open by default so it’s important to restrict access to this later.
Two components to pay attention to on this page are the maintenance window and backup configurations. When new minor versions of Puppet software become available, system maintenance is designed to update the minor version of Puppet Enterprise on your Puppet master automatically, as soon as it passes AWS testing. AWS performs extensive testing to verify that Puppet upgrades are production-ready and will deploy without disrupting existing customer environments. Automated backups allow you to store durable backups of your Puppet master in S3 and to restore from those backups at anytime. You can adjust the backup frequency and retention based on your business needs.
Using AWS OpsWorks for Puppet Enterprise
While your Puppet master is provisioning there are two helpful information boxes provided in the console.
You can download your sign-in credentials as well as sample userdata for installing the puppet-agent onto your Windows and Linux nodes. An important note here is that you’re able to manage your on-premises nodes as well, provided they have connectivity to your Puppet master.
Once your Puppet master is fully provisioned you can access the Puppet Enterprise http console and use Puppet as you normally would.
AWS OpsWorks for Puppet Enterprise is priced in Node Hours for your managed nodes. Prices start at $0.017 per node hour and decrease with the volume of nodes – you can see the full pricing page here. You’ll also pay for the underlying resources required to run your Puppet master. At launch, AWS OpsWorks for Puppet Enterprise is available in the US East (N. Virginia), US West (Oregon), and EU (Ireland) Regions. Of course, everything you’ve seen in the console can also be accomplished through the AWS SDKs and CLI. You can get more information in the Getting Started Guide.
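For example, here is a rough sketch of creating a Puppet Enterprise server with the AWS SDK for Python (Boto3). The server name, instance type, IAM ARNs, and windows shown are placeholders rather than recommended values, so adjust them for your own account.

import boto3

# OpsWorks for Puppet Enterprise (like Chef Automate) is exposed through the
# "opsworkscm" API. Names, ARNs, and windows below are illustrative placeholders.
opsworkscm = boto3.client('opsworkscm', region_name='us-west-2')

response = opsworkscm.create_server(
    Engine='Puppet',
    EngineModel='Monolithic',
    ServerName='my-puppet-master',                     # placeholder server name
    InstanceType='c4.large',                           # supports up to ~450 nodes
    InstanceProfileArn='arn:aws:iam::123456789012:instance-profile/aws-opsworks-cm-ec2-role',
    ServiceRoleArn='arn:aws:iam::123456789012:role/aws-opsworks-cm-service-role',
    BackupRetentionCount=5,                            # keep the five most recent backups
    PreferredMaintenanceWindow='Mon:08:00',            # weekly maintenance window (UTC)
    PreferredBackupWindow='Sun:04:00',
)
print(response['Server']['Status'])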
The original AWS Price List API, as described in New – AWS Price List API, gave you access to prices in JSON and CSV form by way of structured URLs. While this worked well for some types of cost management tools, the size and complexity of the files made them difficult to download and tedious to parse. Today we are updating the API by adding new functions that allow you to perform fine-grained price queries that return only the prices that you need. This will allow you to make use of the prices in mobile and browser-based applications.
New Functions Here are the new functions:
DescribeServices – Returns sets of attribute keys that are used to define products within a service. For example, the keys returned for EC2 will include physicalProcessor, memory, operatingSystem, location, and tenancy.
GetAttributeValues – Returns all of the allowable values for a given attribute key. For example, values for the operatingSystem key include Windows, RHEL, Linux, and SUSE; values for the location key include US East (N. Virginia) and Asia Pacific (Mumbai).
GetProducts – Returns all of the products, along with their public prices, that match a filter expression based on service name and attribute value.
You can access these functions from the AWS SDKs. In order to try them out, I used Python and the AWS SDK for Python (Boto3). I start by importing the SDK and creating a client:
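A minimal version of that setup, using Boto3, looks like this. The Price List query endpoints live in the US East (N. Virginia) and Asia Pacific (Mumbai) Regions, so the client is pointed at us-east-1:

import boto3

# The Price List query API is served from the us-east-1 and ap-south-1 endpoints
# (see the availability note at the end of this post).
pricing = boto3.client('pricing', region_name='us-east-1')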
Here’s how I get all of the values for all of EC2’s pricing attributes:
print("Selected EC2 Attributes & Values")
response = pricing.describe_services(ServiceCode='AmazonEC2')
attrs = response['Services']['AttributeNames']
for attr in attrs:
response = pricing.get_attribute_values(ServiceCode='AmazonEC2', AttributeName=attr)
values = 
for attr_value in response['AttributeValues']:
print(" " + attr + ": " + ", ".join(values))
The output starts like this:
Selected EC2 Attributes & Values
volumeType: Throughput Optimized HDD, Provisioned IOPS, Magnetic, General Purpose, Cold HDD
maxIopsvolume: 500 - based on 1 MiB I/O size, 40 - 200, 250 - based on 1 MiB I/O size, 20000, 10000
locationType: AWS Region
instanceFamily: Storage optimized, Micro instances, Memory optimized, GPU instance, General purpose, Compute optimized
operatingSystem: Windows, SUSE, RHEL, NA, Linux
And here’s how I use the service name and attribute values to obtain price listings for EC2 instances with 64 vCPUs, 256 GiB of memory, pre-installed SQL Server Enterprise, in the Asia Pacific (Mumbai) Region. Each price is a JSON string:
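Here’s a rough sketch of that query using Boto3. The filter fields and values shown are illustrative assumptions; the exact strings must match values returned by GetAttributeValues, so verify them before relying on the results:

import json

# Filter fields ('vcpu', 'memory', 'preInstalledSw', 'location') and their values
# follow the attribute names and values returned by the calls above.
response = pricing.get_products(
    ServiceCode='AmazonEC2',
    Filters=[
        {'Type': 'TERM_MATCH', 'Field': 'vcpu',           'Value': '64'},
        {'Type': 'TERM_MATCH', 'Field': 'memory',         'Value': '256 GiB'},
        # 'SQL Ent' is assumed here as the value for SQL Server Enterprise;
        # confirm it with GetAttributeValues for the preInstalledSw attribute.
        {'Type': 'TERM_MATCH', 'Field': 'preInstalledSw', 'Value': 'SQL Ent'},
        {'Type': 'TERM_MATCH', 'Field': 'location',       'Value': 'Asia Pacific (Mumbai)'},
    ]
)

for price_str in response['PriceList']:
    price = json.loads(price_str)      # each entry is returned as a JSON string
    print(price['product']['attributes']['instanceType'])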
Now Available The new functions are available now and you can start using them today in the US East (Northern Virginia) and Asia Pacific (Mumbai) Regions to access metadata and price listings for all public AWS Regions and AWS GovCloud (US), at no charge.
This is a guest post from Paul Duvall, CTO of Stelligent, a division of HOSTING.
I co-founded Stelligent, a technology services company that provides DevOps Automation on AWS, as a result of my own frustration in implementing all the “behind the scenes” infrastructure (including builds, tests, deployments, etc.) on the software projects I was developing. At Stelligent, we have worked with numerous customers looking to get software delivered to users quicker and with greater confidence. This sounds simple, but it often consists of properly configuring and integrating myriad tools including, but not limited to, version control, build, static analysis, testing, security, deployment, and software release orchestration. What some might not realize is that there’s a new breed of build, deploy, test, and release tools that help reduce much of the undifferentiated heavy lifting of deploying and releasing software to users.
I’ve been using AWS since 2009 and I, along with many at Stelligent, have worked with the AWS Service Teams as part of the AWS Developer Tools betas that are now generally available (including AWS CodePipeline, AWS CodeCommit, AWS CodeBuild, and AWS CodeDeploy). I’ve combined the experience we’ve had with customers with this specialized knowledge of the AWS Developer and Management Tools to provide a unique course that shows multiple ways to use these services to deliver software to users quicker and with confidence.
In DevOps Essentials on AWS, you’ll learn how to accelerate software delivery and speed up feedback loops by learning how to use AWS Developer Tools to automate infrastructure and deployment pipelines for applications running on AWS. The course demonstrates solutions for various DevOps use cases for Amazon EC2, AWS OpsWorks, AWS Elastic Beanstalk, AWS Lambda (Serverless), and Amazon ECS (Containers), while defining infrastructure as code and learning more about AWS Developer Tools including AWS CodeStar, AWS CodeCommit, AWS CodeBuild, AWS CodePipeline, and AWS CodeDeploy.
In this course, you see me use the AWS Developer and Management Tools to create comprehensive continuous delivery solutions for a sample application using many types of AWS service platforms. You can run the exact same sample and/or fork the GitHub repository (https://github.com/stelligent/devops-essentials) and extend or modify the solutions. I’m excited to share how you can use AWS Developer Tools to create these solutions for your customers as well. There’s also an accompanying website for the course (http://www.devopsessentialsaws.com/) that I use in the video to walk through the course examples which link to resources located in GitHub or Amazon S3. In this course, you will learn how to:
Use AWS Developer and Management Tools to create a full-lifecycle software delivery solution
Use AWS CloudFormation to automate the provisioning of all AWS resources
Use AWS CodePipeline to orchestrate the deployments of all applications
Use AWS CodeCommit while deploying an application onto EC2 instances using AWS CodeBuild and AWS CodeDeploy
Deploy applications using AWS OpsWorks and AWS Elastic Beanstalk
Deploy an application using Amazon EC2 Container Service (ECS) along with AWS CloudFormation
Deploy serverless applications that use AWS Lambda and API Gateway
Integrate all AWS Developer Tools into an end-to-end solution with AWS CodeStar
To learn more, see DevOps Essentials on AWS video course on Udemy. For a limited time, you can enroll in this course for $40 and save 80%, a $160 saving. Simply use the code AWSDEV17.
Stelligent, an AWS Partner Network Advanced Consulting Partner, holds the AWS DevOps Competency and over 100 AWS technical certifications. To stay updated on DevOps best practices, visit www.stelligent.com.
Spring has officially sprung. As you enjoy the blossoming of May flowers, it may be worth noting some of the great tech talks blossoming online during the month of May. This month’s AWS Online Tech Talks features sessions on topics like AI, DevOps, Data, and Serverless, just to name a few.
May 2017 – Schedule
Below is the upcoming schedule for the live, online technical sessions scheduled for the month of May. Make sure to register ahead of time so you won’t miss out on these free talks conducted by AWS subject matter experts. All schedule times for the online tech talks are shown in the Pacific Time (PDT) time zone.
The AWS Online Tech Talks series covers a broad range of topics at varying technical levels. These sessions feature live demonstrations & customer examples led by AWS engineers and Solution Architects. Check out the AWS YouTube channel for more on-demand webinars on AWS technologies.
The AWS Blog collection has grown over the past couple of years. As you can see from the list on the right, we now have blogs that cover a wide variety of topics and development tools. We also have blogs that are designed for those of you who read languages other than English!
The AWS Management Tools Blog is the newest member of the collection. This blog focuses on AWS tools that help you to provision, configure, monitor, track, audit, and manage the costs of your AWS and on-premises resources at scale. Topics planned for the blog include deep technical coverage of feature updates, tips and tricks, sample apps, CloudFormation templates, and an on-going discussion of use cases. Here are some of the initial posts:
AWS OpsWorks Stacks now supports Amazon CloudWatch Logs. This benefits all users who want to stream their log files from OpsWorks instances to CloudWatch, and it enables you to take advantage of CloudWatch Logs features such as centralized log archival, real-time monitoring of log data, and CloudWatch alarms. Until now, OpsWorks customers had to manually install and configure the CloudWatch Logs agent on every instance they wanted to ship log data from. Now, with the built-in CloudWatch Logs integration, these features are available with just a few clicks across all instances in a layer.
The OpsWorks CloudWatch logs integration is compatible with all Chef 11 and Chef 12 Linux stacks. To get started, simply click the new CloudWatch Logs tab in your layer settings screen. OpsWorks agent command logs, which include Chef recipe output, are enabled by default; to enable logging for custom log files, simply input an absolute path or file pattern for every log file you want shipped.
It’s that simple. OpsWorks creates log groups and log streams for you, and starts streaming your system and application logs within a few minutes.
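If you manage layers programmatically, the same configuration can be expressed through the OpsWorks API. Here is a rough Boto3 sketch; the layer ID, log group name, and file path are placeholders:

import boto3

opsworks = boto3.client('opsworks', region_name='us-east-1')

# Layer ID, log group name, and file path below are illustrative placeholders.
opsworks.update_layer(
    LayerId='11111111-2222-3333-4444-555555555555',
    CloudWatchLogsConfiguration={
        'Enabled': True,
        'LogStreams': [
            {
                'LogGroupName': 'my-stack/my-layer/nginx-access',  # hypothetical log group
                'File': '/var/log/nginx/access.log',               # absolute path or pattern
            },
        ],
    },
)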
Here to tell you about using Automation to streamline AMI maintenance and patching is Taylor Anderson, a Senior Product Manager with EC2.
Last December at re:Invent, we launched Amazon EC2 Systems Manager, which helps you automatically collect software inventory, apply OS patches, create system images, and configure Windows and Linux operating systems. These capabilities enable automated configuration and ongoing management of systems at scale, and help maintain software compliance for instances running in Amazon EC2 or on-premises.
One feature within Systems Manager is Automation, which can be used to patch, update agents, or bake applications into an Amazon Machine Image (AMI). With Automation, you can avoid the time and effort associated with manual image updates, and instead build AMIs through a streamlined, repeatable, and auditable process.
Recently, we released the first public document for Automation: AWS-UpdateLinuxAmi. This document allows you to automate patching of Ubuntu, CentOS, RHEL, and Amazon Linux AMIs, as well as automating the installation of additional site-specific packages and configurations.
More importantly, it makes it easy to get started with Automation, eliminating the need to first write an Automation document. AWS-UpdateLinuxAmi can also be used as a template when building your own Automation workflow. Windows users can expect the equivalent document―AWS-UpdateWindowsAmi―in the coming weeks.
AWS-UpdateLinuxAmi automates the following workflow:
Launch a temporary EC2 instance from a source Linux AMI.
Update the instance.
Invoke a user-provided, pre-update hook script on the instance (optional).
Update any AWS tools and agents on the instance, if present.
Update the instance’s distribution packages using the native package manager.
Invoke a user-provided post-update hook script on the instance (optional).
Stop the temporary instance.
Create a new AMI from the stopped instance.
Terminate the instance.
Warning: Creation of an AMI from a running instance carries a risk that credentials, secrets, or other confidential information from that instance may be recorded to the new image. Use caution when managing AMIs created by this process.
Configuring roles and permissions for Automation
If you haven’t used Automation before, you need to configure IAM roles and permissions. This includes creating a service role for Automation, assigning a passrole to authorize a user to provide the service role, and creating an instance role to enable instance management under Systems Manager. For more details, see Configuring Access to Automation.
In the EC2 console, choose Systems Manager Services, Automations.
Choose Run automation document.
Expand Document name and choose AWS-UpdateLinuxAmi.
Choose the latest document version.
For SourceAmiId, enter the ID of the Linux AMI to update.
For InstanceIamRole, enter the name of the instance role you created enabling Systems Manager to manage an instance (that is, it includes the AmazonEC2RoleforSSM managed policy). For more details, see Configuring Access to Automation.
Monitor progress in the Automation Steps tab, and view step-level outputs.
After execution is complete, choose Description to view any outputs returned by the workflow. In this example, AWS-UpdateLinuxAmi returns the new AMI ID.
Next, choose Images, AMIs to view your new AMI.
There is no additional charge to use the Automation service, and any resources created by a workflow incur nominal charges. Note that if you terminate AWS-UpdateLinuxAmi before reaching the “Terminate Instance” step, be sure to shut down the temporary instance created by the workflow.
Now that you’ve successfully run AWS-UpdateLinuxAmi, you may want to create default values for the service and instance roles. You can customize your workflow by creating your own Automation document based on AWS-UpdateLinuxAmi. For more details, see Create an Automation Document. After you’ve created your document, you can write additional steps and add them to the workflow.
Example steps include:
Updating an Auto Scaling group with the new AMI ID (aws:invokeLambdaFunction action type)
Creating an encrypted copy of your new AMI (aws:encryptedCopy action type)
Validating your new AMI using Run Command with the RunPowerShellScript document (aws:runCommand action type)
Automation also makes a great addition to a CI/CD pipeline for application bake-in, and can be invoked as a CLI build step in Jenkins. For details on these examples, be sure to check out the Automation technical documentation. For updates on Automation, Amazon EC2 Systems Manager, Amazon CloudFormation, AWS Config, AWS OpsWorks and other management services, be sure to follow the all-new Management Tools blog.
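For reference, here is a rough sketch of starting the AWS-UpdateLinuxAmi workflow from the AWS SDK for Python (Boto3) rather than the console; the AMI ID and role names are placeholders you would replace with the roles created earlier:

import boto3

ssm = boto3.client('ssm', region_name='us-east-1')

# The AMI ID and role names below are placeholders. InstanceIamRole is the
# instance role created earlier; AutomationAssumeRole is the Automation service role.
execution = ssm.start_automation_execution(
    DocumentName='AWS-UpdateLinuxAmi',
    Parameters={
        'SourceAmiId': ['ami-0123456789abcdef0'],
        'InstanceIamRole': ['ManagedInstanceProfileRole'],
        'AutomationAssumeRole': ['arn:aws:iam::123456789012:role/AutomationServiceRole'],
    },
)
print(execution['AutomationExecutionId'])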
Thanks to my colleague John Pignata for a great blog on how to create a continuous deployment pipeline to Amazon ECS.
Delivering new iterations of software at a high velocity is a competitive advantage in today’s business environment. The speed at which organizations can deliver innovations to customers and adapt to changing markets is increasingly a pivotal attribute that can make the difference between success and failure.
AWS provides a set of flexible services designed to enable organizations to embrace the combination of cultural philosophies, practices, and tools called DevOps that increases an organization’s ability to deliver applications and services at high velocity.
Agility is often cited as a key advantage of cloud computing over the traditional delivery of IT resources. Instead of waiting weeks or months for other departments to provision a new server, developers can create new instances with a click or API call and start using it within minutes. This newfound speed and autonomy frees developers to experiment and deliver new products and features to their customers as quickly as possible.
On top of the cloud, teams are embracing DevOps practices in order to achieve a faster time-to-market, better code quality, and more reliable releases of their products and services. Continuous deployment is a DevOps practice in which new software revisions are automatically built, tested, packaged, and released to production.
Continuous deployment enables developers to ship features and fixes through an entirely automated software release process. Instead of batching up large releases over a period of weeks or months and conducting deployments manually, developers can use automation to deliver versions of their applications many times a day as new software revisions are ready for users. In the same way cloud computing abbreviates the delivery time of resources, continuous deployment reduces the release cycle of new software to your users from weeks or months to minutes.
Embracing this speed and agility has many benefits including:
New features and bug fixes are released to users quickly; code sitting in a source code repository does not deliver business value or benefit your customers. By releasing new software revisions as close to immediately as possible, customers start benefiting from your work more quickly and teams can get more focused feedback.
Change sets are smaller; large change sets create challenges in pinpointing root causes of issues, bugs, and other regressions. By releasing smaller change sets more frequently, teams can more easily attribute and correct introduced issues.
Automated deployment encourages best practices; as any change committed to your source code repository can be deployed immediately via automation, teams have to ensure that changes are well-tested and that their production environments are closely monitored.
How does continuous deployment work?
Continuous deployment is conducted by an automated pipeline that coordinates the activities related to software release and provides visibility into the process. During the process, a releasable artifact is built, tested, packaged, and deployed into a production environment. The releasable artifact might be an executable file, a package of script files, a container, or some other component that ultimately must be delivered to production.
AWS CodePipeline is a continuous delivery and deployment service that coordinates the building, testing, and deployment of your code each time there is a new software revision. CodePipeline provides visible, central orchestration for taking a code change and moving it through a workflow and ultimately into the hands of your users. The pipeline defines stages to retrieve code from a source code repository, build the source code into a releasable artifact, test that artifact, and deliver it to production while ensuring that these stages happen in order and are halted if a failure occurs.
While CodePipeline powers the delivery pipeline and orchestrates the process, it does not have facilities for building or testing the software itself. For these stages, CodePipeline integrates with several other tools, including AWS CodeBuild, which is a fully managed build service. CodeBuild compiles source code, runs tests, and produces software packages that are ready to deploy. That makes it ideal for the build and test stages of a continuous deployment pipeline. Out of the box, CodeBuild has native support for many different kinds of build environments, including building Docker containers.
Containers are a powerful mechanism for software delivery, as they allow for a predictable and reproducible environment and provide a high level of confidence that changes tested in one environment can be successfully deployed. AWS provides several services to run and manage Docker container images. Amazon ECS is a highly scalable and high performance container management service that allows you to run applications on a cluster of Amazon EC2 instances. Amazon ECR is a fully managed Docker container registry that makes it easy for developers to store, manage, and deploy Docker container images.
Finally, CodePipeline integrates with several services to facilitate deployment, including AWS Elastic Beanstalk, AWS CodeDeploy, AWS OpsWorks, and your own custom deployment code or process using AWS Lambda or AWS CloudFormation. These deployment actions can be used to power the final step in your pipeline to push the newly built changes live onto your production environment.
Continuous deployment to Amazon ECS
Here’s a reference architecture that puts these components together to deliver a continuous deployment pipeline of Docker applications onto ECS:
This architecture demonstrates how to deploy containers onto ECS and ECR using CodePipeline to build a fully automated continuous deployment pipeline on top of AWS. This approach to continuous deployment is entirely serverless and uses managed services for the orchestration, build, and deployment of your software.
The pipeline created in the reference architecture looks like the following:
In this post, I discuss each stage in this reference architecture. What happens when a developer changes some copy on a landing page and pushes that change into the source code repository?
First, in the Source stage, the pipeline is configured with details for accessing a source code repository system. In the reference architecture, you have a sample application hosted in a GitHub repository. CodePipeline polls this repository and initiates a new pipeline execution for each new commit. In addition to GitHub, CodePipeline also supports source locations such as a Git repository in AWS CodeCommit or a versioned object stored in Amazon S3. Each new build is retrieved from the source code repository, packaged as a zip file, stored on S3, and sent to the next stage of the pipeline.
The Source stage also defines a template artifact stored on Amazon S3. This is the template that defines the deployment environment used by the deployment stage after a successful build of the application.
The Build stage uses CodeBuild to create a new Docker container image based upon the latest source code and pushes it to an ECR repository. CodePipeline also integrates with a number of third-party build systems, such as Jenkins, CloudBees, Solano CI, and TeamCity.
Finally, the Deploy stage uses CloudFormation to create a new task definition revision that points to the newly built Docker container image and updates the ECS service to use the new task definition revision. After this is done, ECS initiates a deployment by fetching the new Docker container from ECR and restarting the service.
After all of the pipeline’s stages are green, you can reload the application in a web browser and see the developer’s copy changes live in production. This happened automatically without any human intervention.
This pipeline is now in production, listening for new code in the source code repository, and ready to ship any future changes that your team pushes into production. It’s also extensible, meaning that new stages can be added to include additional steps. For example, you could include a test stage to execute unit and acceptance tests to ensure the new code revision is safe to deploy to production. After it’s deployed, a notification step could be added to alert your team via email or a Slack channel that a new version is live, along with the details about the change set deployed to production.
We’re excited to see what kinds of applications you can deliver to your users using this approach and how it affects your product development processes. The cloud unlocks massive advantages in agility, and the ability to implement techniques like continuous deployment unlocks a significant competitive advantage.
AWS OpsWorks helps you to configure and run applications using Chef. You use a Domain Specific Language (DSL) to write cookbooks that define your application’s architecture and the configuration of each component. The Chef server is an essential part of the configuration process. It stores all of the cookbooks and tracks state information for each of the instances (nodes in Chef terminology).
Because the Chef server is in the critical path when newly launched instances are configured, it must be reliable. Many OpsWorks and Chef users install and maintain this important architectural component themselves. In production-scale environments, this leaves them to handle backups, restores, version upgrades, and so forth.
You can use Chef Automate to manage your infrastructure throughout the life-cycle of your application’s infrastructure. For example, newly launched EC2 instances can automatically connect to the Chef server and run a specified recipe by using an unattended association script (read Adding Nodes Automatically in AWS OpsWorks for Chef Automate to learn more). The registration script can be used to register EC2 instances created dynamically through an Auto Scaling Group and to register on-premises servers.
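The association step itself is an OpsWorks CM API call (AssociateNode). As a rough Python sketch of that call, with a placeholder node name and public key, and using the server name that appears later in this post:

import boto3

opsworkscm = boto3.client('opsworkscm', region_name='us-east-1')

# Node name and key below are placeholders; the node's client key is normally
# generated on the instance before the association call is made.
opsworkscm.associate_node(
    ServerName='BorkBorkBork',
    NodeName='my-node-01',
    EngineAttributes=[
        {'Name': 'CHEF_ORGANIZATION', 'Value': 'default'},
        {'Name': 'CHEF_NODE_PUBLIC_KEY', 'Value': '-----BEGIN PUBLIC KEY-----\n...'},
    ],
)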
Take a Look Let’s launch a Chef Automate server from the OpsWorks Console. Click on Go to OpsWorks for Chef Automate to get started.
Click on Create Chef Automate server, give your server a name, choose a region, and select a suitable EC2 instance type:
Choose one of your SSH key pairs, or opt out of SSH:
Finally, configure your network (VPC), IAM, maintenance window, and backup settings:
Click on Next, review your settings, and then click on Launch! The launch process takes less than 20 minutes. During that time you can download the sign-in credentials for your Chef Automate dashboard along with a Starter Kit:
You can see all of your Chef Automate servers at a glance:
Click on the server name (BorkBorkBork here), and then on Open Chef Automate dashboard, then enter your credentials to log in:
And here’s the dashboard:
You can see and manage your nodes:
Manage your workflows:
And much more!
Behind the scenes, the launch process invokes an AWS CloudFormation template. The template creates an EC2 instance, an Elastic IP Address, and a Security Group.
Available Now You can launch AWS OpsWorks for Chef Automate today in the US East (Northern Virginia), US West (Oregon), and EU (Ireland) Regions. Pricing is based on the number of nodes and the number of hours that they are connected to the server; see the Chef Automate Pricing page for more info. As part of the AWS Free Tier, you can use up to 10 nodes at no charge for 12 months.
AWS OpsWorks Stacks is a configuration management service that helps you configure and operate applications of all shapes and sizes using Chef. You can define the application’s architecture and the specification of each component, including package installation, software configuration, and more.
Amazon EC2 Spot instances allow you to bid on spare Amazon EC2 computing capacity. Because Spot instances are often available at a discount compared to On-Demand instances, you can significantly reduce the cost of running your applications, grow your applications’ compute capacity and throughput for the same budget, and enable new types of cloud computing applications.
You can use Spot instances with AWS OpsWorks Stacks in the following ways:
As a part of an Auto Scaling group, as described in this blog post. You can follow the steps in the blog post and choose the Spot instance option in the launch configuration described in step 5.
To provision a Spot instance in the EC2 console and have it automatically register with an OpsWorks stack as described here.
The example used in this post will require you to create the following resources:
IAM instance profile: an IAM profile that grants your instances permission to register themselves with OpsWorks.
Lambda function: a function that deregisters your instances from an OpsWorks stack.
Spot instance: the Spot instance that will run your application.
CloudWatch Events rule: a rule that will trigger the Lambda function whenever your Spot instance is terminated.
Step 1: Create an IAM instance profile
When a Spot instance starts, it must be able to make an API call to register itself with an OpsWorks stack. By assigning an IAM instance profile to the instance, you allow it to make calls to OpsWorks.
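If you prefer to script this step, here is a rough Boto3 sketch of creating the role and instance profile; the role name is a placeholder, and any additional inline permissions that your registration flow needs would still have to be attached separately:

import boto3
import json

iam = boto3.client('iam')
role_name = 'opsworks-spot-register-role'   # placeholder role name

# Trust policy that lets EC2 instances assume the role.
iam.create_role(
    RoleName=role_name,
    AssumeRolePolicyDocument=json.dumps({
        'Version': '2012-10-17',
        'Statement': [{
            'Effect': 'Allow',
            'Principal': {'Service': 'ec2.amazonaws.com'},
            'Action': 'sts:AssumeRole',
        }],
    }),
)

# Managed policy that allows an instance to register itself with OpsWorks.
iam.attach_role_policy(
    RoleName=role_name,
    PolicyArn='arn:aws:iam::aws:policy/AWSOpsWorksInstanceRegistration',
)

# An instance profile wraps the role so it can be attached to the Spot instance.
iam.create_instance_profile(InstanceProfileName=role_name)
iam.add_role_to_instance_profile(InstanceProfileName=role_name, RoleName=role_name)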
Open the IAM console at https://console.aws.amazon.com/iam/, choose Roles, and then choose Create New Role. Type a name for the role, and then choose Next Step. Choose the Amazon EC2 role, and then select the check box next to the AWSOpsWorksInstanceRegistration policy. Finally, select Next Step, and then choose Create Role. As its name suggests, the AWSOpsWorksInstanceRegistration policy will allow the instance to make API calls only to register an instance. Copy the following policy to the new role you’ve just created.
Step 2: Create a Lambda function
This Lambda function is responsible for deregistering an instance from your OpsWorks stack. It will be invoked whenever the Spot instance is terminated.
Open the AWS Lambda console at https://us-west-2.console.aws.amazon.com/lambda/home, and choose the option to create a Lambda function. If you are prompted to choose a blueprint, choose Skip. Type a name for the Lambda function, and from the Runtime drop-down list, select Python 2.7.
Next, paste the following code into the Lambda Function Code text box:
import boto3

def lambda_handler(event, context):
    # The CloudWatch event carries the ID of the EC2 instance that was terminated
    ec2_instance_id = event['detail']['instance-id']
    ec2 = boto3.client('ec2')
    # Find the OpsWorks stack ID that was tagged onto the instance at registration time
    for tag in ec2.describe_instances(InstanceIds=[ec2_instance_id])['Reservations'][0]['Instances'][0]['Tags']:
        if tag['Key'] == 'opsworks_stack_id':
            opsworks_stack_id = tag['Value']
            opsworks = boto3.client('opsworks', region_name='us-east-1')
            # Locate the matching OpsWorks instance and deregister it from the stack
            for instance in opsworks.describe_instances(StackId=opsworks_stack_id)['Instances']:
                if instance.get('Ec2InstanceId') == ec2_instance_id:
                    print("Deregistering OpsWorks instance " + instance['InstanceId'])
                    opsworks.deregister_instance(InstanceId=instance['InstanceId'])
The result should look like this:
Step 3: Create a CloudWatch event
Whenever the Spot instance is terminated, we need to trigger the Lambda function from step 2 to deregister the instance from its associated stack.
Open the AWS CloudWatch console at https://console.aws.amazon.com/cloudwatch/home, choose Events, and then choose the Create rule button. Choose Amazon EC2 from the event selector. Select Specific state(s), and choose Terminated. Select the Lambda function you created earlier as the target. Finally, choose the Configure details button.
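If you want to script this rule instead of using the console, a rough Boto3 sketch might look like the following; the rule name, function name, and ARN are placeholders:

import boto3
import json

events = boto3.client('events', region_name='us-east-1')
lambda_client = boto3.client('lambda', region_name='us-east-1')

# Rule name, function name, and ARN below are placeholders.
rule = events.put_rule(
    Name='spot-instance-terminated',
    EventPattern=json.dumps({
        'source': ['aws.ec2'],
        'detail-type': ['EC2 Instance State-change Notification'],
        'detail': {'state': ['terminated']},
    }),
    State='ENABLED',
)

# Allow CloudWatch Events to invoke the deregistration function, then wire it up.
lambda_client.add_permission(
    FunctionName='deregister-opsworks-instance',
    StatementId='AllowCloudWatchEventsInvoke',
    Action='lambda:InvokeFunction',
    Principal='events.amazonaws.com',
    SourceArn=rule['RuleArn'],
)
events.put_targets(
    Rule='spot-instance-terminated',
    Targets=[{
        'Id': '1',
        'Arn': 'arn:aws:lambda:us-east-1:123456789012:function:deregister-opsworks-instance',
    }],
)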
Step 4: Create a Spot instance
Open the EC2 console at https://console.aws.amazon.com/ec2sp/v1/spot/home, and choose the Request Spot Instances button. Use the latest release of Amazon Linux. On the details page, under IAM instance profile, choose the instance profile you created in step 1. Paste the following script into the User data field:
This script will automatically register your Spot instance on boot with a corresponding OpsWorks stack and layer. Be sure to fill in the following fields:
It will take a few minutes for the instance to be provisioned and come online. You’ll see fulfilled displayed in the Status column and active displayed in the State column. After the instance and request are both in an active state, the instance should be fully booted and registered with your OpsWorks stack/layer.
You can also view the instance and its online state in the OpsWorks console under Spot Instance.
You can manually terminate a Spot instance from the OpsWorks service console. Simply choose the stop button and the Spot instance will be terminated and removed from your stack. Unlike an On-Demand Instance in OpsWorks, when a Spot instance is stopped, it cannot be restarted.
In case your Spot instance is terminated through other means (for example, in the EC2 console), a CloudWatch event will trigger the Lambda function, which will automatically deregister the instance from your OpsWorks stack.
You can now use OpsWorks Stacks to define your application’s architecture and software configuration while leveraging the attractive pricing of Spot instances.
January Webinars In our continued quest to provide you with training and education resources, I am pleased to share the webinars that we have set up for January. These are free, but they do fill up and you should definitely register ahead of time. All times are PT and each webinar runs for one hour:
Last week we launched our 15th AWS Region and today we are launching our 16th. We have expanded the AWS footprint into the United Kingdom with a new Region in London, our third in Europe. AWS customers can use the new London Region to better serve end-users in the United Kingdom and can also use it to store data in the UK.
From Our Customers Many AWS customers are getting ready to use this new Region. Here’s a very small sample:
Trainline is Europe’s number one independent rail ticket retailer. Every day more than 100,000 people travel using tickets bought from Trainline. Here’s what Mark Holt (CTO of Trainline) shared with us:
We recently completed the migration of 100 percent of our eCommerce infrastructure to AWS and have seen awesome results: improved security, 60 percent less downtime, significant cost savings and incredible improvements in agility. From extensive testing, we know that 0.3s of latency is worth more than 8 million pounds and so, while AWS connectivity is already blazingly fast, we expect that serving our UK customers from UK datacenters should lead to significant top-line benefits.
Kainos Evolve Electronic Medical Records (EMR) automates the creation, capture, and handling of medical case notes and operational documents and records, allowing healthcare providers to deliver better patient safety and quality of care for several leading NHS Foundation Trusts and market-leading healthcare technology companies.
Travis Perkins, the largest supplier of building materials in the UK, is implementing the biggest systems and business change in its history including the migration of its datacenters to AWS.
Just Eat is the world’s leading marketplace for online food delivery. Using AWS, JustEat has been able to experiment faster and reduce the time to roll out new feature updates.
OakNorth, a new bank focused on lending between £1m-£20m to entrepreneurs and growth businesses, became the UK’s first cloud-based bank in May after several months of working with AWS to drive the development forward with the regulator.
Partners I’m happy to report that we are already working with a wide variety of consulting, technology, managed service, and Direct Connect partners in the United Kingdom. Here’s a partial list:
AWS Premier Consulting Partners – Accenture, Claranet, Cloudreach, CSC, Datapipe, KCOM, Rackspace, and Slalom.
AWS Consulting Partners – Attenda, Contino, Deloitte, KPMG, LayerV, Lemongrass, Perfect Image, and Version 1.
AWS Managed Service Partners – Claranet, Cloudreach, KCOM, and Rackspace.
AWS Direct Connect Partners – AT&T, BT, Hutchison Global Communications, Level 3, Redcentric, and Vodafone.
Here are a few examples of what our partners are working on:
KCOM is a professional services provider offering consultancy, architecture, project delivery and managed service capabilities to large UK-based enterprise businesses. The scalability and flexibility of AWS gives them a significant competitive advantage with their enterprise and public sector customers. The new Region will allow KCOM to build innovative solutions for their public sector clients while meeting local regulatory requirements.
Splunk is a member of the AWS Partner Network and a market leader in analyzing machine data to deliver operational intelligence for security, IT, and the business. They use cloud computing and big data analytics to help their customers to embrace digital transformation and continuous innovation. The new Region will provide even more companies with real-time visibility into the operation of their systems and infrastructure.
Redcentric is an NHS Digital-approved N3 Commercial Aggregator. Their work allows health and care providers such as NHS acute, emergency and mental trusts, clinical commissioning groups (CCGs), and the ISV community to connect securely to AWS. The London Region will allow health and care providers to deliver new digital services and to improve outcomes for citizens and patients.
Compliance & Connectivity Every AWS Region is designed and built to meet rigorous compliance standards including ISO 27001, ISO 9001, ISO 27017, ISO 27018, SOC 1, SOC 2, SOC3, PCI DSS Level 1, and many more. Our Cloud Compliance page includes information about these standards, along with those that are specific to the UK, including Cyber Essentials Plus.
The UK Government recognizes that local datacenters from hyperscale public cloud providers can deliver secure solutions for OFFICIAL workloads. In order to meet the special security needs of public sector organizations in the UK with respect to OFFICIAL workloads, we have worked with our Direct Connect Partners to make sure that obligations for connectivity to the Public Services Network (PSN) and N3 can be met.
Use it Today The London Region is open for business now and you can start using it today! If you need additional information about this Region, please feel free to contact our UK team at [email protected].
Deploy an App to an AWS OpsWorks Layer Using AWS CodePipeline
AWS CodePipeline lets you create continuous delivery pipelines that automatically track code changes from sources such as AWS CodeCommit, Amazon S3, or GitHub. Now, you can use AWS CodePipeline as a code change-management solution for apps, Chef cookbooks, and recipes that you want to deploy with AWS OpsWorks.
This blog post demonstrates how you can create an automated pipeline for a simple Node.js app by using AWS CodePipeline and AWS OpsWorks. After you configure your pipeline, every time you update your Node.js app, AWS CodePipeline passes the updated version to AWS OpsWorks. AWS OpsWorks then deploys the updated app to your fleet of instances, leaving you to focus on improving your application. AWS makes sure that the latest version of your app is deployed.
Step 1: Upload app code to an Amazon S3 bucket
The Amazon S3 bucket must be in the same region in which you later create your pipeline in AWS CodePipeline. For now, AWS CodePipeline supports the AWS OpsWorks provider in the us-east-1 region only; all resources in this blog post should be created in the US East (N. Virginia) region. The bucket must also be versioned, because AWS CodePipeline requires a versioned source. For more information, see Using Versioning.
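As a quick reference, here is a minimal Boto3 sketch of creating a versioned bucket in that region; the bucket name is a placeholder:

import boto3

s3 = boto3.client('s3', region_name='us-east-1')
bucket = 'my-opsworks-pipeline-bucket'   # placeholder bucket name

# The bucket must live in US East (N. Virginia) for this walkthrough, and
# CodePipeline requires versioning on S3 source buckets.
s3.create_bucket(Bucket=bucket)
s3.put_bucket_versioning(
    Bucket=bucket,
    VersioningConfiguration={'Status': 'Enabled'},
)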
Choose the bucket that you created and upload the ZIP file that you saved in step 1.
In the Properties pane for the uploaded ZIP file, make a note of the S3 link to the file. You will need the bucket name and the ZIP file name portion of this link to create your pipeline.
Step 2: Create an AWS OpsWorks to Amazon EC2 service role
1. Go to the Identity and Access Management (IAM) service console, and choose Roles.
2. Choose Create Role, and name it aws-opsworks-ec2-role-with-s3.
3. In the AWS Service Roles section, choose Amazon EC2, and then choose the policy called AmazonS3ReadOnlyAccess.
4. The new role should appear in the Roles dashboard.
Step 3: Create an AWS OpsWorks Chef 12 Linux stack
To use AWS OpsWorks as a provider for a pipeline, you must first have an AWS OpsWorks stack, a layer, and at least one instance in the layer. As a reminder, the Amazon S3 bucket to which you uploaded your app must be in the same region in which you later create your AWS OpsWorks stack and pipeline, US East (N. Virginia).
1. In the OpsWorks console, choose Add Stack, and then choose a Chef 12 stack.
2. Set the stack’s name to CodePipeline Demo and make sure the Default operating system is set to Linux.
3. Enable Use custom Chef cookbooks.
4. For Repository type, choose HTTP Archive, and then use the following cookbook repository on S3: https://s3.amazonaws.com/opsworks-codepipeline-demo/opsworks-nodejs-demo-cookbook.zip. This repository contains a set of Chef cookbooks that include Chef recipes you’ll use to install the Node.js package and its dependencies on your instance. You will use these Chef recipes to deploy the Node.js app that you prepared in step 1.1.
Step 4: Create and configure an AWS OpsWorks layer
Now that you’ve created an AWS OpsWorks stack called CodePipeline Demo, you can create an OpsWorks layer.
1. Choose Layers, and then choose Add Layer in the AWS OpsWorks stack view.
2. Name the layer Node.js App Server. For Short Name, type app1, and then choose Add Layer.
3. After you create the layer, open the layer’s Recipes tab. In the Deploy lifecycle event, type nodejs_demo. Later, you will link this to a Chef recipe that is part of the Chef cookbook you referenced when you created the stack in step 3.4. This Chef recipe runs every time a new version of your application is deployed.
4. Now, open the Security tab, choose Edit, and choose AWS-OpsWorks-WebApp from the Security groups drop-down list. You will also need to set the EC2 Instance Profile to use the service role you created in step 2.2 (aws-opsworks-ec2-role-with-s3).
Step 5: Add your App to AWS OpsWorks
Now that your layer is configured, add the Node.js demo app to your AWS OpsWorks stack. When you create the pipeline, you’ll be required to reference this demo Node.js app.
Have the Amazon S3 bucket link from the step 1.4 ready. You will need the link to the bucket in which you stored your test app.
In AWS OpsWorks, open the stack you created (CodePipeline Demo), and in the navigation pane, choose Apps.
Choose Add App.
Provide a name for your demo app (for example, Node.js Demo App), and set the Repository type to an S3 Archive. Paste your S3 bucket link (s3://bucket-name/file name) from step 1.4.
Now that your app appears in the list on the Apps page, add an instance to your OpsWorks layer.
Step 6: Add an instance to your AWS OpsWorks layer
Before you create a pipeline in AWS CodePipeline, set up at least one instance within the layer you defined in step 4.
Open the stack that you created (CodePipeline Demo), and in the navigation pane, choose Instances.
Choose +Instance, and accept the default settings, including the hostname, size, and subnet. Choose Add Instance.
By default, the instance is in a stopped state. Choose start to start the instance.
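If you prefer to add and start the instance programmatically, a rough Boto3 equivalent might look like this; the stack ID, layer ID, and instance type are placeholders for the CodePipeline Demo stack and Node.js App Server layer created earlier:

import boto3

opsworks = boto3.client('opsworks', region_name='us-east-1')

# Stack and layer IDs below are placeholders; look them up in the OpsWorks console.
instance = opsworks.create_instance(
    StackId='11111111-2222-3333-4444-555555555555',
    LayerIds=['aaaaaaaa-bbbb-cccc-dddd-eeeeeeeeeeee'],
    InstanceType='t2.micro',
)
opsworks.start_instance(InstanceId=instance['InstanceId'])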
Step 7: Create a pipeline in AWS CodePipeline
Now that you have a stack and an app configured in AWS OpsWorks, create a pipeline with AWS OpsWorks as the provider to deploy your app to your specified layer. If you update your app or your Chef deployment recipes, the pipeline runs again automatically, triggering the deployment recipe to run and deploy your updated app.
This procedure creates a simple pipeline that includes only one Source and one Deploy stage. However, you can create more complex pipelines that use AWS OpsWorks as a provider.
To create a pipeline
Open the AWS CodePipeline console in the U.S. East (N. Virginia) region.
Choose Create pipeline.
On the Getting started with AWS CodePipeline page, type MyOpsWorksPipeline, or a pipeline name of your choice, and then choose Next step.
On the Source Location page, choose Amazon S3 from the Source provider drop-down list.
In the Amazon S3 details area, type the Amazon S3 bucket path to your application, in the format s3://bucket-name/file name. Refer to the link you noted in step 1.4. Choose Next step.
On the Build page, choose No Build from the drop-down list, and then choose Next step.
On the Deploy page, choose AWS OpsWorks as the deployment provider.
Specify the names of the stack, layer, and app that you created earlier, then choose Next step.
On the AWS Service Role page, choose Create Role. On the IAM console page that opens, you will see the role that will be created for you (AWS-CodePipeline-Service). From the Policy Name drop-down list, choose Create new policy. Be sure the policy document has the following content, and then choose Allow. For more information about the service role and its policy statement, see Attach or Edit a Policy for an IAM Service Role.
On the Review your pipeline page, confirm the choices shown on the page, and then choose Create pipeline.
The pipeline should now start deploying your app to your OpsWorks layer on its own. Wait for deployment to finish; you’ll know it’s finished when Succeeded is displayed in both the Source and Deploy stages.
Step 8: Verify the app deployment
To verify that AWS CodePipeline deployed the Node.js app to your layer, sign in to the instance you created in step 6. You should be able to see and use the Node.js web app.
On the AWS OpsWorks dashboard, choose the stack and the layer to which you just deployed your app.
In the navigation pane, choose Instances, and then choose the public IP address of your instance to view the web app. The running app will be displayed in a new browser tab.
To test the app, on the app’s web page, in the Leave a comment text box, type a comment, and then choose Send. The app adds your comment to the web page. You can add more comments to the page, if you like.
You now have a working and fully automated pipeline. As soon as you make changes to your application’s code and update the S3 bucket with the new version of your app, AWS CodePipeline automatically collects the artifact and uses AWS OpsWorks to deploy it to your instance, by running the OpsWorks deployment Chef recipe that you defined on your layer. The deployment recipe starts all of the operations on your instance that are required to support a new version of your artifact.
To ensure that your application operates in a predictable manner in both your test and production environments, you must vigilantly maintain the configuration of your resources. By leveraging configuration management solutions, Dev and Ops engineers can define the state of their resources across their entire lifecycle. In this session, we will show you how to use AWS OpsWorks, AWS CodeDeploy, and AWS CodePipeline to build a reliable and consistent development pipeline that assures your production workloads behave in a predictable manner.
Human Longevity, Inc. (HLI) is at the forefront of genomics research and wants to build the world’s largest database of human genomes along with related phenotype and clinical data, all in support of preventive healthcare. In today’s guest post, Yaron Turpaz, Bryan Coon, and Ashley Van Zeeland, talk about how they are using AWS to store the massive amount of data that is being generated as part of this effort to revolutionize medicine.
When Human Longevity, Inc. launched in 2013, our founders recognized the challenges that lay ahead. A genome contains all the information needed to build and maintain an organism; in humans, a copy of the entire genome, which contains more than three billion DNA base pairs, is contained in all cells that have a nucleus. Our goal is to sequence one million genomes and deliver that information—along with integrated health records and disease-risk models—to researchers and physicians. They, in turn, can interpret the data to provide targeted, personalized health plans and identify the optimal treatment for cancer and other serious health risks far earlier than has been possible in the past. The intent is to transform medicine by fostering preventive healthcare and risk prevention in place of the traditional “sick care” model, when people wind up seeing their doctors only after symptoms manifest.
Our work in developing and applying large-scale computing and machine learning to genomics research entails the collection, analysis, and storage of immense amounts of data from DNA-sequencing technology provided by companies like Illumina. Raw data from a single genome consumes about 100 gigabytes; that number increases as we align the genomic information with annotation and phenotype sources and analyze it for health insights.
From the beginning, we knew our choice of compute and storage technology would have a direct impact on the success of the company. Using the cloud was clearly the best option. We’re experts in genomics, and don’t want to spend resources building and maintaining an IT infrastructure. We chose to go all in on AWS for the breadth of the platform, the critical scalability we need, and the expertise AWS has developed in big data. We also saw that the pace of innovation at AWS—and its deliberate strategy of keeping costs as low as possible for customers—would be critical in enabling our vision.
Leveraging the Range of AWS Services
Spectral karyotype analysis / Image courtesy of Human Longevity, Inc.
Today, we’re using a broad range of AWS services for all kinds of compute and storage tasks. For example, the HLI Knowledgebase leverages a distributed system infrastructure composed of Amazon S3 storage and a large number of Amazon EC2 nodes. This helps us achieve resource isolation, scalability, speed of provisioning, and near-real-time response times for our petabyte-scale database queries and dynamic cohort builder. The flexibility of AWS services makes it possible for our customized Amazon Machine Images and pre-built, BTRFS-partitioned Amazon EBS volumes to achieve turn-up time in seconds instead of minutes. We use Amazon EMR for executing Spark queries against our data lake at the scale we need. AWS Lambda is a fantastic tool for hooking into Amazon S3 events and communicating with apps, allowing us to simply drop in code with the business logic already taken care of. We use Auto Scaling based on demand, and AWS OpsWorks for managing a Docker pipeline.
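As a loose sketch of that S3-to-Lambda pattern (not HLI's actual code), a handler like the one below receives the S3 event notification, pulls out the bucket and key, and hands them to whatever downstream logic needs to run; the "business logic" step is left as a placeholder.

```python
# Minimal sketch of a Lambda function triggered by an S3 event notification.
# The downstream work is a stand-in for application-specific logic.
import json

def handler(event, context):
    records = event.get("Records", [])
    for record in records:
        bucket = record["s3"]["bucket"]["name"]
        key = record["s3"]["object"]["key"]
        # Placeholder for business logic, e.g. registering the new object
        # with an internal index or kicking off an analysis job.
        print(json.dumps({"bucket": bucket, "key": key, "action": "received"}))
    return {"processed": len(records)}
```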
We also leverage the cost controls provided by Amazon EC2 Spot and Reserved Instance types. When we first started, we used on-demand instances, but the costs started to grow significantly. With Spot and Reserved Instances, we can allocate compute resources based on specific needs and workflows. The flexibility of AWS services enables us to make extensive use of dockerized containers through the resource-management services provided by Apache Mesos. Hundreds of dynamic Amazon EC2 nodes in both our persistent and spot abstraction layers are dynamically adjusted to scale up or down based on usage demand and the latest AWS pricing information. We achieve substantial savings by sharing this dynamically scaled compute cluster with our Knowledgebase service and the internal genomic and oncology computation pipelines. This flexibility gives us the compute power we need while keeping costs down. We estimate these choices have helped us reduce our compute costs by up to 50 percent from the on-demand model.
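One small piece of that kind of cost-aware scaling might look like the following: a boto3 call that pulls recent Spot price history for an instance type so a scheduler can decide whether to grow the Spot layer. The region, instance type, and price threshold are illustrative assumptions, not HLI's actual values.

```python
# Sketch: check recent Spot prices and decide whether to expand the Spot layer.
from datetime import datetime, timedelta
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")   # region is an assumption

response = ec2.describe_spot_price_history(
    InstanceTypes=["r4.2xlarge"],                     # illustrative instance type
    ProductDescriptions=["Linux/UNIX"],
    StartTime=datetime.utcnow() - timedelta(hours=1),
)

prices = [float(p["SpotPrice"]) for p in response["SpotPriceHistory"]]
if prices and min(prices) < 0.20:                     # illustrative price threshold
    print("Spot capacity is cheap enough; scale the Spot layer up.")
else:
    print("Hold current capacity.")
```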
We’ve also worked with AWS Professional Services to address a particularly hard data-storage challenge. We have genomics data in hundreds of Amazon S3 buckets, many of them in the petabyte range and containing billions of objects. Within these collections are millions of objects that are unused, or that were used once or twice and will never be used again. It can be overwhelming to sift through billions of objects in search of one in particular, and it is an additional challenge to identify which files or file types are candidates for the Amazon S3 Standard-Infrequent Access storage class. Professional Services helped us with a solution for indexing Amazon S3 objects that saves us time and money.
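The actual indexing solution isn't described here, but a simplified version of the idea, sweeping a bucket and flagging objects that haven't been touched in months as Standard-IA candidates, might look like this; the bucket name and age threshold are assumptions.

```python
# Sketch: list objects in a bucket and flag old ones as candidates for the
# Infrequent Access storage class. Bucket name and cutoff are placeholders.
from datetime import datetime, timedelta, timezone
import boto3

s3 = boto3.client("s3")
cutoff = datetime.now(timezone.utc) - timedelta(days=180)    # assumed age threshold

paginator = s3.get_paginator("list_objects_v2")
candidates = []
for page in paginator.paginate(Bucket="hli-genomics-example"):   # hypothetical bucket
    for obj in page.get("Contents", []):
        if obj["LastModified"] < cutoff and obj["StorageClass"] == "STANDARD":
            candidates.append((obj["Key"], obj["Size"]))

print(f"{len(candidates)} objects look like Standard-IA candidates.")
```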
Moving Faster at Lower Cost Our decision to use AWS came at the right time, occurring at the inflection point of two significant technologies: gene sequencing and cloud computing. Not long ago, it took a full year and cost about $100 million to sequence a single genome. Today we can sequence a genome in about three days for a few thousand dollars. This dramatic improvement in speed and lower cost, along with rapidly advancing visualization and analytics tools, allows us to collect and analyze vast amounts of data in close to real time. Users can take that data and test a hypothesis on a disease in a matter of days or hours, compared to months or years. That ultimately benefits patients.
Our business includes HLI Health Nucleus, a genomics-powered clinical research program that uses whole-genome sequence analysis, advanced clinical imaging, machine learning, and curated personal health information to deliver the most complete picture of individual health. We believe this will dramatically enhance the practice of medicine as physicians identify, treat, and prevent diseases, allowing their patients to live longer, healthier lives.
Learn More Learn more about how AWS supports genomics in the cloud, and see how genomics innovator Illumina uses AWS for accelerated, cost-effective gene sequencing.
AWS CodeCommit is a fully managed source code control service. You can use it to host secure and highly scalable private Git repositories while continuing to use your existing Git tools and workflows (watch the Introduction to AWS CodeCommit video to learn more).
AWS CodePipeline is a continuous delivery service that you can use to streamline and automate your release process. Check-ins to your repo (CodeCommit or Git) initiate build, test, and deployment actions (watch Introducing AWS CodePipeline for an introduction). The build can be deployed to your EC2 instances or on-premises servers via CodeDeploy, AWS Elastic Beanstalk, or AWS OpsWorks.
You can combine these services with your existing build and testing tools to create an end-to-end software release pipeline, all orchestrated by CodePipeline.
We have made a lot of enhancements to the Code* products this year and today seems like a good time to recap all of them for you! Many of these enhancements allow you to connect the developer tools to other parts of AWS so that you can continue to fine-tune your development process.
CodeDeploy and CodePipeline Enhancements Here’s what’s new:
CloudWatch Alarms and Automatic Deployment Rollback – CloudWatch Alarms give you another way to monitor your deployments. You can monitor metrics for the instances or Auto Scaling groups managed by CodeDeploy and, if they cross a threshold for a defined period of time, take action: stop the deployment, or change the state of an instance by rebooting, terminating, or recovering it. You can also automatically roll back a deployment in response to a deployment failure or a CloudWatch Alarm.
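A rough sketch of wiring this up with boto3 might look like the following; the application, deployment group, and alarm names are placeholders, and the alarm itself is assumed to already exist in CloudWatch.

```python
# Sketch: attach a CloudWatch Alarm to a CodeDeploy deployment group and
# enable automatic rollback. All names are placeholders.
import boto3

codedeploy = boto3.client("codedeploy")

codedeploy.update_deployment_group(
    applicationName="MyApp",                          # assumed application
    currentDeploymentGroupName="Production",          # assumed deployment group
    alarmConfiguration={
        "enabled": True,
        "alarms": [{"name": "MyApp-HighErrorRate"}],  # assumed CloudWatch Alarm
    },
    autoRollbackConfiguration={
        "enabled": True,
        "events": ["DEPLOYMENT_FAILURE", "DEPLOYMENT_STOP_ON_ALARM"],
    },
)
```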
You can also configure CodePipeline to use OpsWorks to deploy your code using recipes contained in custom Chef cookbooks.
Triggering of Lambda Functions – You can now trigger a Lambda function as one of the actions in a stage of your software release pipeline. Because Lambda allows you to write functions to perform almost any task, you can customize the way your pipeline works.
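For example, a Lambda function invoked from a pipeline stage is handed a CodePipeline job ID and must report success or failure back to the pipeline; a minimal handler along those lines, with the custom check left as a placeholder, might look like this.

```python
# Sketch: Lambda action invoked by CodePipeline. The function must report the
# job result back to CodePipeline. The custom check is a placeholder.
import boto3

codepipeline = boto3.client("codepipeline")

def handler(event, context):
    job_id = event["CodePipeline.job"]["id"]
    try:
        # Placeholder for a custom task, e.g. a smoke test against the new build.
        smoke_test_passed = True
        if smoke_test_passed:
            codepipeline.put_job_success_result(jobId=job_id)
        else:
            codepipeline.put_job_failure_result(
                jobId=job_id,
                failureDetails={"type": "JobFailed", "message": "Smoke test failed"},
            )
    except Exception as exc:
        codepipeline.put_job_failure_result(
            jobId=job_id,
            failureDetails={"type": "JobFailed", "message": str(exc)},
        )
```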
Manual Approval Actions – You can now add manual approval actions to your software release pipeline. Execution pauses until the code change is approved or rejected by someone with the required IAM permission.
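Approvals are usually granted in the console, but they can also be recorded through the API; the sketch below approves a pending action, assuming placeholder pipeline, stage, and action names and a token retrieved from the pipeline state.

```python
# Sketch: approve a pending manual approval action. Pipeline, stage, and
# action names are placeholders; the token comes from get_pipeline_state.
import boto3

codepipeline = boto3.client("codepipeline")

state = codepipeline.get_pipeline_state(name="MyPipeline")       # assumed pipeline
# Walk the pipeline state to find the pending approval token (simplified).
token = None
for stage in state["stageStates"]:
    if stage["stageName"] == "Approval":                         # assumed stage name
        for action in stage["actionStates"]:
            token = action.get("latestExecution", {}).get("token")

if token:
    codepipeline.put_approval_result(
        pipelineName="MyPipeline",
        stageName="Approval",
        actionName="ManualApproval",                             # assumed action name
        result={"summary": "Looks good", "status": "Approved"},
        token=token,
    )
```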
Thanks for Reading! I hope that you have enjoyed this quick look at some of the most recent additions to our development tools.
In order to help you get some hands-on experience with continuous delivery, my colleagues have created a new Pipeline Starter Kit. The kit includes an AWS CloudFormation template that will create a VPC with two EC2 instances inside, a pair of applications (one for each EC2 instance, both deployed via CodeDeploy), and a pipeline that builds and then deploys the sample application, along with all of the necessary IAM service and instance roles.
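If you would rather launch the starter kit's template from a script than from the console, a hedged sketch with boto3 might look like the following; the stack name and template URL are placeholders, and CAPABILITY_IAM is needed because the template creates IAM roles.

```python
# Sketch: launch the starter kit's CloudFormation template from code.
# The stack name and template URL are placeholders.
import boto3

cloudformation = boto3.client("cloudformation")

cloudformation.create_stack(
    StackName="pipeline-starter-kit",   # assumed stack name
    TemplateURL="https://example-bucket.s3.amazonaws.com/starter-kit.yaml",  # placeholder
    Capabilities=["CAPABILITY_IAM"],    # the template creates IAM service/instance roles
)

waiter = cloudformation.get_waiter("stack_create_complete")
waiter.wait(StackName="pipeline-starter-kit")
print("Starter kit stack created.")
```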