All posts by Daniel Huesch

Deploy an App to an AWS OpsWorks Layer Using AWS CodePipeline

Post Syndicated from Daniel Huesch original https://aws.amazon.com/blogs/devops/deploy-an-app-to-an-aws-opsworks-layer-using-aws-codepipeline/

AWS CodePipeline lets you create continuous delivery pipelines that automatically track code changes from sources such as AWS CodeCommit, Amazon S3, or GitHub. Now, you can use AWS CodePipeline as a code change-management solution for apps, Chef cookbooks, and recipes that you want to deploy with AWS OpsWorks.

This blog post demonstrates how you can create an automated pipeline for a simple Node.js app by using AWS CodePipeline and AWS OpsWorks. After you configure your pipeline, every time you update your Node.js app, AWS CodePipeline passes the updated version to AWS OpsWorks. AWS OpsWorks then deploys the updated app to your fleet of instances, leaving you to focus on improving your application. AWS makes sure that the latest version of your app is deployed.

Step 1: Upload app code to an Amazon S3 bucket

The Amazon S3 bucket must be in the same region in which you later create your pipeline in AWS CodePipeline. For now, AWS CodePipeline supports the AWS OpsWorks provider in the us-east-1 region only; all resources in this blog post should be created in the US East (N. Virginia) region. The bucket must also be versioned, because AWS CodePipeline requires a versioned source. For more information, see Using Versioning.

Upload your app to an Amazon S3 bucket

  1. Download a ZIP file of the AWS OpsWorks sample Node.js app, and save it to a convenient location on your local computer: https://s3.amazonaws.com/opsworks-codepipeline-demo/opsworks-nodejs-demo-app.zip.
  2. Open the Amazon S3 console at https://console.aws.amazon.com/s3/. Choose Create Bucket. Be sure to enable versioning.
  3. Choose the bucket that you created and upload the ZIP file that you saved in step 1.
  4. In the Properties pane for the uploaded ZIP file, make a note of the S3 link to the file. You will need the bucket name and the ZIP file name portion of this link to create your pipeline.
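The upload steps above can also be sketched as the equivalent S3 API requests. This is a sketch only: the bucket name and local path are placeholders, and with boto3 installed you would pass these dicts to the matching client methods.

```python
def bucket_requests(bucket, key, local_path):
    """Return the three S3 requests needed: create bucket, enable versioning, upload."""
    return {
        "create": {"Bucket": bucket},  # us-east-1 needs no LocationConstraint
        "versioning": {
            "Bucket": bucket,
            "VersioningConfiguration": {"Status": "Enabled"},  # required by CodePipeline
        },
        "upload": {"Filename": local_path, "Bucket": bucket, "Key": key},
    }

reqs = bucket_requests("my-demo-bucket", "opsworks-nodejs-demo-app.zip",
                       "./opsworks-nodejs-demo-app.zip")

# With boto3:
#   s3 = boto3.client("s3", region_name="us-east-1")
#   s3.create_bucket(**reqs["create"])
#   s3.put_bucket_versioning(**reqs["versioning"])
#   s3.upload_file(**reqs["upload"])
```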

Step 2: Create an AWS OpsWorks to Amazon EC2 service role

  1. Go to the Identity and Access Management (IAM) service console, and choose Roles.
  2. Choose Create Role, and name it aws-opsworks-ec2-role-with-s3.
  3. In the AWS Service Roles section, choose Amazon EC2, and then choose the policy called AmazonS3ReadOnlyAccess.
  4. The new role should appear in the Roles dashboard.
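The same role can be sketched as IAM API parameters: a trust policy that lets EC2 assume the role, plus an attachment of the managed AmazonS3ReadOnlyAccess policy. With boto3 you would pass these to iam.create_role and iam.attach_role_policy.

```python
import json

ROLE_NAME = "aws-opsworks-ec2-role-with-s3"

# Trust policy letting EC2 instances assume the role.
trust_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Principal": {"Service": "ec2.amazonaws.com"},
        "Action": "sts:AssumeRole",
    }],
}

create_role_params = {
    "RoleName": ROLE_NAME,
    "AssumeRolePolicyDocument": json.dumps(trust_policy),
}
attach_policy_params = {
    "RoleName": ROLE_NAME,
    "PolicyArn": "arn:aws:iam::aws:policy/AmazonS3ReadOnlyAccess",
}

# With boto3:
#   iam = boto3.client("iam")
#   iam.create_role(**create_role_params)
#   iam.attach_role_policy(**attach_policy_params)
```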


Step 3: Create an AWS OpsWorks Chef 12 Linux stack

To use AWS OpsWorks as a provider for a pipeline, you must first have an AWS OpsWorks stack, a layer, and at least one instance in the layer. As a reminder, the Amazon S3 bucket to which you uploaded your app must be in the same region in which you later create your AWS OpsWorks stack and pipeline, US East (N. Virginia).

  1. In the OpsWorks console, choose Add Stack, and then choose a Chef 12 stack.
  2. Set the stack’s name to CodePipeline Demo and make sure the Default operating system is set to Linux.
  3. Enable Use custom Chef cookbooks.
  4. For Repository type, choose HTTP Archive, and then use the following cookbook repository on S3: https://s3.amazonaws.com/opsworks-codepipeline-demo/opsworks-nodejs-demo-cookbook.zip. This repository contains a set of Chef cookbooks that include Chef recipes you’ll use to install the Node.js package and its dependencies on your instance. You will use these Chef recipes to deploy the Node.js app that you prepared in step 1.1.
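The stack created above can be sketched as OpsWorks API parameters. The ARNs are placeholders (OpsWorks creates a service role and default instance profile for you in the console), and the default OS shown is an assumption; with boto3 you would pass this dict to opsworks.create_stack.

```python
create_stack_params = {
    "Name": "CodePipeline Demo",
    "Region": "us-east-1",
    # Placeholder ARNs; the console creates these for you.
    "ServiceRoleArn": "arn:aws:iam::123456789012:role/aws-opsworks-service-role",
    "DefaultInstanceProfileArn": "arn:aws:iam::123456789012:instance-profile/aws-opsworks-ec2-role-with-s3",
    "ConfigurationManager": {"Name": "Chef", "Version": "12"},
    "UseCustomCookbooks": True,
    "CustomCookbooksSource": {
        "Type": "archive",  # matches the console's "HTTP Archive" repository type
        "Url": "https://s3.amazonaws.com/opsworks-codepipeline-demo/opsworks-nodejs-demo-cookbook.zip",
    },
}

# With boto3:
#   opsworks = boto3.client("opsworks", region_name="us-east-1")
#   stack_id = opsworks.create_stack(**create_stack_params)["StackId"]
```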

Step 4: Create and configure an AWS OpsWorks layer

Now that you’ve created an AWS OpsWorks stack called CodePipeline Demo, you can create an OpsWorks layer.

  1. Choose Layers, and then choose Add Layer in the AWS OpsWorks stack view.
  2. Name the layer Node.js App Server. For Short Name, type app1, and then choose Add Layer.
  3. After you create the layer, open the layer’s Recipes tab. In the Deploy lifecycle event, type nodejs_demo. Later, you will link this to a Chef recipe that is part of the Chef cookbook you referenced when you created the stack in step 3.4. This Chef recipe runs every time a new version of your application is deployed.


  4. Now, open the Security tab, choose Edit, and choose AWS-OpsWorks-WebApp from the Security groups drop-down list. You will also need to set the EC2 Instance Profile to use the service role you created in step 2.2 (aws-opsworks-ec2-role-with-s3).

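The layer configuration above, including the custom Deploy recipe, can be sketched as OpsWorks API parameters (the stack ID is a placeholder returned by create_stack; with boto3 you would pass this dict to opsworks.create_layer).

```python
create_layer_params = {
    "StackId": "YOUR-OPSWORKS-STACK-ID",   # placeholder; returned by create_stack
    "Type": "custom",
    "Name": "Node.js App Server",
    "Shortname": "app1",
    # The nodejs_demo recipe runs on every Deploy lifecycle event.
    "CustomRecipes": {"Deploy": ["nodejs_demo"]},
}

# With boto3:
#   layer_id = opsworks.create_layer(**create_layer_params)["LayerId"]
```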

Step 5: Add your App to AWS OpsWorks

Now that your layer is configured, add the Node.js demo app to your AWS OpsWorks stack. When you create the pipeline, you’ll be required to reference this demo Node.js app.

  1. Have the Amazon S3 bucket link from step 1.4 ready. You will need the link to the bucket in which you stored your test app.
  2. In AWS OpsWorks, open the stack you created (CodePipeline Demo), and in the navigation pane, choose Apps.
  3. Choose Add App.
  4. Provide a name for your demo app (for example, Node.js Demo App), and set the Repository type to S3 Archive. Paste your S3 bucket link (s3://bucket-name/file name) from step 1.4.
  5. Now that your app appears in the list on the Apps page, add an instance to your OpsWorks layer.
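The app registration above can be sketched as OpsWorks API parameters. The stack ID and bucket URL are placeholders, and the app type "other" is an assumption (deployment is handled by the custom Chef recipe rather than a built-in layer); with boto3 you would pass this dict to opsworks.create_app.

```python
create_app_params = {
    "StackId": "YOUR-OPSWORKS-STACK-ID",   # placeholder
    "Name": "Node.js Demo App",
    "Type": "other",  # assumption: the custom Deploy recipe handles deployment
    "AppSource": {
        "Type": "s3",
        "Url": "https://s3.amazonaws.com/my-demo-bucket/opsworks-nodejs-demo-app.zip",
    },
}

# With boto3:
#   app_id = opsworks.create_app(**create_app_params)["AppId"]
```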

Step 6: Add an instance to your AWS OpsWorks layer

Before you create a pipeline in AWS CodePipeline, set up at least one instance within the layer you defined in step 4.

  1. Open the stack that you created (CodePipeline Demo), and in the navigation pane, choose Instances.
  2. Choose +Instance, and accept the default settings, including the hostname, size, and subnet. Choose Add Instance.


  3. By default, the instance is in a stopped state. Choose start to start the instance.
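Adding and starting the instance can likewise be sketched as OpsWorks API parameters (stack and layer IDs are placeholders, and the instance type shown is just an example; with boto3 you would pass these to opsworks.create_instance and opsworks.start_instance).

```python
create_instance_params = {
    "StackId": "YOUR-OPSWORKS-STACK-ID",     # placeholder
    "LayerIds": ["YOUR-OPSWORKS-LAYER-ID"],  # placeholder
    "InstanceType": "c3.large",              # example; any supported type works
}

# With boto3:
#   resp = opsworks.create_instance(**create_instance_params)
#   opsworks.start_instance(InstanceId=resp["InstanceId"])  # instances start stopped
```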

Step 7: Create a pipeline in AWS CodePipeline

Now that you have a stack and an app configured in AWS OpsWorks, create a pipeline with AWS OpsWorks as the provider to deploy your app to your specified layer. If you update your app or your Chef deployment recipes, the pipeline runs again automatically, triggering the deployment recipe to run and deploy your updated app.

This procedure creates a simple pipeline that includes only one Source and one Deploy stage. However, you can create more complex pipelines that use AWS OpsWorks as a provider.

To create a pipeline

  1. Open the AWS CodePipeline console in the US East (N. Virginia) region.
  2. Choose Create pipeline.
  3. On the Getting started with AWS CodePipeline page, type MyOpsWorksPipeline, or a pipeline name of your choice, and then choose Next step.
  4. On the Source Location page, choose Amazon S3 from the Source provider drop-down list.
  5. In the Amazon S3 details area, type the Amazon S3 bucket path to your application, in the format s3://bucket-name/file name. Refer to the link you noted in step 1.4. Choose Next step.
  6. On the Build page, choose No Build from the drop-down list, and then choose Next step.
  7. On the Deploy page, choose AWS OpsWorks as the deployment provider.
  8. Specify the names of the stack, layer, and app that you created earlier, then choose Next step.
  9. On the AWS Service Role page, choose Create Role. On the IAM console page that opens, you will see the role that will be created for you (AWS-CodePipeline-Service). From the Policy Name drop-down list, choose Create new policy. Review the policy document, and then choose Allow.
    For more information about the service role and its policy statement, see Attach or Edit a Policy for an IAM Service Role.
  10. On the Review your pipeline page, confirm the choices shown on the page, and then choose Create pipeline.
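The pipeline the wizard builds can be sketched as a CodePipeline API structure. This is a sketch under assumptions: the role ARN, artifact bucket, and source bucket/key are placeholders, and the OpsWorks deploy action's configuration keys (Stack, Layer, App, referring to the resources by name) are as I recall them from the CodePipeline structure reference. With boto3 you would pass the dict to codepipeline.create_pipeline(pipeline=...).

```python
pipeline = {
    "name": "MyOpsWorksPipeline",
    "roleArn": "arn:aws:iam::123456789012:role/AWS-CodePipeline-Service",  # placeholder
    "artifactStore": {"type": "S3", "location": "codepipeline-us-east-1-artifacts"},
    "stages": [
        {
            "name": "Source",
            "actions": [{
                "name": "Source",
                "actionTypeId": {"category": "Source", "owner": "AWS",
                                 "provider": "S3", "version": "1"},
                "outputArtifacts": [{"name": "MyApp"}],
                "configuration": {"S3Bucket": "my-demo-bucket",
                                  "S3ObjectKey": "opsworks-nodejs-demo-app.zip"},
            }],
        },
        {
            "name": "Deploy",
            "actions": [{
                "name": "Deploy",
                "actionTypeId": {"category": "Deploy", "owner": "AWS",
                                 "provider": "OpsWorks", "version": "1"},
                "inputArtifacts": [{"name": "MyApp"}],
                # Assumed configuration keys; stack, layer, and app by name.
                "configuration": {"Stack": "CodePipeline Demo",
                                  "Layer": "Node.js App Server",
                                  "App": "Node.js Demo App"},
            }],
        },
    ],
}

# With boto3:
#   boto3.client("codepipeline", region_name="us-east-1").create_pipeline(pipeline=pipeline)
```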

The pipeline should now start deploying your app to your OpsWorks layer on its own. Wait for deployment to finish; you’ll know it’s finished when Succeeded is displayed in both the Source and Deploy stages.


Step 8: Verify the app deployment

To verify that AWS CodePipeline deployed the Node.js app to your layer, sign in to the instance you created in step 6. You should be able to see and use the Node.js web app.

  1. On the AWS OpsWorks dashboard, choose the stack and the layer to which you just deployed your app.
  2. In the navigation pane, choose Instances, and then choose the public IP address of your instance to view the web app. The running app will be displayed in a new browser tab.
  3. To test the app, on the app’s web page, in the Leave a comment text box, type a comment, and then choose Send. The app adds your comment to the web page. You can add more comments to the page, if you like.


Wrap-up

You now have a working and fully automated pipeline. As soon as you make changes to your application’s code and update the S3 bucket with the new version of your app, AWS CodePipeline automatically collects the artifact and uses AWS OpsWorks to deploy it to your instance, by running the OpsWorks deployment Chef recipe that you defined on your layer. The deployment recipe starts all of the operations on your instance that are required to support a new version of your artifact.
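Releasing a new version is just another upload: because the bucket is versioned, overwriting the same key creates a new object version, which the pipeline's S3 source detects. A minimal sketch (bucket name and path are placeholders):

```python
update_params = {
    "Filename": "./opsworks-nodejs-demo-app.zip",  # rebuilt app archive
    "Bucket": "my-demo-bucket",
    "Key": "opsworks-nodejs-demo-app.zip",         # same key; versioning keeps history
}

# With boto3: boto3.client("s3").upload_file(**update_params)
# The pipeline then runs automatically and OpsWorks redeploys the app.
```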

To learn more about Chef cookbooks and recipes: https://docs.chef.io/cookbooks.html

To learn more about the AWS OpsWorks and AWS CodePipeline integration: https://docs.aws.amazon.com/opsworks/latest/userguide/other-services-cp.html

AWS OpsWorks at re:Invent 2016

Post Syndicated from Daniel Huesch original https://aws.amazon.com/blogs/devops/aws-opsworks-at-reinvent-2016/

AWS re:Invent 2016 is right around the corner. Here’s an overview of where you can meet the AWS OpsWorks team and learn about the service.

DEV305 – Configuration Management in the Cloud
12/1/16 (Thursday) 11:00 AM – Venetian, Level 3, Murano 3205

To ensure that your application operates in a predictable manner in both your test and production environments, you must vigilantly maintain the configuration of your resources. By leveraging configuration management solutions, Dev and Ops engineers can define the state of their resources across their entire lifecycle. In this session, we will show you how to use AWS OpsWorks, AWS CodeDeploy, and AWS CodePipeline to build a reliable and consistent development pipeline that assures your production workloads behave in a predictable manner.

DEV305-R – [REPEAT] Configuration Management in the Cloud
12/2/16 (Friday) 9:00 AM – Venetian, Level 1, Sands 202

This is a repeat of the previous day’s session, in case you were unable to attend it.

LD148 – Live Demo: Configuration Management with AWS OpsWorks
12/1/16 (Thursday) 4:50 PM – Venetian, Hall C, AWS Booth

Join this session at the AWS Booth for a live demo and the opportunity to meet the AWS OpsWorks service team.

AWS re:Invent is a great opportunity to talk with AWS teams. As in previous years, you will find OpsWorks team members at the AWS booth. Drop by and ask for a demo!

Didn’t register before the conference sold out? All sessions will be recorded and posted on YouTube after the conference and all slide decks will be posted on SlideShare.net.


Chef-Owned Community Cookbooks Now Require 12.1 or Later

Post Syndicated from Daniel Huesch original http://blogs.aws.amazon.com/application-management/post/Tx3TX1EYJ6IPFI/Chef-Owned-Community-Cookbooks-Now-Require-12-1-or-Later

Chef recently announced that all of its community cookbooks now require, at minimum, Chef 12.1. This recent change allows users to benefit from features like multi-package installation, new Ohai plugin data, and custom resources that are available only on Chef 12.

If you are an AWS OpsWorks Chef 11 user, you can:

·       Pin the major version of the community cookbooks to retain cookbook compatibility.
·       Upgrade to Chef 12 and enjoy the latest community enhancements.

If you are an AWS OpsWorks Chef 11 user who uses the built-in layers of OpsWorks, you can continue to operate your stacks without any disruption or changes.

OpsWorks September Enhancements Blog Post

Post Syndicated from Daniel Huesch original http://blogs.aws.amazon.com/application-management/post/Tx38UMHBUUN5DX2/OpsWorks-September-Enhancements-Blog-Post

Over the past few months, the AWS OpsWorks team has introduced several enhancements to existing features and added support for new ones. Let’s discuss some of these new capabilities.

·       Chef client 12.13.37 – Released a new AWS OpsWorks agent version for Chef 12 for Linux, enabling the latest enhancements from Chef. The OpsWorks console now shows the full history of enhancements to its agent software.

·       Node.js 0.12.15 – Provided support for a new version of Node.js in Chef 11, which:

–        Fixes a bug in the read/write locks implementation for the Windows operating system.
–        Fixes a potential buffer overflow vulnerability.

·       Ruby 2.3.1 – The built-in Chef 11 Ruby layer now supports Ruby 2.3.1, which includes these Ruby enhancements:

–        Introduced a frozen string literal pragma.
–        Introduced a safe navigation operator (lonely operator).
–        Numerous performance improvements.

·       Larger EBS volumes – Following the recent announcement from Amazon EBS, you can now use OpsWorks to create provisioned IOPS volumes that store up to 16 TB and process up to 20,000 IOPS, with a maximum throughput of 320 MBps. You can also create general purpose volumes that store up to 16 TB and process up to 10,000 IOPS, with a maximum throughput of 160 MBps.

·       New Linux operating systems – OpsWorks continues to enhance its operating system support and now offers:

–        Amazon Linux 2016.03 (Amazon Linux 2016.09 support will be available soon)
–        Ubuntu 16.04
–        CentOS 7

·       Instance tenancy – You can provision dedicated instances through OpsWorks. Dedicated instances are Amazon EC2 instances that run in a virtual private cloud (VPC) on hardware that’s dedicated to a single customer. Your dedicated instances are physically isolated at the host hardware level from instances that belong to other AWS accounts.

·       Define root volumes – You can define the size of the root volume of your EBS-backed instances directly from the OpsWorks console. Choose from a variety of volume types: General Purpose (SSD), Provisioned IOPS (SSD), and Magnetic.

·       Instance page – The OpsWorks instance page now displays a summary bar that indicates the aggregated state of all the instances in a selected stack. Summary fields include total instance count, online instances, instances that are in the setting-up stage, instances that are in the shutting-down stage, stopped instances, and instances in an error state.

·       Service role regeneration – You can now use the OpsWorks console to recreate your IAM service role if it was deleted.

Recreate IAM service role

Confirmation of IAM service role creation

As always, we welcome your feedback about features you’re using in OpsWorks. Be sure to visit the OpsWorks user forums, and check out the documentation.

 

 


AWS OpsWorks Endpoints Available in 11 Regions

Post Syndicated from Daniel Huesch original https://aws.amazon.com/blogs/devops/aws-opsworks-endpoints-available-in-11-regions/

AWS OpsWorks, a service that helps you configure and operate applications of all shapes and sizes using Chef automation, has just added support for the Asia Pacific (Seoul) Region and launched public endpoints in Frankfurt, Ireland, N. California, Oregon, São Paulo, Singapore, Sydney, and Tokyo.

Previously, customers had to manage OpsWorks stacks for these regions using our N. Virginia endpoint. Using an OpsWorks endpoint in the same region as your stack reduces API latencies, improves instance response times, and limits impact from cross-region dependency failures.

A full list of endpoints can be found in AWS Regions and Endpoints.


Auto Scaling AWS OpsWorks Instances

Post Syndicated from Daniel Huesch original https://aws.amazon.com/blogs/devops/auto-scaling-aws-opsworks-instances/

This post will show you how to integrate Auto Scaling groups with AWS OpsWorks so you can leverage the native scaling capabilities of Amazon EC2 and the OpsWorks Chef configuration management solution.

Auto Scaling ensures you have the correct number of EC2 instances available to handle your application load.  You create collections of EC2 instances (called Auto Scaling groups), specify desired instance ranges for them, and create scaling policies that define when instances are provisioned or removed from the group.

AWS OpsWorks helps configure and manage your applications.  You create groups of EC2 instances (called stacks and layers) and associate configuration with them, such as volumes to mount or Chef recipes to execute in response to lifecycle events (for example, startup/shutdown).  The service streamlines the instance provisioning and management process, making it easy to launch uniform fleets using Chef and EC2.

The following steps will show how you can use an Auto Scaling group to manage EC2 instances in an OpsWorks stack.

Integrating Auto Scaling with OpsWorks

This example will require you to create the following resources:

Auto Scaling group: This group is responsible for EC2 instance provisioning and release.

Launch configuration: A configuration template used by the Auto Scaling group to launch instances.

OpsWorks stack: Instances provisioned by the Auto Scaling group will be registered with this stack.

IAM instance profile: This profile grants permission to your instances to register with OpsWorks.

Lambda function: This function handles deregistration of instances from your OpsWorks stack.

SNS topic: This topic triggers your deregistration Lambda function after Auto Scaling terminates an instance.

Step 1: Create an IAM instance profile

When an EC2 instance starts, it must make an API call to register itself with OpsWorks.  By assigning an IAM instance profile to the instance, you can grant it permission to make OpsWorks calls.

Open the IAM console, choose Roles, and then choose Create New Role. Type a name for the role, and then choose Next Step.  Choose the Amazon EC2 Role, and then select the check box next to the AWSOpsWorksInstanceRegistration policy.  Finally, choose Next Step, and then choose Create Role. As the name suggests, the AWSOpsWorksInstanceRegistration policy only allows the API calls required to register an instance. Because you will have to make two more calls for this demo,  add the following inline policy to the new role.

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "opsworks:AssignInstance",
                "opsworks:DescribeInstances"
            ],
            "Resource": [
                "*"
            ]
        }
    ]
}

Step 2: Create an OpsWorks stack

Open the AWS OpsWorks console.  Choose the Add Stack button from the dashboard, and then  choose Sample Stack. Make sure the Linux OS option is selected, and  then choose Create Stack.  After the stack has been created, choose Explore the sample stack. Choose the layer named Node.js App Server.  You will need the IDs of this sample stack and layer in a later step.  You can extract both from the URL of the layer page, which uses this format:

https://console.aws.amazon.com/opsworks/home?region=us-west-2#/stack/ YOUR-OPSWORKS-STACK-ID/layers/ YOUR-OPSWORKS-LAYER-ID.

Step 3: Create a Lambda function

This function is responsible for deregistering an instance from your OpsWorks stack.  It will be invoked whenever an EC2 instance in the Auto Scaling group is terminated.

Open the AWS Lambda console and choose the option to create a Lambda function.  If you are prompted to choose a blueprint, choose Skip.  You can give the function any name you like, but be sure to choose the Python 2.7 option from the Runtime drop-down list.

Next, paste the following code into the Lambda Function Code text entry box:

import json
import boto3

def lambda_handler(event, context):
    message = json.loads(event['Records'][0]['Sns']['Message'])
    
    if (message['Event'] == 'autoscaling:EC2_INSTANCE_TERMINATE'):
        ec2_instance_id = message['EC2InstanceId']
        ec2 = boto3.client('ec2')
        for tag in ec2.describe_instances(InstanceIds=[ec2_instance_id])['Reservations'][0]['Instances'][0]['Tags']:
            if (tag['Key'] == 'opsworks_stack_id'):
                opsworks_stack_id = tag['Value']
                opsworks = boto3.client('opsworks', 'us-east-1')
                for instance in opsworks.describe_instances(StackId=opsworks_stack_id)['Instances']:
                    if ('Ec2InstanceId' in instance):
                        if (instance['Ec2InstanceId'] == ec2_instance_id):
                            print("Deregistering OpsWorks instance " + instance['InstanceId'])
                            opsworks.deregister_instance(InstanceId=instance['InstanceId'])
    return message

Then, from the Role drop-down list, choose Basic Execution Role.  On the page that appears, expand  View Policy Document, and then choose Edit

Next, paste the following JSON into the policy text box:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "ec2:DescribeInstances",
        "opsworks:DescribeInstances",
        "opsworks:DeregisterInstance"
      ],
      "Resource": [
        "*"
      ]
    },
    {
      "Effect": "Allow",
      "Action": [
        "logs:CreateLogGroup",
        "logs:CreateLogStream",
        "logs:PutLogEvents"
      ],
       "Resource": "arn:aws:logs:*:*:*"
    }
  ]
}

Choose Allow.  On the Lambda creation page, change the Timeout field to 0 minutes and 15 seconds, and choose Next.  Finally, choose Create Function.

Step 4: Create an SNS topic

The SNS topic you create in this step will be responsible for triggering an execution of the Lambda function you created in step 3.  It is the glue that ties Auto Scaling instance terminations to corresponding OpsWorks instance deregistrations.

Open the Amazon SNS console.  Choose Topics, and then choose Create New Topic.  Type topic and display names, and then choose Create Topic.  Select the check box next to the topic you just created, and from Actions, choose Subscribe to Topic.  From the Protocol drop-down list, choose AWS Lambda.  From the Endpoint drop-down list, choose the Lambda function you created in step 3.  Finally, choose Create Subscription.

Step 5: Create a launch configuration

This configuration contains two important settings: security group and user data.  Because you’re deploying a Node.js app that’s will listen on port 80, you must use a security group that has this port open. Then there’s the user data script that’s executed when an instance starts. Here we make the call to register the instance with OpsWorks.

 

 

Open the Amazon EC2 console and create a launch configuration. Use the latest release of Amazon Linux, which should be the first operating system in the list. On the details page, under IAM role, choose the instance profile you created in step 2. Expand the Advanced Details area and paste the following code in the User data field. Because this is a template, you will have to replace YOUR-OPSWORKS-STACK-ID and YOUR-OPSWORKS-LAYER-ID with the OpsWorks stack and layer ID you copied in step 1.

#!/bin/bash
sed -i'' -e 's/.*requiretty.*//' /etc/sudoers
pip install --upgrade awscli
INSTANCE_ID=$(/usr/bin/aws opsworks register --use-instance-profile --infrastructure-class ec2 --region us-east-1 --stack-id YOUR-OPSWORKS-STACK-ID --override-hostname $(tr -cd 'a-z' < /dev/urandom |head -c8) --local 2>&1 |grep -o 'Instance ID: .*' |cut -d' ' -f3)
/usr/bin/aws opsworks wait instance-registered --region us-east-1 --instance-id $INSTANCE_ID
/usr/bin/aws opsworks assign-instance --region us-east-1 --instance-id $INSTANCE_ID --layer-ids YOUR-OPSWORKS-LAYER-ID

Step 6. Create an Auto Scaling group

On the last page of the Launch Configuration wizard, choose Create an Auto Scaling group using this launch configuration. In the notification settings, add a notification to your SNS topic for the terminate event. In the tag settings, add a tag with key opsworks_stack_id. Use the OpsWorks stack ID you entered in the User data field as the value. Make sure the Tag New Instances check box is selected.

Conclusion

Because the default desired size for your Auto Scaling group is 1, a single instance will be started in EC2 immediately.  You can confirm this through the EC2 console in a few seconds:

A few minutes later, the instance will appear in the OpsWorks console:

To confirm your Auto Scaling group instances will be deregistered from OpsWorks on termination, change the Desired value from 1 to 0.  The instance will disappear from the EC2 console. Within minutes, it will disappear from the OpsWorks console, too.

Congratulations! You’ve configured an Auto Scaling group to seamlessly integrate with AWS OpsWorks. Please let us know if this helps you scale instances in OpsWorks or if you have tips of your own.

Auto Scaling AWS OpsWorks Instances

Post Syndicated from Daniel Huesch original http://blogs.aws.amazon.com/application-management/post/Tx1WTGT1S2PXOOV/Auto-Scaling-AWS-OpsWorks-Instances

This post will show you how to integrate Auto Scaling groups with AWS OpsWorks so you can leverage  the native scaling capabilities of Amazon EC2 and the OpsWorks Chef configuration management solution.

Auto Scaling ensures you have the correct number of EC2 instances available to handle your application load.  You create collections of EC2 instances (called Auto Scaling groups), specify desired instance ranges for them, and create scaling policies that define when instances are provisioned or removed from the group.

AWS OpsWorks helps configure and manage your applications. You create groups of EC2 instances (called stacks and layers) and associate configuration with them, such as volumes to mount or Chef recipes to execute in response to lifecycle events (for example, startup and shutdown). The service streamlines instance provisioning and management, making it easy to launch uniform fleets using Chef and EC2.

The following steps will show how you can use an Auto Scaling group to manage EC2 instances in an OpsWorks stack.

Integrating Auto Scaling with OpsWorks

This example will require you to create the following resources:

Auto Scaling group: This group is responsible for EC2 instance provisioning and release.

Launch configuration: A configuration template used by the Auto Scaling group to launch instances.

OpsWorks stack: Instances provisioned by the Auto Scaling group will be registered with this stack.

IAM instance profile: This profile grants permission to your instances to register with OpsWorks.

Lambda function: This function handles deregistration of instances from your OpsWorks stack.

SNS topic: This topic triggers your deregistration Lambda function after Auto Scaling terminates an instance.

Step 1: Create an IAM instance profile

When an EC2 instance starts, it must make an API call to register itself with OpsWorks.  By assigning an IAM instance profile to the instance, you can grant it permission to make OpsWorks calls.

Open the IAM console, choose Roles, and then choose Create New Role. Type a name for the role, and then choose Next Step. Choose the Amazon EC2 role type, and then select the check box next to the AWSOpsWorksInstanceRegistration policy. Finally, choose Next Step, and then choose Create Role. As its name suggests, the AWSOpsWorksInstanceRegistration policy allows only the API calls required to register an instance. Because you will have to make two more calls for this demo, add the following inline policy to the new role.

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "opsworks:AssignInstance",
                "opsworks:DescribeInstances"
            ],
            "Resource": [
                "*"
            ]
        }
    ]
}
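If you script your setup instead of clicking through the console, the same inline policy can be attached with boto3. The following is a sketch: the role and policy names are assumptions, and the `put_role_policy` call only runs when you invoke the function against a real account.

```python
import json

# The inline policy from above, expressed as a Python dict.
INLINE_POLICY = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "opsworks:AssignInstance",
                "opsworks:DescribeInstances",
            ],
            "Resource": ["*"],
        }
    ],
}


def attach_inline_policy(iam_client, role_name):
    """Attach the inline policy to an existing role.

    Pass a boto3 IAM client, e.g. boto3.client("iam"). The policy name
    below is an arbitrary label, not something OpsWorks requires.
    """
    iam_client.put_role_policy(
        RoleName=role_name,
        PolicyName="opsworks-assign-describe",
        PolicyDocument=json.dumps(INLINE_POLICY),
    )
```

Calling `attach_inline_policy(boto3.client("iam"), "my-role")` performs the same step as the console edit above.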

Step 2: Create an OpsWorks stack

Open the AWS OpsWorks console. Choose the Add Stack button from the dashboard, and then choose Sample Stack. Make sure the Linux OS option is selected, and then choose Create Stack. After the stack has been created, choose Explore the sample stack. Choose the layer named Node.js App Server. You will need the IDs of this sample stack and layer in a later step. You can extract both from the URL of the layer page, which uses this format:

https://console.aws.amazon.com/opsworks/home?region=us-west-2#/stack/YOUR-OPSWORKS-STACK-ID/layers/YOUR-OPSWORKS-LAYER-ID
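If you'd rather capture the IDs programmatically, a short sketch that pulls both out of the layer URL (shown here with the placeholder IDs):

```python
import re

# The layer URL, with the placeholder IDs from the template above.
url = ("https://console.aws.amazon.com/opsworks/home?region=us-west-2"
       "#/stack/YOUR-OPSWORKS-STACK-ID/layers/YOUR-OPSWORKS-LAYER-ID")

# The stack and layer IDs are the two path segments after #/stack/ and /layers/.
stack_id, layer_id = re.search(r"#/stack/([^/]+)/layers/([^/]+)", url).groups()
print(stack_id)
print(layer_id)
```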

Step 3: Create a Lambda function

This function is responsible for deregistering an instance from your OpsWorks stack.  It will be invoked whenever an EC2 instance in the Auto Scaling group is terminated.

Open the AWS Lambda console and choose the option to create a Lambda function.  If you are prompted to choose a blueprint, choose Skip.  You can give the function any name you like, but be sure to choose the Python 2.7 option from the Runtime drop-down list.

Next, paste the following code into the Lambda Function Code text entry box:

import json
import boto3

def lambda_handler(event, context):
    # The Auto Scaling notification arrives wrapped in an SNS envelope;
    # the actual message is a JSON string.
    message = json.loads(event['Records'][0]['Sns']['Message'])

    if message['Event'] == 'autoscaling:EC2_INSTANCE_TERMINATE':
        ec2_instance_id = message['EC2InstanceId']
        ec2 = boto3.client('ec2')
        # Look up the opsworks_stack_id tag that the Auto Scaling group
        # applied to the instance.
        for tag in ec2.describe_instances(InstanceIds=[ec2_instance_id])['Reservations'][0]['Instances'][0]['Tags']:
            if tag['Key'] == 'opsworks_stack_id':
                opsworks_stack_id = tag['Value']
                opsworks = boto3.client('opsworks', 'us-east-1')
                # Find the matching OpsWorks instance in the stack and
                # deregister it.
                for instance in opsworks.describe_instances(StackId=opsworks_stack_id)['Instances']:
                    if instance.get('Ec2InstanceId') == ec2_instance_id:
                        print("Deregistering OpsWorks instance " + instance['InstanceId'])
                        opsworks.deregister_instance(InstanceId=instance['InstanceId'])
    return message
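Before wiring the function to SNS, you can sanity-check the envelope handling locally. This sketch (the instance ID is made up) mirrors the unwrapping the handler performs before it calls AWS:

```python
import json

# A made-up termination notice, wrapped the way SNS hands events to Lambda.
autoscaling_message = {
    "Event": "autoscaling:EC2_INSTANCE_TERMINATE",
    "EC2InstanceId": "i-0123456789abcdef0",
}
sns_event = {"Records": [{"Sns": {"Message": json.dumps(autoscaling_message)}}]}

# The same unwrapping the handler performs on its first line.
message = json.loads(sns_event["Records"][0]["Sns"]["Message"])
assert message["Event"] == "autoscaling:EC2_INSTANCE_TERMINATE"
print(message["EC2InstanceId"])  # → i-0123456789abcdef0
```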

Then, from the Role drop-down list, choose Basic Execution Role. On the page that appears, expand View Policy Document, and then choose Edit.

Next, paste the following JSON into the policy text box:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "ec2:DescribeInstances",
        "opsworks:DescribeInstances",
        "opsworks:DeregisterInstance"
      ],
      "Resource": [
        "*"
      ]
    },
    {
      "Effect": "Allow",
      "Action": [
        "logs:CreateLogGroup",
        "logs:CreateLogStream",
        "logs:PutLogEvents"
      ],
       "Resource": "arn:aws:logs:*:*:*"
    }
  ]
}

Choose Allow.  On the Lambda creation page, change the Timeout field to 0 minutes and 15 seconds, and choose Next.  Finally, choose Create Function.

Step 4: Create an SNS topic

The SNS topic you create in this step will be responsible for triggering an execution of the Lambda function you created in step 3.  It is the glue that ties Auto Scaling instance terminations to corresponding OpsWorks instance deregistrations.

Open the Amazon SNS console.  Choose Topics, and then choose Create New Topic.  Type topic and display names, and then choose Create Topic.  Select the check box next to the topic you just created, and from Actions, choose Subscribe to Topic.  From the Protocol drop-down list, choose AWS Lambda.  From the Endpoint drop-down list, choose the Lambda function you created in step 3.  Finally, choose Create Subscription.

Step 5: Create a launch configuration

This configuration contains two important settings: the security group and the user data. Because you’re deploying a Node.js app that will listen on port 80, you must use a security group that has this port open. The user data script is executed when an instance starts; this is where the instance registers itself with OpsWorks.

Open the Amazon EC2 console and create a launch configuration. Use the latest release of Amazon Linux, which should be the first operating system in the list. On the details page, under IAM role, choose the instance profile you created in step 1. Expand the Advanced Details area and paste the following code in the User data field. Because this is a template, you will have to replace YOUR-OPSWORKS-STACK-ID and YOUR-OPSWORKS-LAYER-ID with the OpsWorks stack and layer IDs you copied in step 2.

#!/bin/bash
# Allow scripts to use sudo without a TTY (the registration commands need this).
sed -i'' -e 's/.*requiretty.*//' /etc/sudoers
pip install --upgrade awscli
# Register this instance with the OpsWorks stack under a random hostname and
# capture the new OpsWorks instance ID from the command output.
INSTANCE_ID=$(/usr/bin/aws opsworks register --use-instance-profile --infrastructure-class ec2 --region us-east-1 --stack-id YOUR-OPSWORKS-STACK-ID --override-hostname $(tr -cd 'a-z' < /dev/urandom | head -c8) --local 2>&1 | grep -o 'Instance ID: .*' | cut -d' ' -f3)
# Wait until registration completes, then assign the instance to the layer.
/usr/bin/aws opsworks wait instance-registered --region us-east-1 --instance-id $INSTANCE_ID
/usr/bin/aws opsworks assign-instance --region us-east-1 --instance-id $INSTANCE_ID --layer-ids YOUR-OPSWORKS-LAYER-ID
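The grep and cut at the end of the register call simply extract the new OpsWorks instance ID from the command’s log output. The same extraction, sketched in Python with a made-up log line:

```python
import re

# A made-up log line of the form produced by `aws opsworks register`;
# the real output interleaves other progress messages around it.
register_output = "Instance ID: 12345678-1234-1234-1234-123456789012"

# Equivalent of: grep -o 'Instance ID: .*' | cut -d' ' -f3
match = re.search(r"Instance ID: (\S+)", register_output)
assert match is not None
print(match.group(1))
```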

Step 6: Create an Auto Scaling group

On the last page of the Launch Configuration wizard, choose Create an Auto Scaling group using this launch configuration. In the notification settings, add a notification to your SNS topic for the terminate event. In the tag settings, add a tag with key opsworks_stack_id. Use the OpsWorks stack ID you entered in the User data field as the value. Make sure the Tag New Instances check box is selected.

Conclusion

Because the default desired size for your Auto Scaling group is 1, a single instance will be started in EC2 immediately. You can confirm this through the EC2 console in a few seconds.

A few minutes later, the instance will appear in the OpsWorks console.

To confirm your Auto Scaling group instances will be deregistered from OpsWorks on termination, change the Desired value from 1 to 0.  The instance will disappear from the EC2 console. Within minutes, it will disappear from the OpsWorks console, too.

Congratulations! You’ve configured an Auto Scaling group to seamlessly integrate with AWS OpsWorks. Please let us know if this helps you scale instances in OpsWorks or if you have tips of your own.

Color-Code Your AWS OpsWorks Stacks for Better Instance and Resource Tracking

Post Syndicated from Daniel Huesch original http://blogs.aws.amazon.com/application-management/post/TxIZEC3N4LB1OZ/Color-Code-Your-AWS-OpsWorks-Stacks-for-Better-Instance-and-Resource-Tracking

AWS OpsWorks provides options for organizing your Amazon EC2 instances and other AWS resources. There are stacks to group related resources and isolate them from each other; layers to group instances with similar roles; and apps to organize software deployments. Each has a name to help you keep track of them.

Because it can be difficult to see if the instance you’re working on belongs to the right stack (for example, an integration or production stack) just by looking at the host name, OpsWorks provides a simple, user-defined attribute that you can use to color-code your stacks. For example, some customers use red for their production stacks. Others apply different colors to correspond to the regions in which the stacks are operating.

A stack color is simply a visual indicator to assist you while you’re working in the console. In those cases when you need to sign in to an instance (for auditing, for example, or to check log files or restart a process), it can be difficult to immediately detect when you have signed in to an instance on the wrong stack.

When you add a small, custom recipe to the setup lifecycle event, however, you can reuse the stack color for the shell prompt. Most modern terminal emulators support a 256-color mode. Changing the color of the prompt is simple.

The following code can be used to change the color of the shell prompt:

colors/recipes/default.rb

stack = search("aws_opsworks_stack").first
match = stack["color"].match(/rgb\((\d+), (\d+), (\d+)\)/)
r, g, b = match[1..3].map { |i| (5 * i.to_f / 255).round }

template "/etc/profile.d/opsworks-color-prompt.sh" do
  source "opsworks-color-prompt.sh.erb"
  variables(:color => 16 + b + g * 6 + 36 * r)
end

colors/templates/default/opsworks-color-prompt.sh.erb

if [ -n "$PS1" ]; then
  PS1="\033[38;5;<%= @color %>m[\u@\h \W]\$\033[0m "
fi
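The `16 + b + g * 6 + 36 * r` expression in the recipe maps the stack’s RGB color onto the 6×6×6 color cube of the xterm-256 palette (each channel is scaled from 0–255 down to 0–5). A quick way to check the mapping, with an arbitrary sample color:

```python
def xterm256_index(r, g, b):
    """Map 0-255 RGB channels onto the 216-color cube (indices 16-231)."""
    scale = lambda v: round(5 * v / 255)
    return 16 + scale(b) + 6 * scale(g) + 36 * scale(r)

# A pure red stack color lands on index 196, the brightest red in the cube.
print(xterm256_index(255, 0, 0))  # → 196
```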

You can use this with Chef 12, this custom cookbook, the latest Amazon Linux AMI, and Bash. You may have to adapt the cookbook for other operating systems and shells.

The stack color is not the only information you can include in the prompt. You can also add the stack and layer names of your instances to the prompt.

We invite you to try color-coding your stacks. If you have questions or other feedback, let us know in the comments.


Quickly Explore the Chef Environment in AWS OpsWorks

Post Syndicated from Daniel Huesch original http://blogs.aws.amazon.com/application-management/post/Tx2G9WMG709HG3L/Quickly-Explore-the-Chef-Environment-in-AWS-OpsWorks

AWS OpsWorks recently launched support for Chef 12 Linux. This release changes the way that information about the stacks, layers, and instances provided by OpsWorks is made available during a Chef run. In this post, I show how to interactively explore this information using the OpsWorks agent command line interface (CLI) and Pry, a shell for Ruby. Our documentation shows you what’s available; this post shows you how to explore that data interactively.

OpsWorks manages EC2 or on-premises instances by triggering Chef runs. Before running your Chef recipes, OpsWorks prepares an environment. This environment includes a number of data bags that provide information about your stack, instances, and other resources in your stack. You can use data bags to write cookbooks that adapt to changes in your infrastructure.

When an instance has finished its setup or when it leaves the online state, OpsWorks triggers a Configure event. You can register your own custom recipes to run during Configure events and use a custom recipe as a lightweight service-discovery mechanism. For example, you could use custom recipes to grant database access to an app server after it’s started, revoke access after it’s stopped, or discover the IP address of the database server within your stack.

Typically, you access data about stacks, layers, and instances through Chef search. For earlier supported versions of Chef on Linux, this data was made available as attributes. In Chef 12 Linux, the data is available in data bags.

To access this data, I’m going to use only tools that are already present on OpsWorks instances: the OpsWorks agent CLI and Pry. Here’s the elevator pitch for Pry, taken from the Pry website:

Pry is a powerful alternative to the standard IRB shell for Ruby. It features syntax highlighting, a flexible plugin architecture, runtime invocation and source and documentation browsing.

Because Pry is already present on OpsWorks instances, there’s no need to install it.

I execute all terminal commands shown in the rest of this post as the root user.

How Do You Use Pry with OpsWorks?

First, let’s take a look at the OpsWorks agent CLI. The agent CLI lets you explore and repeat Chef runs on an instance.

To see a list of completed runs, use opsworks-agent-cli list:

[root@nodejs-server1 ~]# opsworks-agent-cli list
2015-12-16T13:37:2 setup
2015-12-16T13:40:56 configure

For an instance that has just finished booting, you should see a successful Setup event, followed by a successful Configure event.

Let’s repeat the Chef run for the Configure event. To repeat the last run, use opsworks-agent-cli run:

[root@nodejs-server1 ~]# opsworks-agent-cli run
[2015-12-16 13:44:55] INFO [opsworks-agent(26261)]: About to re-run 'configure' from 2015-12-16T13:40:56

[2015-12-16 13:45:01] INFO [opsworks-agent(26261)]: Finished Chef run with exitcode 0

Because the agent CLI can only repeat Chef runs, it doesn’t allow me to execute arbitrary recipes. I can do that in the OpsWorks console with the Run command. For demo purposes, I’ll use a custom cookbook named explore-opsworks-data to trigger a Chef run so I can then execute a recipe during the run.

The Chef run failed because I tried to execute a recipe that doesn’t exist. Let’s create and run the recipe and do it in a way that opens up a Pry session.

[root@nodejs-server1 ~]# mkdir -p /var/chef/cookbooks/explore-opsworks-data/recipes
[root@nodejs-server1 ~]# echo 'require "pry"; binding.pry' > /var/chef/cookbooks/explore-opsworks-data/recipes/default.rb
[root@nodejs-server1 ~]# opsworks-agent-cli run

[2015-12-16T13:55:32+00:00] INFO: Storing updated cookbooks/explore-opsworks-data/recipes/default.rb in the cache.
From: /var/chef/runs/35e8a98a-c81e-46a9-84e3-1bbd105f07dd/local-mode-cache/cache/cookbooks/explore-opsworks-data/recipes/default.rb @ line 1 Chef::Mixin::FromFile#from_file:
=> 1: require "pry"; binding.pry

That doesn’t look very good. In fact, the output appears truncated. That’s because I’m now using an interactive shell, Pry, right in the middle of the Chef run. But, I can now use Pry to run arbitrary Ruby code within the recipe I created. I’ll try searching on the data bags for the stack, layer, and instance.

The aws_opsworks_stack data bag contains details about the stack, like the region and the custom cookbook source, as shown in the following example:

search(:aws_opsworks_stack)
=> [{"data_bag_item('aws_opsworks_stack', '8bd5b1e5-6f45-4d3d-9eb1-5cdaecaf77b8')"=>
{"arn"=>"arn:aws:opsworks:us-west-2:153700967203:stack/8bd5b1e5-6f45-4d3d-9eb1-5cdaecaf77b8/",
"custom_cookbooks_source"=>{"type"=>"archive", "url"=>"https://s3.amazonaws.com/opsworks-demo-assets/opsworks-linux-demo-cookbooks-nodejs.tar.gz", "username"=>nil, "password"=>nil, "ssh_key"=>nil, "revision"=>nil},
"name"=>"My Sample Stack (Linux)",

"data_bag"=>"aws_opsworks_stack"}}]

The aws_opsworks_layer data bag contains details about layers, like the layer name and Amazon Elastic Block Store (Amazon EBS) volume configurations:

search(:aws_opsworks_layer)
=> [{"data_bag_item('aws_opsworks_layer', 'nodejs-server')"=>
{"layer_id"=>"a8127c0d-749a-4192-aad7-8e512c8942b4", "name"=>"Node.js App Server", "packages"=>[], "shortname"=>"nodejs-server", "type"=>"custom", "volume_configurations"=>[], "id"=>"nodejs-server", "chef_type"=>"data_bag_item", "data_bag"=>"aws_opsworks_layer"}}]

The aws_opsworks_instance data bag contains details about instances, like the operating system and IP addresses:

search(:aws_opsworks_instance)
=> [{"data_bag_item('aws_opsworks_instance', 'nodejs-server1')"=>
{"ami_id"=>"ami-d93622b8",
"architecture"=>"x86_64",

"id"=>"nodejs-server1",
"chef_type"=>"data_bag_item",
"data_bag"=>"aws_opsworks_instance"}}]

Now I’ll access a data bag directly. As the following example shows, the data I get this way is identical to the data the search command returns:

data_bag("aws_opsworks_stack")
=> ["8bd5b1e5-6f45-4d3d-9eb1-5cdaecaf77b8"]
data_bag_item("aws_opsworks_stack", "8bd5b1e5-6f45-4d3d-9eb1-5cdaecaf77b8")
=> {"data_bag_item('aws_opsworks_stack', '8bd5b1e5-6f45-4d3d-9eb1-5cdaecaf77b8')"=>
{"arn"=>"arn:aws:opsworks:us-west-2:153700967203:stack/8bd5b1e5-6f45-4d3d-9eb1-5cdaecaf77b8/",
"custom_cookbooks_source"=>{"type"=>"archive", "url"=>"https://s3.amazonaws.com/opsworks-demo-assets/opsworks-linux-demo-cookbooks-nodejs.tar.gz", "username"=>nil, "password"=>nil, "ssh_key"=>nil, "revision"=>nil},
"name"=>"My Sample Stack (Linux)",

"data_bag"=>"aws_opsworks_stack"}}

As a practical example of how I would use search in one of my recipes, I’ll look up the current instance’s root device type and layer ID:

myself = search(:aws_opsworks_instance, "self:true").first

Chef::Log.info "My root device type is #{myself['root_device_type']}"
[2015-12-16T18:19:55+00:00] INFO: My root device type is ebs

Chef::Log.info "I am a member of layer #{myself['layer_ids'].first}"
[2015-12-16T18:20:17+00:00] INFO: I am a member of layer a8127c0d-749a-4192-aad7-8e512c8942b4
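Conceptually, the self:true filter just selects the data bag item describing the instance the run is executing on; OpsWorks marks that item with a self flag. The idea in plain terms (the sample items below are made up):

```python
# Made-up aws_opsworks_instance items; on a real instance, OpsWorks sets
# "self" to true only on the item for the current instance.
instances = [
    {"id": "nodejs-server1", "self": True, "root_device_type": "ebs",
     "layer_ids": ["a8127c0d-749a-4192-aad7-8e512c8942b4"]},
    {"id": "nodejs-server2", "self": False, "root_device_type": "ebs",
     "layer_ids": ["a8127c0d-749a-4192-aad7-8e512c8942b4"]},
]

# Equivalent of: search(:aws_opsworks_instance, "self:true").first
myself = next(i for i in instances if i["self"])
print(myself["root_device_type"])
print(myself["layer_ids"][0])
```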

And just to make it clear that this shell isn’t just about Chef, but about Ruby code in general, here’s a Ruby snippet that would list all files and directories below /tmp, without using Chef:

Dir.glob("/tmp/*")
=> ["/tmp/npm-1967-e4f411bc", "/tmp/hsperfdata_root"]

After I’m done exploring, I can leave the shell by typing exit or by pressing Ctrl+D.

Summary

By using Pry in the middle of a Chef run, you can inspect the data that’s available during the run. If you’re troubleshooting a failed run by making a change on your workstation, updating cookbooks on your instance, and triggering another deployment, using this approach can save you a significant amount of time.

There’s no need to limit yourself to a single Pry session. If there are more areas in your code you need to explore, just put binding.pry in the appropriate place in your cookbook. Keep in mind, though, that this is a debugging aid you don’t want to keep permanently in your recipe, so don’t put this kind of change under version control.

Quickly Explore the Chef Environment in AWS OpsWorks

Post Syndicated from Daniel Huesch original http://blogs.aws.amazon.com/application-management/post/Tx2G9WMG709HG3L/Quickly-Explore-the-Chef-Environment-in-AWS-OpsWorks

AWS OpsWorks recently launched support for Chef 12 Linux. This release changes the way that information about the stacks, layers, and instances provided by OpsWorks is made available during a Chef run. In this post, I show how to interactively explore this information using the OpsWorks agent command line interface (CLI) and Pry, a shell for Ruby. Our documentation shows you what’s available, this post shows you how to explore that data interactively.

OpsWorks manages EC2 or on-premises instances by triggering Chef runs. Before running your Chef recipes, OpsWorks prepares an environment. This environment includes a number of data bags that provide information about your stack, instances, and other resources in your stack. You can use data bags to write cookbooks that adapt to changes in your infrastructure.

When an instance has finished its setup or when it leaves the online state, OpsWorks triggers a Configure event. You can register your own custom recipes to run during Configure events, and use a custom recipe as a light-weight service discovery mechanism. For example, you could use custom recipes to grant database access to an app server after it’s started, or revoke access after it’s stopped, or discover the IP address of the database server within your stack.
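To sketch that idea, here’s a minimal Ruby example of what a Configure-event recipe might compute. The instance data is stubbed with hashes shaped like the aws_opsworks_instance data bag (the hostnames and IP addresses are made up); in a real recipe, the list would come from a search call like the ones shown later in this post.

```ruby
# Simulated result of: search(:aws_opsworks_instance, "layer_ids:<app-layer-id>")
# Each hash mirrors fields of the aws_opsworks_instance data bag (values are hypothetical).
instances = [
  { "hostname" => "app1", "private_ip" => "10.0.0.11", "status" => "online" },
  { "hostname" => "app2", "private_ip" => "10.0.0.12", "status" => "online" },
  { "hostname" => "app3", "private_ip" => "10.0.0.13", "status" => "stopping" },
]

# During a Configure event, keep only instances that are still online,
# then render their addresses into an upstream list for a load balancer.
upstreams = instances
  .select { |i| i["status"] == "online" }
  .map    { |i| "server #{i['private_ip']}:80;" }

puts upstreams
```

In a real cookbook, a template resource would write `upstreams` into the load balancer’s configuration, so the fleet reconfigures itself every time an instance comes online or goes offline.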

Typically, you access data about stacks, layers, and instances through Chef search. For earlier supported versions of Chef on Linux, this data was made available as attributes. In Chef 12 Linux, the data is available in data bags.

To access this data, I’m going to use only tools that are already present on OpsWorks instances: the OpsWorks agent CLI and Pry. Here’s the elevator pitch for Pry, taken from the Pry website:

Pry is a powerful alternative to the standard IRB shell for Ruby. It features syntax highlighting, a flexible plugin architecture, runtime invocation and source and documentation browsing.

Because Pry is already present on OpsWorks instances, there’s no need to install it.

I execute all terminal commands shown in the rest of this post as the root user.

How Do You Use Pry with OpsWorks?

First, let’s take a look at the OpsWorks agent CLI. The agent CLI lets you explore and repeat Chef runs on an instance.

To see a list of completed runs, use opsworks-agent-cli list:

[root@instance ~]# opsworks-agent-cli list
2015-12-16T13:37:2 setup
2015-12-16T13:40:56 configure

For an instance that has just finished booting, you should see a successful Setup event, followed by a successful Configure event.

Let’s repeat the Chef run for the Configure event. To repeat the last run, use opsworks-agent-cli run:

[root@instance ~]# opsworks-agent-cli run
[2015-12-16 13:44:55] INFO [opsworks-agent(26261)]: About to re-run 'configure' from 2015-12-16T13:40:56

[2015-12-16 13:45:01] INFO [opsworks-agent(26261)]: Finished Chef run with exitcode 0

Because the agent CLI can only repeat Chef runs, it doesn’t let me execute arbitrary recipes. I can do that in the OpsWorks console with the Run command instead. For demo purposes, I’ll use a custom cookbook named explore-opsworks-data to trigger a Chef run that executes one of its recipes.

That Chef run fails, because the recipe I tried to execute doesn’t exist yet. Let’s create the recipe and run it, and do it in a way that opens up a Pry session.

[root@instance ~]# mkdir -p /var/chef/cookbooks/explore-opsworks-data/recipes
[root@instance ~]# echo 'require "pry"; binding.pry' > /var/chef/cookbooks/explore-opsworks-data/recipes/default.rb
[root@instance ~]# opsworks-agent-cli run

[2015-12-16T13:55:32+00:00] INFO: Storing updated cookbooks/explore-opsworks-data/recipes/default.rb in the cache.
From: /var/chef/runs/35e8a98a-c81e-46a9-84e3-1bbd105f07dd/local-mode-cache/cache/cookbooks/explore-opsworks-data/recipes/default.rb @ line 1 Chef::Mixin::FromFile#from_file:
=> 1: require "pry"; binding.pry

That doesn’t look very good; in fact, the output appears truncated. That’s because the run is paused and I’m now using an interactive shell, Pry, right in the middle of the Chef run. I can now use Pry to run arbitrary Ruby code within the recipe I created. I’ll try searching the data bags for the stack, layer, and instance.

The aws_opsworks_stack data bag contains details about the stack, like the region and the custom cookbook source, as shown in the following example:

search(:aws_opsworks_stack)
=> [{"data_bag_item('aws_opsworks_stack', '8bd5b1e5-6f45-4d3d-9eb1-5cdaecaf77b8')"=>
{"arn"=>"arn:aws:opsworks:us-west-2:153700967203:stack/8bd5b1e5-6f45-4d3d-9eb1-5cdaecaf77b8/",
"custom_cookbooks_source"=>{"type"=>"archive", "url"=>"https://s3.amazonaws.com/opsworks-demo-assets/opsworks-linux-demo-cookbooks-nodejs.tar.gz", "username"=>nil, "password"=>nil, "ssh_key"=>nil, "revision"=>nil},
"name"=>"My Sample Stack (Linux)",

"data_bag"=>"aws_opsworks_stack"}}]

The aws_opsworks_layer data bag contains details about layers, like the layer name and Amazon Elastic Block Store (Amazon EBS) volume configurations:

search(:aws_opsworks_layer)
=> [{"data_bag_item('aws_opsworks_layer', 'nodejs-server')"=>
{"layer_id"=>"a8127c0d-749a-4192-aad7-8e512c8942b4", "name"=>"Node.js App Server", "packages"=>[], "shortname"=>"nodejs-server", "type"=>"custom", "volume_configurations"=>[], "id"=>"nodejs-server", "chef_type"=>"data_bag_item", "data_bag"=>"aws_opsworks_layer"}}]

The aws_opsworks_instance data bag contains details about instances, like the operating system and IP addresses:

search(:aws_opsworks_instance)
=> [{"data_bag_item('aws_opsworks_instance', 'nodejs-server1')"=>
{"ami_id"=>"ami-d93622b8",
"architecture"=>"x86_64",

"id"=>"nodejs-server1",
"chef_type"=>"data_bag_item",
"data_bag"=>"aws_opsworks_instance"}}]

Now I’ll access a data bag directly. As the following example shows, the data I get this way is identical to the data the search command returns:

data_bag("aws_opsworks_stack")
=> ["8bd5b1e5-6f45-4d3d-9eb1-5cdaecaf77b8"]
data_bag_item("aws_opsworks_stack", "8bd5b1e5-6f45-4d3d-9eb1-5cdaecaf77b8")
=> {"data_bag_item('aws_opsworks_stack', '8bd5b1e5-6f45-4d3d-9eb1-5cdaecaf77b8')"=>
{"arn"=>"arn:aws:opsworks:us-west-2:153700967203:stack/8bd5b1e5-6f45-4d3d-9eb1-5cdaecaf77b8/",
"custom_cookbooks_source"=>{"type"=>"archive", "url"=>"https://s3.amazonaws.com/opsworks-demo-assets/opsworks-linux-demo-cookbooks-nodejs.tar.gz", "username"=>nil, "password"=>nil, "ssh_key"=>nil, "revision"=>nil},
"name"=>"My Sample Stack (Linux)",

"data_bag"=>"aws_opsworks_stack"}}

As a practical example of how I would use search in one of my recipes, I’ll look up the current instance’s root device type and layer ID:

myself = search(:aws_opsworks_instance, "self:true").first

Chef::Log.info "My root device type is #{myself['root_device_type']}"
[2015-12-16T18:19:55+00:00] INFO: My root device type is ebs

Chef::Log.info "I am a member of layer #{myself['layer_ids'].first}"
[2015-12-16T18:20:17+00:00] INFO: I am a member of layer a8127c0d-749a-4192-aad7-8e512c8942b4

And just to make it clear that this shell isn’t just about Chef, but about Ruby code in general, here’s a Ruby snippet that would list all files and directories below /tmp, without using Chef:

Dir.glob("/tmp/*")
=> ["/tmp/npm-1967-e4f411bc", "/tmp/hsperfdata_root"]

After I’m done exploring, I can leave the shell by typing exit or by pressing Ctrl+D.

Summary

By using Pry in the middle of a Chef run, you can inspect the data that’s available during the run. If you’re troubleshooting a failed run by making a change on your workstation, updating cookbooks on your instance, and triggering another deployment, using this approach can save you a significant amount of time.

There’s no need to limit yourself to a single Pry session. If there are more areas in your code you need to explore, just put binding.pry in the appropriate place in your cookbook. Keep in mind, though, that you don’t want to permanently include this in your recipe, so don’t put this kind of a change under version control.

Using Custom JSON on AWS OpsWorks Layers

Post Syndicated from Daniel Huesch original http://blogs.aws.amazon.com/application-management/post/Tx2064HZ903DH8O/Using-Custom-JSON-on-AWS-OpsWorks-Layers

Custom JSON, which has always been available on AWS OpsWorks stacks and deployments, is now also available as a property on layers in stacks using Chef versions 11.10, 12, and 12.2.

In this post I show how you can use custom JSON to adapt a single Chef cookbook to support different use cases on individual layers. To demonstrate, I use the example of a MongoDB setup with multiple shards.

In OpsWorks, each instance belongs to one or more layers, which in turn make up a stack. You use layers to specify details about which Chef cookbooks are run when the instances are set up and configured, among other things. When your stacks have instances that serve different purposes, you use different cookbooks for each.

Sometimes, however, there are only small differences between the layers and they don’t justify using separate cookbooks. For example, when you have a large MongoDB installation with multiple shards, you would have a layer per shard, as shown in the following figure, but your cookbooks wouldn’t necessarily differ.

custom-json-per-layer-1.png

Let’s assume I’m using the community cookbook for MongoDB. I would configure this cookbook using attributes; the attribute for setting the shard name would be node[:mongodb][:shard_name]. But let’s say that I want to set a certain attribute for any deployment to any instance in a given layer. I would use custom JSON to set that attribute.

When declared on a stack, custom JSON always applies to all instances, no matter which layer they’re in. Custom JSON declared on a deployment is helpful for one-off deployments with special settings, but that custom JSON doesn’t stick to the instances you deploy to, so a subsequent deployment doesn’t know about custom JSON you might have specified in an earlier deployment.

Custom JSON declared on the layer applies to each instance that belongs to that layer. Like custom JSON declared on the stack, it’s permanently stored and applied to all subsequent deployments. So you just need to edit each layer and set the right shard, as shown in the following figure:

custom-json-per-layer-2.png

During a Chef run, OpsWorks makes custom JSON contents available as attributes. That way the settings are available in the MongoDB cookbook and configure the MongoDB server accordingly. For details about using custom JSON content as an attribute, see our documentation.
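As a minimal sketch of that mechanism, here’s how the shard setting would surface in a recipe, assuming layer custom JSON of `{"mongodb": {"shard_name": "shard1"}}` (a hypothetical value). The node object is simulated with a plain hash so the snippet runs outside Chef; in a real recipe you’d simply read node[:mongodb][:shard_name].

```ruby
require "json"

# Custom JSON as you would enter it on the layer (hypothetical value).
layer_custom_json = '{ "mongodb": { "shard_name": "shard1" } }'

# OpsWorks merges custom JSON into the node object before the Chef run
# starts, so a recipe reads it like any other attribute. Simulated here
# by parsing the JSON into a plain Hash:
node = JSON.parse(layer_custom_json)

shard = node["mongodb"]["shard_name"]
puts "Configuring MongoDB for #{shard}"
```

Every instance in the layer would see its own layer’s value, which is exactly what a per-shard configuration needs.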

Custom JSON declared on the deployment overrides custom JSON declared on the stack. Custom JSON declared on the layer sits in between those two. So you can use it on the layer to override stack settings, and on the deployment to override stack or layer settings.
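The precedence order can be sketched as a deep merge applied stack first, then layer, then deployment. This is an illustration only (the values are hypothetical, and OpsWorks performs its own merge before the Chef run), but it shows which source wins for each key:

```ruby
# Merge two hashes recursively; on a conflict, the override wins.
def deep_merge(base, override)
  base.merge(override) do |_key, old, new|
    old.is_a?(Hash) && new.is_a?(Hash) ? deep_merge(old, new) : new
  end
end

stack_json      = { "mongodb" => { "shard_name" => "default", "port" => 27017 } }
layer_json      = { "mongodb" => { "shard_name" => "shard1" } }
deployment_json = { "mongodb" => { "port" => 27018 } }

# Apply in order of increasing precedence: stack, then layer, then deployment.
effective = deep_merge(deep_merge(stack_json, layer_json), deployment_json)

puts effective
# shard_name comes from the layer, port from the deployment
```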

Using custom JSON gives you a way to tweak a setting for all instances in a given layer without having to affect the entire stack, and without having to provide custom JSON for every deployment.


AWS OpsWorks Now Supports Chef 12 for Linux

Post Syndicated from Daniel Huesch original http://blogs.aws.amazon.com/application-management/post/Tx1T5HNA1TSU8NH/AWS-OpsWorks-Now-Supports-Chef-12-for-Linux

Update: In the meantime our friends at Chef published a post that walks you through deploying a Django app on AWS OpsWorks using Chef 12. Go check it out!

In addition to providing Chef 12 support for Windows, AWS OpsWorks (OpsWorks) now supports Chef 12 for Linux operating systems. This release benefits users who want to take advantage of the large selection of community cookbooks or want to build and customize their own cookbooks.

You can use the latest release of Chef 12 on your Linux-based stacks, which currently run Chef Client 12.5.1. (If you’re concerned about future Chef Client upgrades, be assured that new versions of the Chef 12.x client will be made available shortly after their public release.) OpsWorks now also prevents cookbook namespace conflicts by using two separate Chef runs (OpsWorks’s Chef run and yours run independently).

Use Chef Supermarket Cookbooks

Because this release focuses on giving you full control and flexibility over your own cookbooks, built-in layers and cookbooks (PHP, Rails, Node.js, MySQL, and so on) are no longer available for Chef 12. Instead, Chef 12 users can leverage up-to-date community cookbooks in OpsWorks to build custom layers. A Chef 12 Node.js sample stack (on Windows and Linux) is now available in the OpsWorks console. We’ll provide additional examples in the future.

"With the availability of the Chef 12 Linux client, AWS OpsWorks customers can now leverage shared Chef Supermarket cookbooks for both Windows and Linux workloads. This means our joint customers can maximize the full potential of the vibrant open source Chef Community across the entire stack."

– Ken Cheney, Vice President of Business Development, Chef

Chef 11.10 and earlier versions for Linux will continue to support built-in layers. The built-in cookbooks will continue to be available at https://github.com/aws/opsworks-cookbooks/tree/release-chef-11.10.

Beginning in January 2016, you will no longer be able to create Chef 11.4 stacks using the OpsWorks console. Existing Chef 11.4 stacks will continue to operate normally, and you will continue to be able to create stacks with Chef 11.4 by using the API.

Use Chef Search

With Chef 12 Linux, you can use Chef search, which is the native Chef way to obtain information about stacks, layers, instances, and stack resources, such as Elastic Load Balancing load balancers and RDS DB instances. The following examples show how to use Chef search to get information and to perform common tasks. A complete reference of available search indices is available in our documentation.

Use Chef search to retrieve the stack’s state:

search(:node, "name:web1")
search(:node, "name:web*")

Map OpsWorks layers as Chef roles:

appserver = search(:node, "role:my-app").first
Chef::Log.info("Private IP: #{appserver[:private_ip]}")

Use Chef search to retrieve hostnames, IP addresses, instance types, Amazon Machine Images (AMIs), Availability Zones (AZs), and more:

search(:aws_opsworks_app, "name:myapp")
search(:aws_opsworks_app, "deploy:true")
search(:aws_opsworks_layer, "name:my_layer*")
search(:aws_opsworks_rds_db_instance)
search(:aws_opsworks_volume)
search(:aws_opsworks_ecs_cluster)
search(:aws_opsworks_elastic_load_balancer)
search(:aws_opsworks_user)

Use Chef search for ad hoc resource discovery, for example, to find the database connection information for your applications or to discover all available app server instances when configuring a load balancer.
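For example, a recipe could assemble a database connection string from the RDS data bag. The search result is stubbed here with a hash using the aws_opsworks_rds_db_instance field names (the values are made up) so the snippet runs outside Chef; in a real recipe, the list would come from search(:aws_opsworks_rds_db_instance).

```ruby
# Simulated result of: search(:aws_opsworks_rds_db_instance)
# Field names follow the aws_opsworks_rds_db_instance data bag; values are hypothetical.
rds_instances = [
  { "rds_db_instance_arn" => "arn:aws:rds:us-east-1:123456789012:db:mydb",
    "address"             => "mydb.example.us-east-1.rds.amazonaws.com",
    "engine"              => "mysql",
    "db_user"             => "app",
    "db_password"         => "secret" },
]

# Build a connection URL from the registered RDS instance so an app's
# configuration file can be rendered without hard-coding the endpoint.
db = rds_instances.first
connection_url = "#{db['engine']}://#{db['db_user']}@#{db['address']}/"
puts connection_url
```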

Explore a Chef 12 Linux or Chef 12.2 Windows Stack

To explore a Chef 12 Linux or Chef 12.2 Windows stack, simply select the “Sample stack” option in the OpsWorks console:

To create a Chef 12 stack based on your Chef cookbooks, choose Linux as the Default operating system:

Use any Chef 12 open source community cookbook from any source, or create your own cookbooks. OpsWorks’s built-in operational tools continue to empower you to manage your day-to-day operations.


Now Available: Videos and Slide Decks from AWS OpsWorks Breakout Sessions at re:Invent 2015

Post Syndicated from Daniel Huesch original http://blogs.aws.amazon.com/application-management/post/Tx21OKOEQC6TDM9/Now-Available-Videos-and-Slide-Decks-from-AWS-OpsWorks-Breakout-Sessions-at-re-I

Want to review an AWS OpsWorks session you attended at this year’s re:Invent or experience OpsWorks for the first time? Check out these re:Invent videos and slide decks.

DVO301 – AWS OpsWorks Under the Hood

This session is a deep dive into the OpsWorks lifecycle event framework and Chef 12 integration. It also explains OpsWorks support for Windows and Amazon EC2 Container Service integration.

Video: https://www.youtube.com/watch?v=WxSu015Zgak

Slide deck: http://www.slideshare.net/AmazonWebServices/dvo301-aws-opsworks-under-the-hood

DVO310 – Benefit from DevOps When Moving to AWS for Windows

This session explains how to push and operate code on EC2 Windows instances using services like AWS Elastic Beanstalk, OpsWorks, and Chef.

Video: https://www.youtube.com/watch?v=S0VjKvJHi6s

Slide deck: http://www.slideshare.net/AmazonWebServices/dvo310-benefit-from-devops-when-moving-to-aws-for-windows

We enjoyed meeting with you and collecting so much valuable feedback. Please keep it coming by posting on the forum!