Tag Archives: devops

Continuous Delivery of Nested AWS CloudFormation Stacks Using AWS CodePipeline

Post Syndicated from Prakash Palanisamy original https://aws.amazon.com/blogs/devops/continuous-delivery-of-nested-aws-cloudformation-stacks-using-aws-codepipeline/

In CodePipeline Update – Build Continuous Delivery Workflows for CloudFormation Stacks, Jeff Barr discusses infrastructure as code and how to use AWS CodePipeline for continuous delivery. In this blog post, I discuss the continuous delivery of nested CloudFormation stacks using AWS CodePipeline, with AWS CodeCommit as the source repository and AWS CodeBuild as a build and testing tool. I deploy the stacks using CloudFormation change sets following a manual approval process.

Here’s how to do it:

In AWS CodePipeline, create a pipeline with four stages:

  • Source (AWS CodeCommit)
  • Build and Test (AWS CodeBuild and AWS CloudFormation)
  • Staging (AWS CloudFormation and manual approval)
  • Production (AWS CloudFormation and manual approval)

Pipeline stages, the actions in each stage, and transitions between stages are shown in the following diagram.

CloudFormation templates, test scripts, and the build specification are stored in AWS CodeCommit repositories. These files are used in the Source stage of the pipeline in AWS CodePipeline.

The AWS::CloudFormation::Stack resource type is used to create child stacks from a master stack. The stack resource requires the child stack templates to be stored in an S3 bucket; the location of each template file is provided as a URL in the properties section of the resource definition.

The following template creates three child stacks:

  • Security (IAM, security groups).
  • Database (an RDS instance).
  • Web stacks (EC2 instances in an Auto Scaling group, elastic load balancer).

Description: Master stack which creates all required nested stacks

Parameters:
  TemplatePath:
    Type: String
    Description: S3Bucket Path where the templates are stored
  VPCID:
    Type: "AWS::EC2::VPC::Id"
    Description: Enter a valid VPC Id
  PrivateSubnet1:
    Type: "AWS::EC2::Subnet::Id"
    Description: Enter a valid SubnetId of private subnet in AZ1
  PrivateSubnet2:
    Type: "AWS::EC2::Subnet::Id"
    Description: Enter a valid SubnetId of private subnet in AZ2
  PublicSubnet1:
    Type: "AWS::EC2::Subnet::Id"
    Description: Enter a valid SubnetId of public subnet in AZ1
  PublicSubnet2:
    Type: "AWS::EC2::Subnet::Id"
    Description: Enter a valid SubnetId of public subnet in AZ2
  S3BucketName:
    Type: String
    Description: Name of the S3 bucket to allow access to the Web Server IAM Role.
  KeyPair:
    Type: "AWS::EC2::KeyPair::KeyName"
    Description: Enter a valid KeyPair Name
  AMIId:
    Type: "AWS::EC2::Image::Id"
    Description: Enter a valid AMI ID to launch the instance
  WebInstanceType:
    Type: String
    Description: Enter one of the possible instance type for web server
    AllowedValues:
      - t2.large
      - m4.large
      - m4.xlarge
      - c4.large
  WebMinSize:
    Type: String
    Description: Minimum number of instances in auto scaling group
  WebMaxSize:
    Type: String
    Description: Maximum number of instances in auto scaling group
  DBSubnetGroup:
    Type: String
    Description: Enter a valid DB Subnet Group
  DBUsername:
    Type: String
    Description: Enter a valid Database master username
    MinLength: 1
    MaxLength: 16
    AllowedPattern: "[a-zA-Z][a-zA-Z0-9]*"
  DBPassword:
    Type: String
    Description: Enter a valid Database master password
    NoEcho: true
    MinLength: 1
    MaxLength: 41
    AllowedPattern: "[a-zA-Z0-9]*"
  DBInstanceType:
    Type: String
    Description: Enter one of the possible instance type for database
    AllowedValues:
      - db.t2.micro
      - db.t2.small
      - db.t2.medium
      - db.t2.large
  Environment:
    Type: String
    Description: Select the appropriate environment
    AllowedValues:
      - dev
      - test
      - uat
      - prod

Resources:
  SecurityStack:
    Type: "AWS::CloudFormation::Stack"
    Properties:
      TemplateURL:
        Fn::Sub: "https://s3.amazonaws.com/${TemplatePath}/security-stack.yml"
      Parameters:
        S3BucketName:
          Ref: S3BucketName
        VPCID:
          Ref: VPCID
        Environment:
          Ref: Environment
      Tags:
        - Key: Name
          Value: SecurityStack

  DatabaseStack:
    Type: "AWS::CloudFormation::Stack"
    Properties:
      TemplateURL:
        Fn::Sub: "https://s3.amazonaws.com/${TemplatePath}/database-stack.yml"
      Parameters:
        DBSubnetGroup:
          Ref: DBSubnetGroup
        DBUsername:
          Ref: DBUsername
        DBPassword:
          Ref: DBPassword
        DBServerSecurityGroup:
          Fn::GetAtt: SecurityStack.Outputs.DBServerSG
        DBInstanceType:
          Ref: DBInstanceType
        Environment:
          Ref: Environment
      Tags:
        - Key: Name
          Value: DatabaseStack

  ServerStack:
    Type: "AWS::CloudFormation::Stack"
    Properties:
      TemplateURL:
        Fn::Sub: "https://s3.amazonaws.com/${TemplatePath}/server-stack.yml"
      Parameters:
        VPCID:
          Ref: VPCID
        PrivateSubnet1:
          Ref: PrivateSubnet1
        PrivateSubnet2:
          Ref: PrivateSubnet2
        PublicSubnet1:
          Ref: PublicSubnet1
        PublicSubnet2:
          Ref: PublicSubnet2
        KeyPair:
          Ref: KeyPair
        AMIId:
          Ref: AMIId
        WebSG:
          Fn::GetAtt: SecurityStack.Outputs.WebSG
        ELBSG:
          Fn::GetAtt: SecurityStack.Outputs.ELBSG
        DBClientSG:
          Fn::GetAtt: SecurityStack.Outputs.DBClientSG
        WebIAMProfile:
          Fn::GetAtt: SecurityStack.Outputs.WebIAMProfile
        WebInstanceType:
          Ref: WebInstanceType
        WebMinSize:
          Ref: WebMinSize
        WebMaxSize:
          Ref: WebMaxSize
        Environment:
          Ref: Environment
      Tags:
        - Key: Name
          Value: ServerStack

Outputs:
  WebELBURL:
    Description: "URL endpoint of web ELB"
    Value:
      Fn::GetAtt: ServerStack.Outputs.WebELBURL

During the Validate stage, AWS CodeBuild checks for changes to the AWS CodeCommit source repositories. It uses the ValidateTemplate API to validate the CloudFormation template and copies the child templates and configuration files to the appropriate location in the S3 bucket.

The following AWS CodeBuild build specification validates the CloudFormation templates listed under the TEMPLATE_FILES environment variable and copies them to the S3 bucket specified in the TEMPLATE_BUCKET environment variable of the AWS CodeBuild project. Optionally, you can use the TEMPLATE_PREFIX environment variable to specify a path inside the bucket. The build also updates the configuration files to reference the S3 location of the child template files; that location is then passed as a parameter to the master stack.

version: 0.1

environment_variables:
  plaintext:
    CHILD_TEMPLATES: |
      security-stack.yml
      server-stack.yml
      database-stack.yml
    TEMPLATE_FILES: |
      master-stack.yml
      security-stack.yml
      server-stack.yml
      database-stack.yml
    CONFIG_FILES: |
      config-prod.json
      config-test.json
      config-uat.json

phases:
  install:
    commands:
      - npm install jsonlint -g
  pre_build:
    commands:
      - echo "Validating CFN templates"
      - |
        for cfn_template in $TEMPLATE_FILES; do
          echo "Validating CloudFormation template file $cfn_template"
          aws cloudformation validate-template --template-body file://$cfn_template
        done
      - |
        for conf in $CONFIG_FILES; do
          echo "Validating CFN parameters config file $conf"
          jsonlint -q $conf
        done
  build:
    commands:
      - echo "Copying child stack templates to S3"
      - |
        for child_template in $CHILD_TEMPLATES; do
          if [ "X$TEMPLATE_PREFIX" = "X" ]; then
            aws s3 cp "$child_template" "s3://$TEMPLATE_BUCKET/$child_template"
          else
            aws s3 cp "$child_template" "s3://$TEMPLATE_BUCKET/$TEMPLATE_PREFIX/$child_template"
          fi
        done
      - echo "Updating template configurtion files to use the appropriate values"
      - |
        for conf in $CONFIG_FILES; do
          if [ "X$TEMPLATE_PREFIX" = "X" ]; then
            echo "Replacing \"TEMPLATE_PATH_PLACEHOLDER\" for \"$TEMPLATE_BUCKET\" in $conf"
            sed -i -e "s/TEMPLATE_PATH_PLACEHOLDER/$TEMPLATE_BUCKET/" $conf
          else
            echo "Replacing \"TEMPLATE_PATH_PLACEHOLDER\" for \"$TEMPLATE_BUCKET/$TEMPLATE_PREFIX\" in $conf"
            sed -i -e "s/TEMPLATE_PATH_PLACEHOLDER/$TEMPLATE_BUCKET\/$TEMPLATE_PREFIX/" $conf
          fi
        done

artifacts:
  files:
    - master-stack.yml
    - config-*.json

After the template files are copied to S3, CloudFormation creates a test stack and triggers AWS CodeBuild as a test action.

Then the AWS CodeBuild build specification executes validate-env.py, the Python script used to determine whether resources created using the nested CloudFormation stacks conform to the specifications provided in the CONFIG_FILE.

version: 0.1

environment_variables:
  plaintext:
    CONFIG_FILE: env-details.yml

phases:
  install:
    commands:
      - pip install --upgrade pip
      - pip install boto3 --upgrade
      - pip install pyyaml --upgrade
      - pip install yamllint --upgrade
  pre_build:
    commands:
      - echo "Validating config file $CONFIG_FILE"
      - yamllint $CONFIG_FILE
  build:
    commands:
      - echo "Validating resources..."
      - python validate-env.py
      - exit $?
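
The validate-env.py script itself is not reproduced in this post, but a minimal sketch of such a check might look like the following. The env-details.yml layout and the specific checks are assumptions for illustration only; the real script in the codepipeline-nested-cfn repository is more complete.

import sys
import boto3
import yaml

def main():
    # Load the expected environment description (assumed layout).
    with open("env-details.yml") as f:
        expected = yaml.safe_load(f)

    ec2 = boto3.client("ec2")
    failures = []

    # Assumed structure: a list of security groups and the ports they should allow.
    for sg in expected.get("security_groups", []):
        resp = ec2.describe_security_groups(
            Filters=[{"Name": "group-name", "Values": [sg["name"]]}]
        )
        if not resp["SecurityGroups"]:
            failures.append("Security group %s not found" % sg["name"])
            continue
        open_ports = {
            perm.get("FromPort")
            for perm in resp["SecurityGroups"][0]["IpPermissions"]
        }
        for port in sg.get("allowed_ports", []):
            if port not in open_ports:
                failures.append("%s does not allow port %s" % (sg["name"], port))

    for failure in failures:
        print("FAILED: %s" % failure)
    sys.exit(1 if failures else 0)

if __name__ == "__main__":
    main()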

Upon successful completion of the test action, CloudFormation deletes the test stack and proceeds to the UAT stage in the pipeline.

During this stage, CloudFormation creates a change set against the UAT stack and then executes the change set. This updates the UAT environment and makes it available for acceptance testing. The process continues to a manual approval action. After the QA team validates the UAT environment and provides an approval, the process moves to the Production stage in the pipeline.

During this stage, CloudFormation creates a change set for the nested production stack and the process continues to a manual approval step. Upon approval (usually by a designated executive), the change set is executed and the production deployment is completed.
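
For reference, the change set actions that CloudFormation performs on behalf of the pipeline are roughly equivalent to the following AWS CLI calls. The stack and change set names are placeholders, and the parameter values that the pipeline supplies from config-uat.json would also need to be provided.

# Create a change set for the UAT master stack
aws cloudformation create-change-set \
    --stack-name uat-master-stack \
    --change-set-name uat-changes \
    --template-body file://master-stack.yml \
    --capabilities CAPABILITY_NAMED_IAM

# After review and approval, execute the change set to update the environment
aws cloudformation execute-change-set \
    --stack-name uat-master-stack \
    --change-set-name uat-changes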
 

Setting up a continuous delivery pipeline

 
I used a CloudFormation template to set up my continuous delivery pipeline. The codepipeline-cfn-codebuild.yml template, available from GitHub, sets up a full-featured pipeline.

When I use the template to create my pipeline, I specify the following:

  • AWS CodeCommit repositories.
  • SNS topics to send approval notifications.
  • S3 bucket name where the artifacts will be stored.

The CFNTemplateRepoName points to the AWS CodeCommit repository where CloudFormation templates, configuration files, and build specification files are stored.
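
As an illustration, creating the pipeline stack from the CLI might look like the following. The parameter names here are examples only; check the codepipeline-cfn-codebuild.yml template for the exact parameter names and the full set of required values.

aws cloudformation create-stack \
    --stack-name nested-cfn-pipeline \
    --template-body file://codepipeline-cfn-codebuild.yml \
    --parameters ParameterKey=CFNTemplateRepoName,ParameterValue=cfn-templates-repo \
                 ParameterKey=ArtifactStoreS3Location,ParameterValue=my-artifact-bucket \
                 ParameterKey=Email,ParameterValue=approver@example.com \
    --capabilities CAPABILITY_NAMED_IAM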

My repo contains the following files:

The continuous delivery pipeline is ready just seconds after clicking Create Stack. After it’s created, the pipeline executes each stage. Upon manual approvals for the UAT and Production stages, the pipeline successfully enables continuous delivery.


 

Implementing a change in nested stack

 
To make changes to a child stack in a nested stack (for example, to update a parameter value or add or change resources), update the master stack. The changes must be made in the appropriate template or configuration files and then checked in to the AWS CodeCommit repository, which triggers the deployment process described earlier.
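
For example, a parameter or resource change in one of the child templates is delivered like any other commit:

git add database-stack.yml
git commit -m "Increase RDS instance size for prod"
git push origin master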

 

Conclusion

 
In this post, I showed how you can use AWS CodePipeline, AWS CloudFormation, AWS CodeBuild, and a manual approval process to create a continuous delivery pipeline for both infrastructure as code and application deployment.

For more information about AWS CodePipeline, see the AWS CodePipeline documentation. You can get started in just a few clicks. All CloudFormation templates, AWS CodeBuild build specification files, and the Python script that performs the validation are available in codepipeline-nested-cfn GitHub repository.


About the author

 
Prakash Palanisamy is a Solutions Architect for Amazon Web Services. When he is not working on Serverless, DevOps or Alexa, he will be solving problems in Project Euler. He also enjoys watching educational documentaries.

How to Create an AMI Builder with AWS CodeBuild and HashiCorp Packer – Part 2

Post Syndicated from Heitor Lessa original https://aws.amazon.com/blogs/devops/how-to-create-an-ami-builder-with-aws-codebuild-and-hashicorp-packer-part-2/

Written by AWS Solutions Architects Jason Barto and Heitor Lessa

 
In Part 1 of this post, we described how AWS CodeBuild, AWS CodeCommit, and HashiCorp Packer can be used to build an Amazon Machine Image (AMI) from the latest version of Amazon Linux. In this post, we show how to use AWS CodePipeline, AWS CloudFormation, and Amazon CloudWatch Events to continuously ship new AMIs. We use Ansible by Red Hat to harden the OS on the AMIs through a well-known set of security controls outlined by the Center for Internet Security in its CIS Amazon Linux Benchmark.

You’ll find the source code for this post in our GitHub repo.

At the end of this post, we will have the following architecture:

Requirements

 
To follow along, you will need Git and a text editor. Make sure Git is configured to work with AWS CodeCommit, as described in Part 1.

Technologies

 
In addition to the services and products used in Part 1 of this post, we also use these AWS services and third-party software:

AWS CloudFormation gives developers and systems administrators an easy way to create and manage a collection of related AWS resources, provisioning and updating them in an orderly and predictable fashion.

Amazon CloudWatch Events enables you to react selectively to events in the cloud and in your applications. Specifically, you can create CloudWatch Events rules that match event patterns, and take actions in response to those patterns.

AWS CodePipeline is a continuous integration and continuous delivery service for fast and reliable application and infrastructure updates. AWS CodePipeline builds, tests, and deploys your code every time there is a code change, based on release process models you define.

Amazon SNS is a fast, flexible, fully managed push notification service that lets you send individual messages or to fan out messages to large numbers of recipients. Amazon SNS makes it simple and cost-effective to send push notifications to mobile device users or email recipients. The service can even send messages to other distributed services.

Ansible is a simple IT automation system that handles configuration management, application deployment, cloud provisioning, ad-hoc task-execution, and multinode orchestration.

Getting Started

 
We use CloudFormation to bootstrap the following infrastructure:

  • AWS CodeCommit repository: Git repository where the AMI builder code is stored.
  • S3 bucket: Build artifact repository used by AWS CodePipeline and AWS CodeBuild.
  • AWS CodeBuild project: Executes the AWS CodeBuild instructions contained in the build specification file.
  • AWS CodePipeline pipeline: Orchestrates the AMI build process, triggered by new changes in the AWS CodeCommit repository.
  • SNS topic: Notifies subscribed email addresses when an AMI build is complete.
  • CloudWatch Events rule: Defines how the AMI builder should send a custom event to notify an SNS topic.

The AMI Builder launch template is available in the following regions: N. Virginia (us-east-1) and Ireland (eu-west-1).

After launching the CloudFormation template linked here, we will have a pipeline in the AWS CodePipeline console. (Failed at this stage simply means we don’t have any data in our newly created AWS CodeCommit Git repository.)

Next, we will clone the newly created AWS CodeCommit repository.

If this is your first time connecting to an AWS CodeCommit repository, please see the instructions in our documentation on Setup steps for HTTPS Connections to AWS CodeCommit Repositories.

To clone the AWS CodeCommit repository (console)

  1. From the AWS Management Console, open the AWS CloudFormation console.
  2. Choose the AMI-Builder-Blogpost stack, and then choose Outputs.
  3. Make a note of the Git repository URL.
  4. Use git to clone the repository.

For example: git clone https://git-codecommit.eu-west-1.amazonaws.com/v1/repos/AMI-Builder_repo

To clone the AWS CodeCommit repository (CLI)

# Retrieve CodeCommit repo URL
git_repo=$(aws cloudformation describe-stacks --query 'Stacks[0].Outputs[?OutputKey==`GitRepository`].OutputValue' --output text --stack-name "AMI-Builder-Blogpost")

# Clone repository locally
git clone ${git_repo}

Bootstrap the Repo with the AMI Builder Structure

 
Now that our infrastructure is ready, download all the files and templates required to build the AMI.

Your local Git repo should have the following structure:

.
├── ami_builder_event.json
├── ansible
├── buildspec.yml
├── cloudformation
├── packer_cis.json

Next, push these changes to AWS CodeCommit, and then let AWS CodePipeline orchestrate the creation of the AMI:

git add .
git commit -m "My first AMI"
git push origin master

AWS CodeBuild Implementation Details

 
While we wait for the AMI to be created, let’s see what’s changed in our AWS CodeBuild buildspec.yml file:

...
phases:
  ...
  build:
    commands:
      ...
      - ./packer build -color=false packer_cis.json | tee build.log
  post_build:
    commands:
      - egrep "${AWS_REGION}\:\sami\-" build.log | cut -d' ' -f2 > ami_id.txt
      # Packer doesn't return non-zero status; we must do that if Packer build failed
      - test -s ami_id.txt || exit 1
      - sed -i.bak "s/<<AMI-ID>>/$(cat ami_id.txt)/g" ami_builder_event.json
      - aws events put-events --entries file://ami_builder_event.json
      ...
artifacts:
  files:
    - ami_builder_event.json
    - build.log
  discard-paths: yes

In the build phase, we capture Packer output into a file named build.log. In the post_build phase, we take the following actions:

  1. Look up the AMI ID created by Packer and save it to a temporary file (ami_id.txt).
  2. Force AWS CodeBuild to fail if no AMI ID was captured (ami_id.txt is empty). This is required because Packer doesn’t return a non-zero exit status if something goes wrong during the AMI creation process, so we have to tell AWS CodeBuild to stop by signaling that an error occurred.
  3. If an AMI ID is found, we update the ami_builder_event.json file and then notify CloudWatch Events that the AMI creation process is complete.
  4. CloudWatch Events publishes a message to an SNS topic. Anyone subscribed to the topic will be notified in email that an AMI has been created.

Lastly, the artifacts section instructs AWS CodeBuild to upload files produced during the build (ami_builder_event.json and build.log) to the S3 bucket specified in the Outputs section of the CloudFormation template. These artifacts can then be used as an input artifact in any later stage in AWS CodePipeline.

For information about customizing the artifacts sequence of the buildspec.yml, see the Build Specification Reference for AWS CodeBuild.

CloudWatch Events Implementation Details

 
CloudWatch Events allows you to extend the AMI builder to not only send email after the AMI has been created, but also to hook up any of the supported targets to react to the AMI builder event. Publishing this event means you can decouple the actions you take after AMI completion from Packer itself and plug in other actions as you see fit.

For more information about targets in CloudWatch Events, see the CloudWatch Events API Reference.

In this case, CloudWatch Events should receive the following event, match it with a rule we created through CloudFormation, and publish a message to SNS so that you can receive an email.

Example CloudWatch custom event

[
        {
            "Source": "com.ami.builder",
            "DetailType": "AmiBuilder",
            "Detail": "{ \"AmiStatus\": \"Created\"}",
            "Resources": [ "ami-12cd5guf" ]
        }
]

CloudWatch Events rule

{
  "detail-type": [
    "AmiBuilder"
  ],
  "source": [
    "com.ami.builder"
  ],
  "detail": {
    "AmiStatus": [
      "Created"
    ]
  }
}

Example SNS message sent in email

{
    "version": "0",
    "id": "f8bdede0-b9d7...",
    "detail-type": "AmiBuilder",
    "source": "com.ami.builder",
    "account": "<<aws_account_number>>",
    "time": "2017-04-28T17:56:40Z",
    "region": "eu-west-1",
    "resources": ["ami-112cd5guf "],
    "detail": {
        "AmiStatus": "Created"
    }
}
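
For reference, the rule shown above and its SNS target can be expressed in CloudFormation roughly as follows. The logical resource names are illustrative; the actual definitions live in the blog post's CloudFormation template, and the SNS topic also needs a policy that allows CloudWatch Events to publish to it.

  AmiBuilderCustomEventRule:
    Type: "AWS::Events::Rule"
    Properties:
      Description: Notify an SNS topic when the AMI builder emits its custom event
      EventPattern:
        source:
          - "com.ami.builder"
        detail-type:
          - "AmiBuilder"
        detail:
          AmiStatus:
            - "Created"
      State: ENABLED
      Targets:
        - Arn: !Ref AmiBuilderNotificationTopic
          Id: "AmiBuilderSnsTarget"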

Packer Implementation Details

 
In addition to the build specification file, there are differences between the current version of the HashiCorp Packer template (packer_cis.json) and the one used in Part 1.

Variables

  "variables": {
    "vpc": "{{env `BUILD_VPC_ID`}}",
    "subnet": "{{env `BUILD_SUBNET_ID`}}",
    "ami_name": "Prod-CIS-Latest-AMZN-{{isotime \"02-Jan-06 03_04_05\"}}"
  },
  • ami_name: Prefixes a name used by Packer to tag resources during the Builders sequence.
  • vpc and subnet: Environment variables defined by the CloudFormation stack parameters.

We no longer assume a default VPC is present and instead use the VPC and subnet specified in the CloudFormation parameters. CloudFormation configures the AWS CodeBuild project to use these values as environment variables. They are made available throughout the build process.

That allows for more flexibility should you need to change which VPC and subnet will be used by Packer to launch temporary resources.
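
In the CloudFormation template, wiring those stack parameters into the AWS CodeBuild project amounts to an environment-variable mapping along these lines (a trimmed sketch; BuildVPC and BuildSubnet are placeholder parameter names):

      Environment:
        ...
        EnvironmentVariables:
          - Name: BUILD_VPC_ID
            Value: !Ref BuildVPC
          - Name: BUILD_SUBNET_ID
            Value: !Ref BuildSubnet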

Builders

  "builders": [{
    ...
    "ami_name": “{{user `ami_name`| clean_ami_name}}”,
    "tags": {
      "Name": “{{user `ami_name`}}”,
    },
    "run_tags": {
      "Name": “{{user `ami_name`}}",
    },
    "run_volume_tags": {
      "Name": “{{user `ami_name`}}",
    },
    "snapshot_tags": {
      "Name": “{{user `ami_name`}}",
    },
    ...
    "vpc_id": "{{user `vpc` }}",
    "subnet_id": "{{user `subnet` }}"
  }],

We now have new properties (the *_tags shown above) and a new function (clean_ami_name), and we launch temporary resources in the VPC and subnet specified in the environment variables. AMI names can contain only a certain set of ASCII characters. If the input deviates from the expected characters (for example, it includes whitespace or slashes), Packer’s clean_ami_name function will sanitize it.

For more information, see functions on the HashiCorp Packer website.

Provisioners

  "provisioners": [
    {
        "type": "shell",
        "inline": [
            "sudo pip install ansible"
        ]
    }, 
    {
        "type": "ansible-local",
        "playbook_file": "ansible/playbook.yaml",
        "role_paths": [
            "ansible/roles/common"
        ],
        "playbook_dir": "ansible",
        "galaxy_file": "ansible/requirements.yaml"
    },
    {
      "type": "shell",
      "inline": [
        "rm .ssh/authorized_keys ; sudo rm /root/.ssh/authorized_keys"
      ]
    }
  ]

We used the shell provisioner to apply OS patches in Part 1. Now, we use the shell provisioner to install Ansible on the target machine and the ansible-local provisioner to import, install, and execute Ansible roles that make the target machine conform to our standards.

Finally, Packer uses the shell provisioner to remove temporary SSH keys before it creates an AMI from the temporary EC2 instance.

Ansible Implementation Details

 
Ansible provides OS patching through a custom Common role that can be easily customized for other tasks.

CIS Benchmark and CloudWatch Logs are implemented through two third-party Ansible roles that are defined in ansible/requirements.yaml, as seen in the Packer template.

The Ansible provisioner uses Ansible Galaxy to download these roles onto the target machine and execute them as instructed by ansible/playbook.yaml.

For information about how these components are organized, see the Playbook Roles and Include Statements in the Ansible documentation.

The following Ansible playbook (ansible/playbook.yaml) controls the execution order and custom properties:

---
- hosts: localhost
  connection: local
  gather_facts: true    # gather OS info that is made available for tasks/roles
  become: yes           # majority of CIS tasks require root
  vars:
    # CIS Controls whitepaper:  http://bit.ly/2mGAmUc
    # AWS CIS Whitepaper:       http://bit.ly/2m2Ovrh
    cis_level_1_exclusions:
    # 3.4.2 and 3.4.3 effectively blocks access to all ports to the machine
    ## This can break automation; ignoring it as there are stronger mechanisms than that
      - 3.4.2 
      - 3.4.3
    # CloudWatch Logs will be used instead of Rsyslog/Syslog-ng
    ## Same would be true if any other software doesn't support Rsyslog/Syslog-ng mechanisms
      - 4.2.1.4
      - 4.2.2.4
      - 4.2.2.5
    # Autofs is not installed in newer versions, let's ignore
      - 1.1.19
    # Cloudwatch Logs role configuration
    logs:
      - file: /var/log/messages
        group_name: "system_logs"
  roles:
    - common
    - anthcourtney.cis-amazon-linux
    - dharrisio.aws-cloudwatch-logs-agent

Both third-party Ansible roles can be easily configured through variables (vars). We use Ansible playbook variables to exclude CIS controls that don’t apply to our case and to instruct the CloudWatch Logs agent to stream the /var/log/messages log file to CloudWatch Logs.

If you need to add more OS or application logs, you can easily duplicate the playbook and make changes. The CloudWatch Logs agent will ship configured log messages to CloudWatch Logs.
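
For example, shipping one additional log file only requires another entry in the logs variable shown above (the log group name here is an arbitrary example):

    logs:
      - file: /var/log/messages
        group_name: "system_logs"
      - file: /var/log/secure
        group_name: "security_logs"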

For more information about parameters you can use to further customize third-party roles, download the Ansible roles for the CloudWatch Logs Agent and CIS Amazon Linux from the Galaxy website.

Committing Changes

 
Now that Ansible and CloudWatch Events are configured as a part of the build process, committing any changes to the AWS CodeCommit Git repository will trigger a new AMI build process that can be followed through the AWS CodePipeline console.

When the build is complete, an email will be sent to the email address you provided as a part of the CloudFormation stack deployment. The email serves as notification that an AMI has been built and is ready for use.

Summary

 
We used AWS CodeCommit, AWS CodePipeline, AWS CodeBuild, Packer, and Ansible to build a pipeline that continuously builds new, hardened CIS AMIs. We used Amazon SNS so that email addresses subscribed to an SNS topic are notified upon completion of the AMI build.

By treating our AMI creation process as code, we can iterate and track changes over time. In this way, it’s no different from a software development workflow. With that in mind, software patches, OS configuration, and logs that need to be shipped to a central location are only a git commit away.

Next Steps

 
Here are some ideas to extend this AMI builder:

  • Hook up a Lambda function in CloudWatch Events to update the EC2 Auto Scaling configuration upon completion of the AMI build (a minimal sketch of such a function appears after this list).
  • Use AWS CodePipeline parallel steps to build multiple Packer images.
  • Add a commit ID as a tag for the AMI you created.
  • Create a scheduled Lambda function through CloudWatch Events to clean up old AMIs based on timestamp (name or additional tag).
  • Implement Windows support for the AMI builder.
  • Create a cross-account or cross-region AMI build.
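
As a starting point for the first idea, a CloudWatch Events target could invoke a Lambda function along these lines. This is only a sketch under assumptions: it expects the new AMI ID in the event's resources list (as published by the buildspec above), and the Auto Scaling group name, launch configuration name, and instance type are placeholders.

import time
import boto3

autoscaling = boto3.client("autoscaling")

ASG_NAME = "my-web-asg"                # placeholder
BASE_LC_NAME = "my-web-launch-config"  # placeholder
INSTANCE_TYPE = "t2.large"             # placeholder

def handler(event, context):
    # The AMI builder event carries the new AMI ID in its resources list.
    ami_id = event["resources"][0]

    # Create a new launch configuration that points at the new AMI.
    new_lc_name = "%s-%d" % (BASE_LC_NAME, int(time.time()))
    autoscaling.create_launch_configuration(
        LaunchConfigurationName=new_lc_name,
        ImageId=ami_id,
        InstanceType=INSTANCE_TYPE,
    )

    # Point the Auto Scaling group at the new launch configuration.
    autoscaling.update_auto_scaling_group(
        AutoScalingGroupName=ASG_NAME,
        LaunchConfigurationName=new_lc_name,
    )
    return {"ami_id": ami_id, "launch_configuration": new_lc_name}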

CloudWatch Events allows the AMI builder to decouple AMI configuration and creation so that you can easily add your own logic using targets (AWS Lambda, Amazon SQS, Amazon SNS) to add events or recycle EC2 instances with the new AMI.

If you have questions or other feedback, feel free to leave it in the comments or contribute to the AMI Builder repo on GitHub.

DevOps Cafe Episode 72 – Kelsey Hightower

Post Syndicated from DevOpsCafeAdmin original http://devopscafe.org/show/2017/6/18/devops-cafe-episode-72-kelsey-hightower.html

You can’t contain(er) Kelsey.

John and Damon chat with Kelsey Hightower (Google) about the future of operations, kubernetes, docker, containers, self-learning, and more!

Direct download

Follow John Willis on Twitter: @botchagalupe
Follow Damon Edwards on Twitter: @damonedwards 
Follow Kelsey Hightower on Twitter: @kelseyhightower

Notes:

 

Please tweet or leave comments or questions below and we’ll read them on the show!

AWS Online Tech Talks – June 2017

Post Syndicated from Tara Walker original https://aws.amazon.com/blogs/aws/aws-online-tech-talks-june-2017/

As the sixth month of the year, June is significant in that it is not only my birth month (very special), but it contains the summer solstice in the Northern Hemisphere, the day with the most daylight hours, and the winter solstice in the Southern Hemisphere, the day with the fewest daylight hours. In the United States, June is also the month in which we celebrate our dads with Father’s Day and have month-long celebrations of music, heritage, and the great outdoors.

Therefore, the month of June can be filled with lots of excitement. So why not add even more delight to the month by enhancing your cloud computing skills? This month’s AWS Online Tech Talks features sessions on Artificial Intelligence (AI), Storage, Big Data, and Compute, among other great topics.

June 2017 – Schedule

Noted below are the upcoming scheduled live, online technical sessions being held during the month of June. Make sure to register ahead of time so you won’t miss out on these free talks conducted by AWS subject matter experts. All schedule times for the online tech talks are shown in the Pacific Time (PDT) time zone.

Webinars featured this month are:

Thursday, June 1

Storage

9:00 AM – 10:00 AM: Deep Dive on Amazon Elastic File System

Big Data

10:30 AM – 11:30 AM: Migrating Big Data Workloads to Amazon EMR

Serverless

12:00 Noon – 1:00 PM: Building AWS Lambda Applications with the AWS Serverless Application Model (AWS SAM)

 

Monday, June 5

Artificial Intelligence

9:00 AM – 9:40 AM: Exploring the Business Use Cases for Amazon Lex

 

Tuesday, June 6

Management Tools

9:00 AM – 9:40 AM: Automated Compliance and Governance with AWS Config and AWS CloudTrail

 

Wednesday, June 7

Storage

9:00 AM – 9:40 AM: Backing up Amazon EC2 with Amazon EBS Snapshots

Big Data

10:30 AM – 11:10 AM: Intro to Amazon Redshift Spectrum: Quickly Query Exabytes of Data in S3

DevOps

12:00 Noon – 12:40 PM: Introduction to AWS CodeStar: Quickly Develop, Build, and Deploy Applications on AWS

 

Thursday, June 8

Artificial Intelligence

9:00 AM – 9:40 AM: Exploring the Business Use Cases for Amazon Polly

10:30 AM – 11:10 AM: Exploring the Business Use Cases for Amazon Rekognition

 

Monday, June 12

Artificial Intelligence

9:00 AM – 9:40 AM: Exploring the Business Use Cases for Amazon Machine Learning

 

Tuesday, June 13

Compute

9:00 AM – 9:40 AM: DevOps with Visual Studio, .NET and AWS

IoT

10:30 AM – 11:10 AM: Create, with Intel, an IoT Gateway and Establish a Data Pipeline to AWS IoT

Big Data

12:00 Noon – 12:40 PM: Real-Time Log Analytics using Amazon Kinesis and Amazon Elasticsearch Service

 

Wednesday, June 14

Containers

9:00 AM – 9:40 AM: Batch Processing with Containers on AWS

Security & Identity

12:00 Noon – 12:40 PM: Using Microsoft Active Directory across On-premises and Cloud Workloads

 

Thursday, June 15

Big Data

12:00 Noon – 1:00 PM: Building Big Data Applications with Serverless Architectures

 

Monday, June 19

Artificial Intelligence

9:00 AM – 9:40 AM: Deep Learning for Data Scientists: Using Apache MxNet and R on AWS

 

Tuesday, June 20

Storage

9:00 AM – 9:40 AM: Cloud Backup & Recovery Options with AWS Partner Solutions

Artificial Intelligence

10:30 AM – 11:10 AM: An Overview of AI on the AWS Platform

 

The AWS Online Tech Talks series covers a broad range of topics at varying technical levels. These sessions feature live demonstrations & customer examples led by AWS engineers and Solution Architects. Check out the AWS YouTube channel for more on-demand webinars on AWS technologies.

Tara

DevOps Cafe Episode 71 – Courtney Kissler

Post Syndicated from DevOpsCafeAdmin original http://devopscafe.org/show/2017/5/25/devops-cafe-episode-71-courtney-kissler.html

Ordering Up Some Transformation

John and Damon pick Courtney Kissler’s brain on the techniques that enable her to be a hands-on technology leader with a track record for getting teams to find and fix what is getting in the way.

Direct download

Follow John Willis on Twitter: @botchagalupe
Follow Damon Edwards on Twitter: @damonedwards 
Follow Courtney Kissler on Twitter: @ladyhock

Notes:

 

Please tweet or leave comments or questions below and we’ll read them on the show!

AWS Online Tech Talks – May 2017

Post Syndicated from Tara Walker original https://aws.amazon.com/blogs/aws/aws-online-tech-talks-may-2017/

Spring has officially sprung. As you enjoy the blossoming of May flowers, it may also be worth noting some of the great tech talks blossoming online during the month of May. This month’s AWS Online Tech Talks features sessions on topics like AI, DevOps, Data, and Serverless, just to name a few.

May 2017 – Schedule

Below is the upcoming schedule for the live, online technical sessions scheduled for the month of May. Make sure to register ahead of time so you won’t miss out on these free talks conducted by AWS subject matter experts. All schedule times for the online tech talks are shown in the Pacific Time (PDT) time zone.

Webinars featured this month are:

Monday, May 15

Artificial Intelligence

9:00 AM – 10:00 AM: Integrate Your Amazon Lex Chatbot with Any Messaging Service

 

Tuesday, May 16

Compute

10:30 AM – 11:30 AM: Deep Dive on Amazon EC2 F1 Instance

IoT

12:00 Noon – 1:00 PM: How to Connect Your Own Creations with AWS IoT

Wednesday, May 17

Management Tools

9:00 AM – 10:00 AM: OpsWorks for Chef Automate – Automation Made Easy!

Serverless

10:30 AM – 11:30 AM: Serverless Orchestration with AWS Step Functions

Enterprise & Hybrid

12:00 Noon – 1:00 PM: Moving to the AWS Cloud: An Overview of the AWS Cloud Adoption Framework

 

Thursday, May 18

Compute

9:00 AM – 10:00 AM: Scaling Up Tenfold with Amazon EC2 Spot Instances

Big Data

10:30 AM – 11:30 AM: Building Analytics Pipelines for Games on AWS

12:00 Noon – 1:00 PM: Serverless Big Data Analytics using Amazon Athena and Amazon QuickSight

 

Monday, May 22

Artificial Intelligence

9:00 AM – 10:00 AM: What’s New with Amazon Rekognition

Serverless

10:30 AM – 11:30 AM: Building Serverless Web Applications

 

Tuesday, May 23

Hands-On Lab

8:30 – 10:00 AM: Hands On Lab: Windows Workloads on AWS

Big Data

10:30 AM – 11:30 AM: Streaming ETL for Data Lakes using Amazon Kinesis Firehose

DevOps

12:00 Noon – 1:00 PM: Deep Dive: Continuous Delivery for AI Applications with ECS

 

Wednesday, May 24

Storage

9:00 – 10:00 AM: Moving Data into the Cloud with AWS Transfer Services

Containers

12:00 Noon – 1:00 PM: Building a CICD Pipeline for Container Deployment to Amazon ECS

 

Thursday, May 25

Mobile

9:00 – 10:00 AM: Test Your Android App with Espresso and AWS Device Farm

Security & Identity

10:30 AM – 11:30 AM: Advanced Techniques for Federation of the AWS Management Console and Command Line Interface (CLI)

 

Tuesday, May 30

Databases

9:00 – 10:00 AM: DynamoDB: Architectural Patterns and Best Practices for Infinitely Scalable Applications

Compute

10:30 AM – 11:30 AM: Deep Dive on Amazon EC2 Elastic GPUs

Security & Identity

12:00 Noon – 1:00 PM: Securing Your AWS Infrastructure with Edge Services

 

Wednesday, May 31

Hands-On Lab

8:30 – 10:00 AM: Hands On Lab: Introduction to Microsoft SQL Server in AWS

Enterprise & Hybrid

10:30 AM – 11:30 AM: Best Practices in Planning a Large-Scale Migration to AWS

Databases

12:00 Noon – 1:00 PM: Convert and Migrate Your NoSQL Database or Data Warehouse to AWS

 

The AWS Online Tech Talks series covers a broad range of topics at varying technical levels. These sessions feature live demonstrations & customer examples led by AWS engineers and Solution Architects. Check out the AWS YouTube channel for more on-demand webinars on AWS technologies.

Tara

Amazon Inspector Update – Assessment Reporting, Proxy Support, and More

Post Syndicated from Jeff Barr original https://aws.amazon.com/blogs/aws/amazon-inspector-update-assessment-reporting-proxy-support-and-more/

Amazon Inspector is our automated security assessment service. It analyzes the behavior of the applications that you run in AWS and helps you to identify potential security issues. In late 2015 I introduced you to Inspector and showed you how to use it (Amazon Inspector – Automated Security Assessment Service). You start by using tags to define the collection of AWS resources that make up your application (also known as the assessment target). Then you create a security assessment template and specify the set of rules that you would like to run as part of the assessment:

After you create the assessment target and the security assessment template, you can run it against the target resources with a click. The assessment makes use of an agent that runs on your Linux and Windows-based EC2 instances (read about AWS Agents to learn more). You can process the assessments manually or you can forward the findings to your existing ticketing system using AWS Lambda (read Scale Your Security Vulnerability Testing with Amazon Inspector to see how to do this).

Whether you run one instance or thousands, we recommend that you run assessments on a regular and frequent basis. You can run them on your development and integration instances as part of your DevOps pipeline; this will give you confidence that the code and the systems that you deploy to production meet the conditions specified by the rule packages that you selected when you created the security assessment template. You should also run frequent assessments against production systems in order to guard against possible configuration drift.

We have recently added some powerful new features to Amazon Inspector:

  • Assessment Reports – The new assessment reports provide a comprehensive summary of the assessment, beginning with an executive summary. The reports are designed to be shared with teams and with leadership, while also serving as documentation for compliance audits.
  • Proxy Support – You can now configure the agent to run within proxy environments (many of our customers have been asking for this).
  • CloudWatch Metrics – Inspector now publishes metrics to Amazon CloudWatch so that you can track and observe changes over time.
  • Amazon Linux 2017.03 Support – This new version of the Amazon Linux AMI is launching today and Inspector supports it now.

Assessment Reports
After an assessment run completes, you can download a detailed assessment report in HTML or PDF form:

The report begins with a cover page and executive summary:

Then it summarizes the assessment rules and the targets that were tested:

Then it summarizes the findings for each rules package:

Because the report is intended to serve as documentation for compliance audits, it includes detailed information about each finding, along with recommendations for remediation:

The full report also indicates which rules were checked and passed for all target instances:

Proxy Support
The Inspector agent can now communicate with Inspector through an HTTPS proxy. For Linux instances, we support HTTPS Proxy, and for Windows instances, we support WinHTTP proxy. See the Amazon Inspector User Guide for instructions to configure Proxy support for the AWS Agent.

CloudWatch Metrics
Amazon Inspector now publishes metrics to Amazon CloudWatch after each run. The metrics are categorized by target and by template. An aggregate metric, which indicates how many assessment runs have been performed in the AWS account, is also available. You can find the metrics in the CloudWatch console, as usual:

Here are the metrics that are published on a per-target basis:

And here are the per-template metrics:

Amazon Linux 2017.03 Support
Many AWS customers use the Amazon Linux AMI and automatically upgrade as new versions become available. In order to provide these customers with continuous coverage from Amazon Inspector, we are now making sure that this and future versions of the AMI are supported by Amazon Inspector on launch day.

Available Now
All of these features are available now and you can start using them today!

Pricing is based on a per-agent, per-assessment basis and starts at $0.30 per assessment, declining to as low as $0.05 per assessment when you run 45,000 or more assessments per month (see the Amazon Inspector Pricing page for more information).

Jeff;

Announcing the AWS Chatbot Challenge – Create Conversational, Intelligent Chatbots using Amazon Lex and AWS Lambda

Post Syndicated from Tara Walker original https://aws.amazon.com/blogs/aws/announcing-the-aws-chatbot-challenge-create-conversational-intelligent-chatbots-using-amazon-lex-and-aws-lambda/

If you have been checking out the launches and announcements from the AWS 2017 San Francisco Summit, you may be aware that the Amazon Lex service is now Generally Available, and you can use the service today. Amazon Lex is a fully managed AI service that enables developers to build conversational interfaces into any application using voice and text. Lex uses the same deep learning technologies of Amazon Alexa-powered devices like Amazon Echo. With the release of Amazon Lex, developers can build highly engaging lifelike user experiences and natural language interactions within their own applications. Amazon Lex supports Slack, Facebook Messenger, and Twilio SMS enabling you to easily publish your voice or text chatbots using these popular chat services. There is no better time to try out the Amazon Lex service to add the gift of gab to your applications, and now you have a great reason to get started.

May I have a Drumroll please?

I am thrilled to announce the AWS Chatbot Challenge! The AWS Chatbot Challenge is your opportunity to build a unique chatbot that helps solves a problem or adds value for prospective users. The AWS Chatbot Challenge is brought to you by Amazon Web Services in partnership with Slack.

 

The Challenge

Your mission, if you choose to accept it, is to build a conversational, natural language chatbot using Amazon Lex and leverage Lex’s integration with AWS Lambda to execute logic or data processing on the backend. Your submission can be a new or existing bot; however, if your bot is an existing one, it must have been updated to use Amazon Lex and AWS Lambda within the challenge submission period.

 

You are only limited by your own imagination when building your solution. Therefore, I will share some recommendations to help you to get your creative juices flowing when creating or deploying your bot. Some suggestions that can help you make your chatbot more distinctive are:

  • Deploy your bot to Slack, Facebook Messenger, or Twilio SMS
  • Take advantage of other AWS services when building your bot solution.
  • Incorporate Text-To-speech capabilities using a service like Amazon Polly
  • Utilize other third-party APIs, SDKs, and services
  • Leverage Amazon Lex pre-built enterprise connectors and add services like Salesforce, HubSpot, Marketo, Microsoft Dynamics, Zendesk, and QuickBooks as data sources.

There are cost effective ways to build your bot using AWS Lambda. Lambda includes a free tier of one million requests and 400,000 GB-seconds of compute time per month. This free, per month usage, is for all customers and does not expire at the end of the 12 month Free Tier Term. Furthermore, new Amazon Lex customers can process up to 10,000 text requests and 5,000 speech requests per month free during the first year. You can find details here.

Remember, the AWS Free Tier includes services with a free tier available for 12 months following your AWS sign-up date, as well as additional service offers that do not automatically expire at the end of your 12 month term. You can review the details about the AWS Free Tier and related services by going to the AWS Free Tier Details page.

 

Can We Talk – How It Works

The AWS Chatbot Challenge is open to individuals, and teams of individuals, who have reached the age of majority in their eligible area of residence at the time of competition entry. Organizations that employ 50 or fewer people are also eligible to compete as long as, at the time of entry, they are duly organized or incorporated and validly exist in an eligible area. Large organizations (employing more than 50) in eligible areas can participate but will only be eligible for a non-cash recognition prize.

Chatbot Submissions are judged using the following criteria:

  • Customer Value: The problem or pain point the bot solves and the extent to which it adds value for users
  • Bot Quality: The unique way the bot solves users’ problems, and the originality, creativity, and differentiation of the bot solution
  • Bot Implementation: Determination of how well the bot was built and executed by the developer. Also, consideration of bot functionality, such as whether the bot functions as intended and recognizes and responds to the most common phrases asked of it

Prizes

The AWS Chatbot Challenge is awarding prizes for your hard work!

First Prize

  • $5,000 USD
  • $2,500 AWS Credits
  • Two (2) tickets to AWS re:Invent
  • 30 minute virtual meeting with the Amazon Lex team
  • Winning submission featured on the AWS AI blog
  • Cool swag

Second Prize

  • $3,000 USD
  • $1,500 AWS Credits
  • One (1) ticket to AWS re:Invent
  • 30 minute virtual meeting with the Amazon Lex team
  • Winning submission featured on the AWS AI blog
  • Cool swag

Third Prize

  • $2,000 USD
  • $1,000 AWS Credits
  • 30 minute virtual meeting with the Amazon Lex team
  • Winning submission featured on the AWS AI blog
  • Cool swag

 

Challenge Timeline

  • Submissions Start: April 19, 2017 at 12:00pm PDT
  • Submissions End: July 18, 2017 at 5:00pm PDT
  • Winners Announced: August 11, 2017 at 9:00am PDT

 

Up to the Challenge – Get Started

Are you ready to get started on your chatbot and dive into the challenge? Here is how to get started:

Review the details on the challenge rules and eligibility, and then:

  1. Register for the AWS Chatbot Challenge
  2. Join the AWS Chatbot Slack Channel
  3. Create an account on AWS.
  4. Visit the Resources page for links to documentation and resources.
  5. Shoot your demo video that demonstrates your bot in action. Prepare a written summary of your bot and what it does.
  6. Provide a way to access your bot for judging and testing by including a link to your GitHub repo hosting the bot code and all deployment files and testing instructions needed for testing your bot.
  7. Submit your bot on AWSChatbot2017.Devpost.com before July 18, 2017 at 5 pm ET and share access to your bot, its Github repo and its deployment files.

Summary

With Amazon Lex you can build conversation into web and mobile applications, as well as use it to build chatbots that control IoT devices, provide customer support, give transaction updates, or perform operations for DevOps workloads (ChatOps). Amazon Lex provides built-in integration with AWS Lambda, AWS Mobile Hub, and Amazon CloudWatch and allows for easy integration with other AWS services, so you can use the AWS platform to build security, monitoring, user authentication, business logic, and storage into your chatbot or application. You can make additional enhancements to your voice or text chatbot by taking advantage of Amazon Lex’s support of chat services like Slack, Facebook Messenger, and Twilio SMS.

Dive into building chatbots and conversational interfaces with Amazon Lex and AWS Lambda with the AWS Chatbot Challenge for a chance to win some cool prizes. Some recent resources and online tech talks about creating bots with Amazon Lex and AWS Lambda that may help you in your bot building journey are:

If you have questions about the AWS Chatbot Challenge you can email [email protected] or post a question to the Discussion Board.

 

Good Luck and Happy Coding.

Tara

Ubuntu 17.04 (Zesty Zapus) released

Post Syndicated from jake original https://lwn.net/Articles/719981/rss

The most recent version of the Ubuntu Linux distribution, 17.04 or Zesty Zapus, has been released with multiple flavors (Kubuntu, Lubuntu, Ubuntu GNOME, Ubuntu Kylin, Ubuntu MATE, Ubuntu Studio, Xubuntu, and the most recent addition, Ubuntu Budgie) and several editions (server, desktop, cloud). “Under the hood, there have been updates to many core packages, including a new 4.10-based kernel, and much more.

Ubuntu Desktop has seen incremental improvements, with newer versions of GTK and Qt, updates to major packages like Firefox and LibreOffice, and stability improvements to Unity.

Ubuntu Server 17.04 includes the Ocata release of OpenStack, alongside deployment and management tools that save devops teams time when deploying distributed applications – whether on private clouds, public clouds, x86, ARM, or POWER servers, z System mainframes, or on developer laptops. Several key server technologies, from MAAS to juju, have been updated to new upstream versions with a variety of new features.” See the release notes for more information.

Pollexy – Building a Special Needs Voice Assistant with Amazon Polly and Raspberry Pi

Post Syndicated from Ana Visneski original https://aws.amazon.com/blogs/aws/pollexy-building-a-special-needs-voice-assistant-with-amazon-polly-and-raspberry-pi/

April is Autism Awareness month and about 1 in 68 children in the U.S. have been identified with autism spectrum disorder (ASD) (CDC 2014). In this post from Troy Larson, a Sr. Devops Cloud Architect here at AWS, you get an introduction to a project he has been working on to help his son Calvin.

I have been asked how the minds at AWS come up with so many different ideas. Sometimes they come from a deeply personal place, where someone sees a way to help others. Pollexy is an amazing example of just that. Read about Pollexy and then watch the video here.

-Ana


Background

As a computer programming parent of a 16-year old non-verbal teenage boy with autism, I have been constantly searching over the years to find ways to use technology to make our lives together safer, happier and more comfortable. At the core of this challenge is the most basic of all human interaction—communication. While Calvin is able to respond to verbal instruction, he is not able to speak responsively. In his entire life, we’ve never had a conversation. He is able to be left alone in his room to play, but most every task or set of tasks requires a human to verbally prompt him along the way. Having other children and responsibilities in the home, at times the intensity of supervision can be negatively impactful on the home dynamic.

Genesis

When I saw the announcement of Amazon Polly and Amazon Lex at re:Invent last year, I immediately started churning on how we could leverage these technologies to assist Calvin. He responds well to human verbal prompts, but would he understand a digital voice? So one Saturday, I set up a Raspberry Pi in his room, closed his door, and crouched around the corner with other family members so Calvin couldn’t see us. I connected to the Raspberry Pi and instructed Polly to speak in Joanna’s familiar pacific tone, “Calvin, it’s time to take a potty break. Go out of your bedroom and go to the bathroom.” In a few seconds, we heard his doorknob turn and I poked my head out of my hiding place. Calvin passed by, looking at me quizzically, then went into the bathroom as Joanna had instructed. We all looked at each other in amazement—he had listened and responded perfectly to the completely invisible voice of someone he’d never heard before. After discussing some ideas around this with co-workers, a colleague suggested I enter the IoT and AI Science Fair at our annual AWS Sales Kick-Off meeting. Less than two months after the Polly and Lex announcement and 3500 lines of code later, Pollexy, along with Calvin, debuted at the Science Fair.

Overview

Pollexy (“Polly” + “Lex”) is a Raspberry Pi and mobile-based special needs verbal assistant that lets caretakers schedule audio task prompts and messages on a recurring schedule, on demand, or both. Caretakers can schedule regular medicine reminder messages or hourly bathroom break messages, for example, and at the same time use their Amazon Echo and mobile device to request a specific message be played immediately. Caretakers can even set it up so that the person needs to confirm that they’ve heard the message. For example, my son won’t pay attention to Pollexy unless Pollexy first asks him to “Push the blue button.” Pollexy will wait until he has pushed the button and then speak the actual message. Other people may be able to respond verbally using Lex, or not require a confirmation at all. Pollexy can be tailored to what works best.

And then most importantly—and most challenging—in a large house, how do we make sure the person is in the room where we play the message? What if we have a special needs adult living in an in-law suite? Are they in the living room or the kitchen? And what about multiple people? What if we have multiple people in different areas of the house, each of whom has a message? Let’s explore the basic elements and tie the pieces together.

Basic Elements of Pollexy

In the spirit of Amazon’s Leadership Principle “Invent and Simplify,” we want to minimize the complexity of the Pollexy architecture. We can break Pollexy down into three types of objects and three components, all of which work together in a way that’s easily explainable.

Object #1: Person

Pollexy can support any number of people. A person is a uniquely identifiable name. We can set basic preferences such as “requires confirmation” and, most importantly, we can define a location schedule. This means that we can create an Outlook-like schedule that defines where someone is expected to be in the house.

Object #2: Location

A location is simply a uniquely identifiable location where a device is physically sitting. Based on the user’s location schedule, Pollexy will know which device to contact first, second, third, etc. We can also “mute” devices if needed (naptime, etc.)

Object #3: Message

Obviously, this is the actual message we want to play. Attached to each message is a person and a recurring schedule (only if it’s not a one-time message). We don’t store location with the message, because Pollexy figures out the person’s location when the message is ready to be delivered.

Component #1: Scheduler

Every message needs to be scheduled. This is a command-line tool where you basically say Tell “Calvin” that “you need to brush your teeth” every night at 8 p.m. This message is then stored in DynamoDB, waiting to be picked up by the queueing Lambda function.

Component #2: Queueing Engine

Every minute, a Lambda runs and checks the scheduler to see if there is a message or messages ready to be delivered. If a message is ready, it looks up the person’s location schedule and figures out where they are and then pushes the message or messages into an SQS queue for that location.

Component #3: Speaker Engine

Every minute on the Raspberry Pi device, the speaker engine spins up and checks the SQS for its location. If there are messages, then the speaker engine looks at the user’s preferences and initiates communication to convey the message. If the person doesn’t respond, the speaker engine will check if the person has a secondary location in their schedule and drop the message in the SQS Queue for that location. In the end, a message will either be delivered or eventually just timeout (if someone is out of the house for the day).
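
To make the flow a little more concrete, the queueing step might look roughly like the following. This is purely an illustrative sketch: the table names, schema, and queue naming are not described in this post and are invented here for the example.

import boto3
from boto3.dynamodb.conditions import Attr

dynamodb = boto3.resource("dynamodb")
sqs = boto3.client("sqs")

def handler(event, context):
    messages = dynamodb.Table("PollexyMessages")            # assumed table name
    schedules = dynamodb.Table("PollexyLocationSchedules")  # assumed table name

    # Find messages that are due to be delivered.
    due = messages.scan(FilterExpression=Attr("Due").eq(True))["Items"]

    for msg in due:
        # Look up where this person is expected to be right now.
        schedule = schedules.get_item(Key={"PersonName": msg["PersonName"]})["Item"]
        location = schedule["CurrentLocation"]

        # Push the message onto the SQS queue for that location.
        queue_url = sqs.get_queue_url(QueueName="pollexy-%s" % location)["QueueUrl"]
        sqs.send_message(QueueUrl=queue_url, MessageBody=msg["Body"])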

Respect and Freedom are the Keys

We often take our personal privacy and respect for granted, so imagine, even for a special needs person, the lack of privacy and freedom that comes with having a person constantly in your presence. This is exaggerated for those on the autism spectrum, where invasion of personal space can escalate into anger and frustration. Pollexy becomes their own personal, gentle, and never-flustered friend to coach them along the way, giving them confidence, respect, and the sense of privacy and freedom we all want to enjoy.

-Troy Larson

Introducing DnsControl – “DNS as Code” has Arrived

Post Syndicated from Craig Peterson original http://blog.serverfault.com/2017/04/11/introducing-dnscontrol-dns-as-code-has-arrived/

DNS at Stack Overflow is… complex.  We have hundreds of DNS domains and thousands of DNS records. We have gone from running our own BIND server to hosting DNS with multiple cloud providers, and we change things fairly often. Keeping everything up to date and synced at multiple DNS providers is difficult. We built DnsControl to allow us to perform updates easily and automatically across all providers we use.

The old way

Originally, our DNS was hosted by our own BIND servers, using artisanal, hand-crafted zone files. Large changes involved liberal sed usage, and every change was pretty error-prone. We decided to start using cloud DNS providers for performance reasons, but those each have their own web panels, which are universally painful to use. Web interfaces rarely have any import/export functionality, and generally lack change control, history tracking, or comments. We quickly decided that web panels were not how we wanted to manage our zones.

Introducing DnsControl

DNSControl is the system we built to manage our DNS. It permits “describe once, use anywhere” DNS management. It consists of a few key components:

  1. A Domain Specific Language (DSL) for describing domains in a single, provider-independent way.
  2. An “interpreter” application that executes the DSL and creates a standardized representation of your desired DNS state.
  3. Back-end “providers” that sync the desired state to a DNS provider.

At the time of this writing we have 9 different providers implemented, with 3 more on the way shortly. We use it to manage our domains with our own BIND servers, as well as Route 53, Google Cloud DNS, name.com, Cloudflare, and more.

A sample might look like this description of stackoverflow.com:

D("stackoverflow.com", REG_NAMEDOTCOM, DnsProvider(R53), DnsProvider(GCLOUD),
    A("@", "198.252.206.16"),
    A("blog", "198.252.206.20"),
    CNAME("chat", "chat.stackexchange.com."),
    CNAME("www", "@", TTL(3600)),
    A("meta", "198.252.206.16")
)

This is just a small, simple example. The DSL is a fully-featured way to express your DNS config. It is actually just JavaScript with some helpful functions. We have an examples page with more examples of the power of the language.

Running "dnscontrol preview" with this input will show what updates would be needed to bring the DNS providers up to date with the new desired configuration. Running "dnscontrol push" will actually make the changes.

This allows us to manage our DNS configuration as code. Storing it this way has a bunch of advantages:

  • We can use variables to store common IP addresses or repeated data. We can make complicated changes, like failing-over services between data centers, by changing a single variable. We can activate or deactivate our CDN, which involves thousands of record changes, by commenting or uncommenting a single line of code.
  • We are not locked into any single provider, since the automation can sync to any of them. Keeping records synchronized between different cloud providers requires no manual steps.
  • We store our DNS config in git. Our build server runs all changes. We have central logging, access control, and history for our DNS changes. We’re trying to apply DevOps best practices to an area that has not seen those benefits so much yet.

I think the biggest benefit to this tool though is the freedom it has given us with our DNS.  It has allowed us to:

  • Switch providers with no fear of breaking things. We have changed CDNs or DNS providers at least 4 times in the last two years, and it has never been scary at all.
  • Dual-host our DNS with multiple providers simultaneously. The tool keeps them in sync for us.
  • Test fail-over procedures before an emergency happens. We are confident we can point DNS at our secondary datacenter easily, and we can quickly switch providers if one is being DDOSed.

DNS configuration is often difficult and error-prone.  We hope DnsControl makes it easy and more reliable. It has for us.


Welcome to the Newest AWS Community Heroes (Spring 2017)

Post Syndicated from Ana Visneski original https://aws.amazon.com/blogs/aws/welcome-to-the-newest-aws-community-heroes-spring-2017/

We would like to extend a very warm welcome to the newest AWS Community Heroes:

AWS Community Heroes share their knowledge and demonstrate their enthusiasm for AWS in a plethora of ways. They go above and beyond to share AWS insights via social media, blog posts, open source projects, and through in-person events, user groups, and workshops.


Mark Nunnikhoven
Mark Nunnikhoven explores the impact of technology on individuals, organizations, and communities through the lens of privacy and security. Asking the question, “How can we better protect our information?” Mark studies the world of cybercrime to better understand the risks and threats to our digital world.

As the Vice President of Cloud Research at Trend Micro, a long time Amazon Web Services Advanced Technology Partner and provider of security tools for the AWS Cloud, Mark uses that knowledge to help organizations around the world modernize their security practices by taking advantage of the power of the AWS Cloud.

With a strong focus on automation, he helps bridge the gap between DevOps and traditional security through his writing, speaking, teaching, and by engaging with the AWS community.

 

SangUk Park
SangUk Park is a Chief Solutions Architect at Megazone, which became Korea’s first AWS Partner in 2012 and is the only AWS Premier Consulting Partner to provide AWS support in Korean.

He served as a System Architect for KT’s public cloud and VDI design, and led the system operation of YDOnline and Nexon Japan, one of the leading online gaming companies. Certified both as an AWS Solutions Architect – Professional and AWS DevOps Engineer – Professional, SangUk has authored AWS books, including DevOps and AWS Cloud Design Patterns, and translated four books related to the AWS Cloud.

He’s been making efforts to revitalize the local AWS Korea User Group community as co-leader by presenting at AWS Korea User Group meetings and AWS Summits, and helping to establish small group gatherings such as the AWSKRUG System Engineers in Gangnam. Also, he has done many hands-on labs and has been running a booth as a leader of the user groups at AWS events to cultivate developers and system engineers.

SangUk maintains a close relationship with the Japanese AWS User Group (JAWS UG), using his excellent Japanese communication skills and experiences in Japan. He makes every effort to participate in events held between Japanese and Korean user groups as a facilitator and translator, and will promote cross-regional communications beyond APAC going forward.

 

James Hall
James Hall has been working in the digital sector for over a decade. He is the author of the popular jsPDF library, and is a founder/Director of Parallax, a digital agency in the UK. He’s worked as a software developer on a wide variety of projects, from LED billboards and car-unlocking apps to large web applications and tools.

Parallax built an online recording studio for David Guetta and UEFA using Serverless technology shortly after API Gateway was released. Since then they have consulted on various serverless projects and technologies. They run the AWS Meetup in Leeds, and help companies around the world build their businesses online. James has contributed to and promotes the Serverless Framework which allows you to elegantly build web applications on top of Lambda and related services.

 

Drew Firment
Drew Firment works with business leaders and technology teams from organizations that seek to accelerate cloud adoption. He has over twenty years of experience leading large-scale technology programs, enterprise platforms, and cultural transformations in a fast-paced agile environment.

After migrating Capital One’s early adopters of AWS into production, his focus shifted toward accelerating a scalable and sustainable transition to cloud computing. Drew pioneered the intersection of strategy, governance, engineering, agile, and education to drive an enterprise-wide talent transformation. He founded Capital One’s cloud engineering college, and implemented an innovative outcome-based curriculum oriented towards learning communities. Several thousand employees have enrolled in his cloud-fluency program, enabling well over 1,000 AWS certifications since its inception.

Drew has earned all three of the AWS associate-level certifications, enjoys developing custom Amazon Alexa skills using AWS Lambda, and believes serverless is the future of cloud computing. He also serves as an advisory partner to A Cloud Guru and is editor-in-chief of their community-sourced publication.

Welcome
Please join me in welcoming our newest AWS Community Heroes!

-Ana

Implementing DevSecOps Using AWS CodePipeline

Post Syndicated from Ramesh Adabala original https://aws.amazon.com/blogs/devops/implementing-devsecops-using-aws-codepipeline/

DevOps is a combination of cultural philosophies, practices, and tools that emphasizes collaboration and communication between software developers and IT infrastructure teams while automating an organization’s ability to deliver applications and services rapidly, frequently, and more reliably.

CI/CD stands for continuous integration and continuous deployment. These concepts represent everything related to automation of application development and the deployment pipeline — from the moment a developer adds a change to a central repository until that code winds up in production.

DevSecOps covers security of and in the CI/CD pipeline, including automating security operations and auditing. The goals of DevSecOps are to:

  • Embed security knowledge into DevOps teams so that they can secure the pipelines they design and automate.
  • Embed application development knowledge and automated tools and processes into security teams so that they can provide security at scale in the cloud.

The Security Cloud Adoption Framework (CAF) whitepaper provides prescriptive controls to improve the security posture of your AWS accounts. These controls are in line with a DevOps blog post published last year about the control-monitor-fix governance model.

Security CAF controls are grouped into four categories:

  • Directive: controls establish the governance, risk, and compliance models on AWS.
  • Preventive: controls protect your workloads and mitigate threats and vulnerabilities.
  • Detective: controls provide full visibility and transparency over the operation of your deployments in AWS.
  • Responsive: controls drive remediation of potential deviations from your security baselines.

To embed the DevSecOps discipline in the enterprise, AWS customers are automating CAF controls using a combination of AWS and third-party solutions.

In this blog post, I will show you how to use a CI/CD pipeline to automate preventive and detective security controls. I’ll use an example that shows how you can take the creation of a simple security group through the CI/CD pipeline stages and enforce security CAF controls at various stages of the deployment. I’ll use AWS CodePipeline to orchestrate the steps in a continuous delivery pipeline.

These resources are being used in this example:

  • An AWS CloudFormation template to create the demo pipeline.
  • A Lambda function to perform the static code analysis of the CloudFormation template.
  • A Lambda function to perform dynamic stack validation for the security groups in scope.
  • An S3 bucket as the sample code repository.
  • An AWS CloudFormation source template file to create the security groups.
  • Two VPCs to deploy the test and production security groups.

These are the high-level security checks enforced by the pipeline:

  • During the Source stage, static code analysis checks for any open security groups. The pipeline fails if there are any violations.
  • During the Test stage, dynamic analysis makes sure that port 22 (SSH) is open only to the approved IP CIDR range. The pipeline fails if there are any violations.


 

These are the pipeline stages:

1. Source stage: In this example, the pipeline gets the CloudFormation code that creates the security group from S3, the code repository service.

This stage passes the CloudFormation template and pipeline name to a Lambda function, CFNValidateLambda. This function performs the static code analysis. It uses regular expressions to find patterns and identify security group policy violations. If it finds violations, the Lambda function fails the pipeline and includes the violation details.

Here is the regular expression that the Lambda function uses for static code analysis of the open SSH port:

"^.*Ingress.*(([fF]rom[pP]ort|[tT]o[pP]ort).\s*:\s*u?.(22).*[cC]idr[iI]p.\s*:\s*u?.((0\.){3}0\/0)|[cC]idr[iI]p.\s*:\s*u?.((0\.){3}0\/0).*([fF]rom[pP]ort|[tT]o[pP]ort).\s*:\s*u?.(22))"

2. Test stage: After the static code analysis is completed successfully, the pipeline executes the following steps:

a. Create stack: This step creates the stack in the test VPC, as described in the test configuration.

b. Stack validation: This step triggers the StackValidationLambda Lambda function. It passes the stack name and pipeline name in the event parameters. Lambda validates the security group against the required security controls (in this example, making sure SSH is open only to the approved IP CIDR range). If it finds violations, then Lambda deletes the stack, stops the pipeline, and returns an error message.

The following is the sample Python code used by AWS Lambda to check if the SSH port is open to the approved IP CIDR range (in this example, 72.21.196.67/32):

import boto3

# regions, stackName, result, failReason, and offenders are defined earlier in the function.
for n in regions:
    client = boto3.client('ec2', region_name=n)
    # Find the security groups created by the test stack.
    response = client.describe_security_groups(
        Filters=[{'Name': 'tag:aws:cloudformation:stack-name', 'Values': [stackName]}])
    for m in response['SecurityGroups']:
        if "72.21.196.67/32" not in str(m['IpPermissions']):
            for o in m['IpPermissions']:
                try:
                    # Flag any rule whose port range includes SSH (22).
                    if int(o['FromPort']) <= 22 <= int(o['ToPort']):
                        result = False
                        failReason = "Found Security Group with port 22 open to the wrong source IP range"
                        offenders.append(str(m['GroupId']))
                except KeyError:
                    # Rules with IpProtocol "-1" allow all ports and have no FromPort/ToPort keys.
                    if str(o['IpProtocol']) == "-1":
                        result = False
                        failReason = "Found Security Group with port 22 open to the wrong source IP range"
                        offenders.append(str(n) + " : " + str(m['GroupId']))
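When a violation is found, the function still has to tear down the test stack and stop the pipeline, as described above. A minimal sketch of that follow-up step (the function name and parameters are illustrative; result, failReason, and the stack name correspond to the snippet above, and the job ID comes from the Lambda event):

import boto3

def finish_validation(job_id, stack_name, region, result, fail_reason):
    """Delete the failing test stack and report the outcome to AWS CodePipeline."""
    codepipeline = boto3.client('codepipeline')
    if result:
        codepipeline.put_job_success_result(jobId=job_id)
        return
    # Remove the offending test stack so an insecure security group is not left running.
    cloudformation = boto3.client('cloudformation', region_name=region)
    cloudformation.delete_stack(StackName=stack_name)
    codepipeline.put_job_failure_result(
        jobId=job_id,
        failureDetails={'type': 'JobFailed', 'message': fail_reason})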

c. Approve test stack: This step creates a manual approval task for stack review. This step could be eliminated for automated deployments.

d. Delete test stack: After all the stack validations are successfully completed, this step deletes the stack in the test environment to avoid unnecessary costs.

3. Production stage: After the static and dynamic security checks are completed successfully, this stage creates the stack in the production VPC using the production configuration supplied in the template.

a. Create change set: This step creates the change set for the resources in the scope.

b. Execute change set: This step executes the change set and creates/updates the security group in the production VPC.

 

Source code and CloudFormation template

You’ll find the source code at https://github.com/awslabs/automating-governance-sample/tree/master/DevSecOps-Blog-Code

basic-sg-3-cfn.json creates the pipeline in AWS CodePipeline with all the stages previously described. It also creates the static code analysis and stack validation Lambda functions.

The CloudFormation template points to a shared S3 bucket. The codepipeline-lambda.zip file contains the Lambda functions. Before you run the template, upload the zip file to your S3 bucket and then update the CloudFormation template to point to your S3 bucket location.

The CloudFormation template uses the codepipe-single-sg.zip file, which contains the sample security group and test and production configurations. Update these configurations with your VPC details, and then upload the modified zip file to your S3 bucket.

Update these parts of the code to point to your S3 bucket:

 "S3Bucket": {
      "Default": "codepipeline-devsecops-demo",
      "Description": "The name of the S3 bucket that contains the source artifact, which must be in the same region as this stack",
      "Type": "String"
    },
    "SourceS3Key": {
      "Default": "codepipe-single-sg.zip",
      "Description": "The file name of the source artifact, such as myfolder/myartifact.zip",
      "Type": "String"
    },
    "LambdaS3Key": {
      "Default": "codepipeline-lambda.zip",
      "Description": "The file name of the source artifact of the Lambda code, such as myfolder/myartifact.zip",
      "Type": "String"
    },
	"OutputS3Bucket": {
      "Default": "codepipeline-devsecops-demo",
      "Description": "The name of the output S3 bucket that contains the processed artifact, which must be in the same region as this stack",
      "Type": "String"
    },

After the stack is created, AWS CodePipeline executes the pipeline and starts deploying the sample CloudFormation template. In the default template, security groups have wide-open ports (0.0.0.0/0), so the pipeline execution will fail. Update the CloudFormation template in codepipe-single-sg.zip with more restrictive ports and then upload the modified zip file to your S3 bucket. Open the AWS CodePipeline console, and choose the Release Change button. This time the pipeline will successfully create the security groups.


You could expand the security checks in the pipeline to include other AWS resources, not just security groups. The following table shows the sample controls you could enforce in the pipeline using the static and dynamic analysis Lambda functions.

[Table: sample controls enforced through the static and dynamic analysis Lambda functions]
If you have feedback about this post, please add it to the Comments section below. If you have questions about implementing the example used in this post, please open a thread on the Developer Tools forum.

Replicating and Automating Sync-Ups for a Repository with AWS CodeCommit

Post Syndicated from Cherry Zhou original https://aws.amazon.com/blogs/devops/replicating-and-automating-sync-ups-for-a-repository-with-aws-codecommit/

by Chenwei (Cherry) Zhou, Software Development Engineer


 

Many of our customers have expressed interest in the following scenarios:

  • Backing up or replicating an AWS CodeCommit repository to another AWS region.
  • Automatically backing up repositories currently hosted on other services (for example, GitHub or BitBucket) to AWS CodeCommit.

In this blog post, we’ll show you how to automate the replication of a source repository to a repository in AWS CodeCommit. Your source repository could be another AWS CodeCommit repository, a local repository, or a repository hosted on other Git services.

To replicate your repository, you’ll first need to set up a repository in AWS CodeCommit to use as your backup/replica repository. After replicating the contents in your source repository to the backup repository, we’ll demonstrate how you can set up a scheduled job to periodically sync up your source repository with the backup/replica.

Where do I host this?

You can host your local repository and schedule your task on your own machine or on an Amazon EC2 instance. For an example of how to set up an EC2 instance for access to an AWS CodeCommit repository, including a sample AWS CloudFormation template for launching the instance, see Launch an Amazon EC2 Instance to Access the AWS CodeCommit Repository in the AWS for DevOps Guide.

 

Part 1: Set Up a Replica Repository

In this section, we’ll create an AWS CodeCommit repository and replicate your source repository to it.

  1. If you haven’t already done so, set up for AWS CodeCommit. Then follow the steps to create a CodeCommit repository in the region of your choice. Choose a name that will help you remember that this repository is a replica or backup repository. For example, you could create a repository in the US East (Ohio) region and name it MyReplicaRepo. This is the name and region we’ll use in this post.
  2. Use the git clone --mirror command to clone the source repository to your local computer, specifying the directory where you want to create the local repo. You are not cloning the repository you just created in AWS CodeCommit. You are cloning the repository you want to replicate or back up to that AWS CodeCommit repository. For example, to clone a sample application created for AWS demonstration purposes and hosted on GitHub (https://github.com/awslabs/aws-demo-php-simple-app.git) to a local repo in a directory named my-repo-replica:
git clone --mirror https://github.com/awslabs/aws-demo-php-simple-app.git my-repo-replica

IMPORTANT

  • DO NOT use your working directory as the local clone repository. Your work-in-progress commits would also be pushed for backup.
  • DO NOT make local changes to this local repository. It should be used for sync-up operations only.
  • DO NOT manually push any changes to this replica repository. It will cause conflicts later when your scheduled job pushes changes in the source repository. Treat it as a read-only repository, and push all of your development changes to your source repository.
  3. Change directories to the directory where you made the clone:
cd my-repo-replica
  4. Use the git remote add RemoteName RemoteRepositoryURL command to add the AWS CodeCommit repository you created as a remote repository for the local repo. Use an appropriate nickname, such as sync. (Because this is a mirror, the default nickname, origin, will already be in use.) For example, to add your AWS CodeCommit repository MyReplicaRepo as a remote for my-repo-replica with the nickname sync:
git remote add sync ssh://git-codecommit.us-east-2.amazonaws.com/v1/repos/MyReplicaRepo

When you push large repositories, consider using SSH instead of HTTPS. When you push a large change, a large number of changes, or a large repository, long-running HTTPS connections are often terminated prematurely due to networking issues or firewall settings. For more information about setting up AWS CodeCommit for SSH, see For SSH Connections on Linux, macOS, or Unix or For SSH Connections on Windows.

Tip

Use the git remote show command to review the list of remotes set for your local repo.

  5. Run the git push sync --mirror command to push to your replica repository.
  • If you named your remote for the replica repository something else, replace sync with your remote name.
  • The --mirror option specifies that all refs under refs/ (which includes, but is not limited to, refs/heads/, refs/remotes/, and refs/tags/) will be mirrored to the remote repository. If you only want to push branches and commits, but don’t care if you push other references such as tags, you can use the --all option instead.

 

Your replica repository is now ready for sync-up operations. To do a manual sync, run git pull to pull from your original repository, and then run git push sync --mirror to push to the replica repository. Again, do not push any local changes to your replica repository at any time.

 

Part 2: Create a Periodic Sync Job

You can use a number of tools to set up an automated sync job. In this section, we’ll briefly cover four common tools: a cron job (Linux), a task in Windows Task Scheduler (Windows), a launchd instance (macOS), and, for those users who already have a Jenkins server set up, a Freestyle project with build triggers. Feel free to use whatever tools are best for you.

Note

Some hosted repositories offer options for syncing repositories, such as Git hooks, notifications, and other triggers. To learn more about those options, consult the documentation for your source repository system.

 

All of the following approaches rely on commands that pull the latest changes from the source repository to your local clone repo, and then mirror those changes to your AWS CodeCommit repository. They can be summed up as follows:

cd /path/to/your/local/repo
git pull
git push sync --mirror

Where and how you save and schedule these commands depends on your operating system and tool(s). We’ve included just a few options/examples from a variety of approaches.

 

In Linux:

  1. At the terminal, run the crontab -e command to edit your crontab file in your default editor.
  2. Add a line for a new cron job that will change directories to your local clone repo, pull from your source repository, and mirror any changes to your AWS CodeCommit repository on the schedule you specify. For example, to run a daily job at 2:45 A.M. for a local repo named my-repo-replica in the ~/tmp directory where you nicknamed your remote (the AWS CodeCommit repository) sync, your new line might look like this:
45 2 * * * cd ~/tmp/my-repo-replica && git pull && git push sync --mirror
  3. Save the crontab file and exit your editor.

 

In Windows:

  1. Create a batch file that contains the command to change directories to your local clone repo, pull from your source repository, and mirror any changes up to your AWS CodeCommit repository. For example, if you created your local repo my-repo-replica in a c:\temp directory, and you nicknamed your remote (the AWS CodeCommit repository) sync, your file might look like this:
cd /d c:\temp\my-repo-replica
git pull
git push sync --mirror
  2. Save the batch file with a name like my-repo-backup.bat.
  3. Open Task Scheduler. (Not sure how? The simplest way is to open a command line and run taskschd.msc.)
  4. In Actions, choose Create Basic Task, and then follow the steps in the wizard.

 

In macOS:

  1. Create a shell script that contains the command to change directories to your local clone repo, pull from your source repository, and mirror any changes up to your AWS CodeCommit repository. For example, if you created your local repo my-repo-replica in a ~/Documents directory, and you nicknamed your remote (the AWS CodeCommit repository) sync, your file might look like this:
cd ~/Documents/my-repo-replica
git pull
git push sync --mirror
  2. Save the shell script with a name like my-repo-backup.sh.
  3. Create a launchd property list file that runs the shell script on the schedule you specify. For example, if you stored my-repo-backup.sh in ~/Documents, to run the script daily at 2:45 A.M., your plist file might look like this:
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
<dict>
    <key>Label</key>
    <string>com.example.codecommit.backup</string>
    <key>ProgramArguments</key>
    <array>
        <string>~/Documents/my-repo-backup.sh</string>
    </array>
    <key>StartCalendarInterval</key>
    <dict>
        <key>Minute</key>
        <integer>45</integer>
        <key>Hour</key>
        <integer>2</integer>
    </dict>
</dict>
</plist>
  4. Save your plist file in the ~/Library/LaunchAgents, /Library/LaunchAgents, or /Library/LaunchDaemons folder, depending on how you want the job defined.
  5. Run the launchctl command to load your job. For example, if you want to load a plist file named codecommit.sync.plist in ~/Library/LaunchAgents, your command might look like this:
launchctl load ~/Library/LaunchAgents/codecommit.sync.plist

 

For Jenkins:

  1. Open Jenkins.
  2. Create a new job as a Freestyle project.


  3. In the Build Triggers section, select Build periodically, and set up a schedule for the task. Jenkins uses cron expressions to run periodic tasks. For more information, see the Jenkins documentation for the syntax of cron.

If you are replicating a GitHub or BitBucket repository, you can also set the task to build when the Git hook is triggered.

The following example builds once a day between midnight and 1 A.M.

[Screenshot: Build Triggers schedule]

  4. In the Build section, add a build step and choose Execute Windows batch command or Execute Shell. Then write a script and implement the Git operations:
cd /path/to/your/local/repo
git pull
git push sync --mirror

Note: Jenkins may require the full path for Git.

The following example is a Windows batch command file, with the full path for Git on the host.

[Screenshot: Execute Windows batch command build step]

  5. Save the configuration for the task.

 

Your AWS CodeCommit replica repository will now be automatically updated with any changes to your source repository as scheduled.

We hope you’ve enjoyed this blog post. If you have questions or suggestions for future blog posts, please leave them in the comments below or visit our user forum!

 

Maximising site performance: 5 key considerations

Post Syndicated from Davy Jones original https://www.anchor.com.au/blog/2017/03/maximising-site-performance-key-considerations/

The ongoing performance of your website or application is an area where ‘not my problem’ can be a recurring sentiment from all stakeholders.  It’s not just a case of getting your shiny new website or application onto the biggest, spec-ed-up, dedicated server or cloud instance that money can buy because there are many factors that can influence the performance of your website that you, yes you, need to make friends with.

The relationship between site performance and business outcomes

Websites have evolved into web applications, growing from simple text in HTML format to complex, ‘rich’ multimedia content requiring buckets of storage and computing power. Your server needs to run complex scripts and processes, and serve up content to global visitors because, let’s face it, you probably have customers everywhere (or at least have plans to achieve a global customer base). It is a truth universally acknowledged that the performance of your website is directly related to customer experience, so underestimating the impact of poor site performance will hurt your brand reputation, sales revenue and business outcomes, jeopardising your business’ success.

Site performance stakeholders

There is an increasing range of literature around the growing importance of optimising site performance for maximum customer experience but who is responsible for owning the customer site experience? Is it the marketing team, development team, digital agency or your hosting provider? The short answer is that all of the stakeholders can either directly or indirectly impact your site performance.

Let’s explore this shared responsibility in more detail by breaking it down into five areas that affect a website’s performance.

5 key site performance considerations

In order to truly appreciate the performance of your website or application, you must take into consideration 5 key areas that affect your website’s ability to run at maximum performance:

  1. Site Speed
  2. Reliability and availability
  3. Code Efficiency
  4. Scalability
  5. Development Methodology
1. Site Speed

Site speed is the most critical metric. We all know and have experienced the frustration of “this site is slow, it takes too long to load!”. It’s the main (and sometimes, only) metric that most people would think about when it comes to the performance of a web application.

But what does it mean for a site to be slow? Well, it usually comes down to these factors:

a. The time it takes for the server to respond to a visitor requesting a page.
b. The time it takes to download all necessary content to display the website.
c.  The time it takes for your browser to load and display all the content.

Usually, the hosting provider looks after (a), and the developers look after (b) and (c), as those points are directly related to the web application.

2. Reliability and availability

Reliability and availability go hand-in-hand.

There’s no point in having a fast website if it’s not *reliably* fast. What do we mean by that?

Well, would you be happy if your website was only fast sometimes? If your Magento retail store is lightning fast when you are the only one using it, but becomes unresponsive during a sale, then the service isn’t performing up to scratch. The hosting provider has to provide you with a service that stays up, and can withstand the traffic going to it.

Outages are also inevitable, as 100% uptime is a myth. But with some clever infrastructure design, we can bring downtime as close to zero as possible! Here at Anchor, our services are built with availability in mind. If your service is inaccessible, then it’s not reliable.

Our multitude of hosting options on offer such as VPS, dedicated and cloud are designed specifically for your needs. Proactive and reactive support, and hands-on management means your server stays reliable and available.

We know some businesses are concerned about the very public outage of AWS in the US recently; however, AWS has taken action across all regions to prevent this from occurring again. AWS’s detailed response can be found at S3 Service Disruption in the Northern Virginia (US-EAST-1) Region.

As an advanced consulting partner with Amazon Web Services (AWS), we can guide customers through the many AWS configurations that will deliver the reliability required.  Considerations include utilising multiple availability zones, read-only replicas, automatic backups, and disaster recovery options such as warm standby.  

3. Code Efficiency

Let’s talk about efficiency of a codebase, that’s the innards of the application.

The code of an application determines how hard the CPU (the brain of your computer) has to work to process all the things the application wants to be able to do. The more work your application performs, the harder the CPU has to work to keep up.

In short, you want code to be efficient, and not have to do extra, unnecessary work. Here is a quick example:

# Example 1:    2 + 2 = 4

# Example 2:    ( ( ( 1 + 5 ) / 3 ) * 1 ) + 2 = 4

The end result is the same, but the first example gets straight to the point. It’s much easier to understand and faster to process. Efficient code means the server is able to do more with the same amount of resources, and most of the time it would also be faster!

We work with many code-efficient partners who create awesome sites that drive conversions. Get in touch if you’re looking for a code-efficient developer; we’d be happy to suggest one of our tried and tested partners.

4. Scalability

Accurately predicting the spikes in traffic to your website or application is tricky business. Over- or under-provisioning of infrastructure can be costly, so ensuring that your build has the potential to scale can help your website or application perform optimally at all times. Scaling up involves adding more resources to the current systems. Scaling out involves adding more nodes. Both have their advantages and disadvantages. If you want to know more, feel free to talk to any member of our sales team to get started.

If you are using a public cloud infrastructure like Amazon Web Services (AWS), there are several ways that scalability can be built into your infrastructure from the start. Clusters are at the heart of scalability, and there are a number of tools that can optimise your cluster efficiency, such as Amazon CloudWatch, which can trigger scaling activities, and Elastic Load Balancing, which directs traffic to the various clusters within your auto scaling group. For developers wanting complete control over AWS resources, Elastic Beanstalk may be more appropriate.

5. Development Methodology

Development methodologies describe the process of what needs to happen in order to introduce changes to software. A commonly used methodology nowadays is the ‘DevOps’ methodology.

What is DevOps?

It’s the union of Developers and IT Operations teams working together to achieve a common goal.

How can it improve your site’s performance?

Well, DevOps is a way of working, a culture that introduces close collaboration between the two teams of Developers and IT Operations in a single workflow. By integrating these teams, the process of creating, testing and deploying software applications can be streamlined. Instead of each team working in a silo, cross-functional teams work together to efficiently solve problems and get to a stable release faster. Faster releases mean that your website or application gets updates more frequently, and updating your application more frequently means you are faster to fix bugs and introduce new features. Check out this article ‘5 steps to prevent your website getting hacked‘ for more details.

The point is, the faster you can update your applications, the faster you can respond to any changes in your situation. So if DevOps has the potential to speed up delivery and improve your site or application performance, why isn’t everyone doing it?

Simply put, any change can be hard. And for a DevOps approach to be effective, each team involved needs to find new ways of working harmoniously with other teams toward a common goal. It’s not just a process change that is needed; toolsets, communication and company culture also need to be addressed.

The Anchor team love putting new tools through their paces.  We love to experiment and iterate on our processes in order to find one that works with our customers. We are experienced in working with a variety of teams, and love to challenge ourselves. If you are looking for an operations team to work with your development team, get in touch.

***
If your site is running slow or you are experiencing downtime, we can run a free hosting check up on your site and highlight the ‘quick wins’ on your site to boost performance.

The post Maximising site performance: 5 key considerations appeared first on AWS Managed Services by Anchor.

How to Access the AWS Management Console Using AWS Microsoft AD and Your On-Premises Credentials

Post Syndicated from Vijay Sharma original https://aws.amazon.com/blogs/security/how-to-access-the-aws-management-console-using-aws-microsoft-ad-and-your-on-premises-credentials/

AWS Directory Service for Microsoft Active Directory, also known as AWS Microsoft AD, is a managed Microsoft Active Directory (AD) hosted in the AWS Cloud. Now, AWS Microsoft AD makes it easy for you to give your users permission to manage AWS resources by using on-premises AD administrative tools. With AWS Microsoft AD, you can grant your on-premises users permissions to resources such as the AWS Management Console instead of adding AWS Identity and Access Management (IAM) user accounts or configuring AD Federation Services (AD FS) with Security Assertion Markup Language (SAML).

In this blog post, I show how to use AWS Microsoft AD to enable your on-premises AD users to sign in to the AWS Management Console with their on-premises AD user credentials to access and manage AWS resources through IAM roles.

Background

AWS customers use on-premises AD to administer user accounts, manage group memberships, and control access to on-premises resources. If you are like many AWS Microsoft AD customers, you also might want to enable your users to sign in to the AWS Management Console using on-premises AD credentials to manage AWS resources such as Amazon EC2, Amazon RDS, and Amazon S3.

Enabling such sign-in permissions has four key benefits:

  1. Your on-premises AD group administrators can now manage access to AWS resources with standard AD administration tools instead of IAM.
  2. Your users need to remember only one identity to sign in to AD and the AWS Management Console.
  3. Because users sign in with their on-premises AD credentials, access to the AWS Management Console benefits from your AD-enforced password policies.
  4. When you remove a user from AD, AWS Microsoft AD and IAM automatically revoke their access to AWS resources.

IAM roles provide a convenient way to define permissions to manage AWS resources. By using an AD trust between AWS Microsoft AD and your on-premises AD, you can assign your on-premises AD users and groups to IAM roles. This gives the assigned users and groups the IAM roles’ permissions to manage AWS resources. By assigning on-premises AD groups to IAM roles, you can now manage AWS access through standard AD administrative tools such as AD Users and Computers (ADUC).

After you assign your on-premises users or groups to IAM roles, your users can sign in to the AWS Management Console with their on-premises AD credentials. From there, they can select from a list of their assigned IAM roles. After they select a role, they can perform the management functions that you assigned to the IAM role.

In the rest of this post, I show you how to accomplish this in four steps:

  1. Create an access URL.
  2. Enable AWS Management Console access.
  3. Assign on-premises users and groups to IAM roles.
  4. Connect to the AWS Management Console.

Prerequisites

The instructions in this blog post require you to have an AWS Microsoft AD directory and a trust relationship to your on-premises AD already up and running.

Note: You can assign IAM roles to user identities stored in AWS Microsoft AD. For this post, I focus on assigning IAM roles to user identities stored in your on-premises AD. This requires a forest trust relationship between your on-premises Active Directory and your AWS Microsoft AD directory.

Solution overview

For the purposes of this post, I am the administrator who manages both AD and IAM roles in my company. My company wants to enable all employees to use on-premises credentials to sign in to the AWS Management Console to access and manage their AWS resources. My company uses EC2, RDS, and S3. To manage administrative permissions to these resources, I created a role for each service that gives full access to the service. I named these roles EC2FullAccess, RDSFullAccess, and S3FullAccess.

My company has two teams with different responsibilities, and we manage users in AD security groups. Mary is a member of the DevOps security group and is responsible for creating and managing our RDS databases, running data collection applications on EC2, and archiving information in S3. John and Richard are members of the BIMgrs security group and use EC2 to run analytics programs against the database. Though John and Richard need access to the database and archived information, they do not need to operate those systems. They do need permission to administer their own EC2 instances.

To grant appropriate access to the AWS resources, I need to assign the BIMgrs security group in AD to the EC2FullAccess role in IAM, and I need to assign the DevOps group to all three roles (EC2FullAccess, RDSFullAccess, and S3FullAccess). Also, I want to make sure all our employees have adequate time to complete administrative actions after signing in to the AWS Management Console, so I increase the console session timeout from 60 minutes to 240 minutes (4 hours).

The following diagram illustrates the relationships between my company’s AD users and groups and my company’s AWS roles and services. The left side of the diagram represents my on-premises AD that contains users and groups. The right side represents the AWS Cloud that contains the AWS Management Console, AWS resources, IAM roles, and our AWS Microsoft AD directory connected to our on-premises AD via a forest trust relationship.

[Diagram: on-premises AD users and groups mapped to IAM roles and AWS resources]

Let’s get started with the steps for this scenario. For this post, I have already created an AWS Microsoft AD directory and established a two-way forest trust from AWS Microsoft AD to my on-premises AD. To manage access to AWS resources, I have also created the following IAM roles:

  • EC2FullAccess: Provides full access to EC2 and has the AmazonEC2FullAccess AWS managed policy attached.
  • RDSFullAccess: Provides full access to RDS via the AWS Management Console and has the AmazonRDSFullAccess managed policy attached.
  • S3FullAccess: Provides full access to S3 via the AWS Management Console and has the AmazonS3FullAccess managed policy attached.

To learn more about how to create IAM roles and attach managed policies, see Attaching Managed Policies.

Note: You must include a Directory Service trust policy on all roles that require access by users who sign in to the AWS Management Console using Microsoft AD. To learn more, see Editing the Trust Relationship for an Existing Role.
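For reference, a role that AWS Directory Service can use for console sign-in typically carries a trust policy along these lines. This is a sketch; confirm the exact statement against the documentation linked above:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": { "Service": "ds.amazonaws.com" },
      "Action": "sts:AssumeRole"
    }
  ]
}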

Step 1 – Create an access URL

The first step to enabling access to the AWS Management Console is to create a unique Access URL for your AWS Microsoft AD directory. An Access URL is a globally unique URL. AWS applications, such as the AWS Management Console, use the URL to connect to the AWS sign-in page that is linked to your AWS Microsoft AD directory. The Access URL does not provide any other access to your directory. To learn more about Access URLs, see Creating an Access URL.

Follow these steps to create an Access URL:

  1. Navigate to the Directory Service Console and choose your AWS Microsoft AD Directory ID.
  2. On the Directory Details page, choose the Apps & Services tab, type a unique access alias in the Access URL box, and then choose Create Access URL to create an Access URL for your directory.
    Screenshot of creating an Access URL

Your directory Access URL should be in the following format: <access-alias>.awsapps.com. In this example, I am using https://example-corp.awsapps.com.

Step 2 – Enable AWS Management Console access

To allow users to sign in to AWS Management Console with their on-premises credentials, you must enable AWS Management Console access for your AWS Microsoft AD directory:

  1. From the Directory Service console, choose your AWS Microsoft AD Directory ID. Choose the AWS Management Console link in the AWS apps & services section.
    Screenshot of choosing the AWS Management Console link
  2. In the Enable AWS Management Console dialog box, choose Enable Access to enable console access for your directory.
    Screenshot of choosing Enable Access

This enables AWS Management Console access for your AWS Microsoft AD directory and provides you a URL that you can use to connect to the console. The URL is generated by appending “/console” to the end of the access URL that you created in Step 1: <access-alias>.awsapps.com/console. In this example, the AWS Management Console URL is https://example-corp.awsapps.com/console.
Screenshot of the URL to connect to the console

Step 3 – Assign on-premises users and groups to IAM roles

Before your users can use your Access URL to sign in to the AWS Management Console, you need to assign on-premises users or groups to IAM roles. This critical step enables you to control which AWS resources your on-premises users and groups can access from the AWS Management Console.

In my on-premises Active Directory, Mary is already a member of the DevOps group, and John and Richard are members of the BIMgrs group. I already set up the trust from AWS Microsoft AD to my on-premises AD, and I already created the EC2FullAccess, RDSFullAccess, and S3FullAccess roles that I will use.

I am now ready to assign on-premises groups to IAM roles. I do this by assigning the DevOps group to the EC2FullAccess, RDSFullAccess, and S3FullAccess IAM roles, and the BIMgrs group to the EC2FullAccess IAM role. Follow these steps to assign on-premises groups to IAM roles:

  1. Open the Directory Service details page of your AWS Microsoft AD directory and choose the AWS Management Console link on the Apps & services tab. Choose Continue to navigate to the Add Users and Groups to Roles page.
    Screenshot of Manage access to AWS Resources dialog box
  2. On the Add Users and Groups to Roles page, I see the three IAM roles that I have already configured (shown in the following screenshot). If you do not have any IAM roles with a Directory Service trust policy enabled, you can create new roles or enable Directory Service for existing roles.
  3. I will now assign the on-premises DevOps and BIMgrs groups to the EC2FullAccess role. To do so, I choose the EC2FullAccess IAM role link to navigate to the Role Detail page. Next, I choose the Add button to assign users or groups to the role, as shown in the following screenshot.
  4. In the Add Users and Groups to Role pop-up window, I select the on-premises Active Directory forest that contains the users and groups to assign. In this example, that forest is amazondomains.com. Note: If you do not use a trust to an on-premises AD and you create users and groups in your AWS Microsoft AD directory, you can choose the default, this forest, to search for users in Microsoft AD.
  5. To assign an Active Directory group, choose the Group filter above the Search for field. Type the name of the Active Directory group in the search box and choose the search button (the magnifying glass). You can see that I was able to search for the DevOps group from my on-premises Active Directory.
  6. In this case, I added the on-premises groups, DevOps and BIMgrs, to the EC2FullAccess role. When finished, choose the Add button to assign users and groups to the IAM role. You have now successfully granted DevOps and BIMgrs on-premises AD groups full access to EC2. Users in these AD groups can now sign in to AWS Management Console using their on-premises credentials and manage EC2 instances.

From the Add Users and Groups to Roles page, I repeat the process to assign the remaining groups to the IAM roles. In the following screenshot, you can see that I have assigned the DevOps group to three roles and the BIMgrs group to only one role.

With my AD security groups assigned to my IAM roles, I can now add and delete on-premises users to the security groups to grant or revoke permissions to the IAM roles. Users in these security groups have access to all of their assigned roles.

  7. You can optionally set the login session length for your AWS Microsoft AD directory. The default length is 1 hour, but you can increase it up to 12 hours. In my example, I set the console session time to 240 minutes (4 hours).

Step 4 – Connect to the AWS Management Console

I am now ready for my users to sign in to the AWS Management Console with their on-premises credentials. I emailed my users the access URL I created in Step 2: https://example-corp.awsapps.com/console. Now my users can go to the URL to sign in to the AWS Management Console.

When Mary, who is a member of DevOps group, goes to the access URL, she sees a sign-in page to connect to the AWS Management Console. In the Username box, she can enter her sign-in name in three different ways:

Because the DevOps group is associated with three IAM roles, and because Mary is in the DevOps group, she can choose the role she wants from the list presented after she successfully logs in. The following screenshot shows this step.

If you also would like to secure the AWS Management Console with multi-factor authentication (MFA), you can add MFA to your AWS Microsoft AD configuration. To learn more about enabling MFA on Microsoft AD, see How to Enable Multi-Factor Authentication for AWS Services by Using AWS Microsoft AD and On-Premises Credentials.

Summary

AWS Microsoft AD makes it easier for you to connect to the AWS Management Console by using your on-premises credentials. It also enables you to reuse your on-premises AD security policies such as password expiration, password history, and account lockout policies while still controlling access to AWS resources.

To learn more about Directory Service, see the AWS Directory Service home page. If you have questions about this blog post, please start a new thread on the Directory Service forum.

– Vijay

AWS Quick Starts Update – Tableau, Splunk, Compliance, Alfresco, Symantec

Post Syndicated from Jeff Barr original https://aws.amazon.com/blogs/aws/aws-quick-starts-update-tableau-splunk-compliance-alfresco-symantec/

AWS Quick Starts help you to deploy popular solutions on AWS. Each Quick Start is designed by AWS solutions architects or partners, and makes use of AWS best practices for security and high availability. You can use them to spin up test or production environments that you can use right away.

The Quick Starts include comprehensive deployment guides and AWS CloudFormation templates that you can launch with a single click. The collection of Quick Starts is broken down into seven categories, as follows:

  • DevOps
  • Databases & storage
  • Big Data & analytics
  • Security & compliance
  • Microsoft & SAP
  • Networking & access
  • Additional

Over the past two months we have added six new Quick Starts to our collection, bringing the total up to 42. Today I would like to give you an overview of the newest Quick Starts in each category.

Tableau Server (Big data & analytics)
The Tableau Server on AWS Quick Start helps you to deploy a fully functional Tableau Server on the AWS Cloud. You can launch a single node deployment in your default VPC, or a multi-node cluster deployment in a new or existing VPC. Here’s the cluster architecture:

The CloudFormation template will prompt you for (among other things) your Tableau Activation Key.

Splunk Enterprise (Big data & analytics)
The Splunk Enterprise on AWS Quick Start helps you to deploy a distributed Splunk Enterprise environment on the AWS Cloud. You can launch into an existing VPC with two or more Availability Zones or you can create a new VPC. Here’s the architecture:

The template will prompt you for the name of an S3 bucket and the path (within the bucket) to a Splunk license file.

UK OFFICIAL (Security & compliance)
The UK-OFFICIAL on AWS Quick Start sets up a standardized AWS Cloud environment that supports workloads that are classified as United Kingdom (UK) OFFICIAL. The environment aligns with the in-scope guidelines found in the NCSC Cloud Security Principles and the CIS Critical Security Controls (take a look at the security controls matrix to learn more). Here’s the architecture:

Alfresco One
The Alfresco One on AWS Quick Start helps you to deploy an Alfresco One Enterprise Content Management server cluster in the AWS Cloud. It can be deployed into an existing VPC, or it can set up a new one with public and private subnets. Here’s the architecture:

You will need to have an Alfresco trial license in order to launch the cluster.

Symantec Protection Engine (Security & compliance)
The Symantec Protection Engine on AWS Quick Start helps you to deploy Symantec Protection Engine (SPE) in less than an hour. Once deployed (into a new or existing VPC), you can use SPE’s APIs to incorporate malware and threat detection into your applications. You can also connect it to proxies and scan traffic for viruses, trojans, and other types of malware. Here’s the architecture:

You will need to purchase an SPE license or subscribe to the SPE AMI in order to use this Quick Start.

For More Info
To learn more about our Quick Starts, check out the Quick Starts FAQ. If you are interested in authoring a Quick Start of your own, read our Quick Starts Contributor’s Guide.

Jeff;

 

Amazon ECS Events in February

Post Syndicated from Chris Barclay original https://aws.amazon.com/blogs/compute/amazon-ecs-events-in-february/

Here are some upcoming events for Amazon ECS this month:

Container World: Abby Fuller, senior AWS technical evangelist, will be speaking about Amazon ECS at Container World on Feb 21-23. Check out her schedule.

Microservices Day @ AWS NY Loft: Microservices Day is on Feb 24 as part of the DevOps | AWS Loft Architecture Week. Learn more about how to build and deploy microservices architectures on AWS. We will cover how to use Amazon ECS and AWS Lambda to build microservices. Sign up here.

Seattle AWS Architects & Engineers Meetup: Join us Feb 28 at SURF Incubator to learn more about AWS Batch and Amazon ECS. Food and drinks provided. RSVP here.