Tag Archives: instances

Catching Up on Some Recent AWS Launches and Publications

Post Syndicated from Jeff Barr original https://aws.amazon.com/blogs/aws/catching-up-on-some-recent-aws-launches-and-publications/

As I have noted in the past, the AWS Blog Team is working hard to make sure that you know about as many AWS launches and publications as possible, without totally burying you in content! As part of our balancing act, we will occasionally publish catch-up posts to clear our queues and to bring more information to your attention. Here’s what I have in store for you today:

  • Monitoring for Cross-Region Replication of S3 Objects
  • Tags for Spot Fleet Instances
  • PCI DSS Compliance for 12 More Services
  • HIPAA Eligibility for WorkDocs
  • VPC Resizing
  • AppStream 2.0 Graphics Design Instances
  • AMS Connector App for ServiceNow
  • Regtech in the Cloud
  • New & Revised Quick Starts

Let’s jump right in!

Monitoring for Cross-Region Replication of S3 Objects
I told you about cross-region replication for S3 a couple of years ago. As I showed you at the time, you simply enable versioning for the source bucket and then choose a destination region and bucket. You can check the replication status manually, or you can create an inventory (daily or weekly) of the source and destination buckets.

The Cross-Region Replication Monitor (CRR Monitor for short) solution checks the replication status of objects across regions and gives you metrics and failure notifications in near real-time.

To learn more, read the CRR Monitor Implementation Guide and then use the AWS CloudFormation template to Deploy the CRR Monitor.
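If you prefer to set up the underlying replication programmatically, here is a minimal boto3 sketch of the versioning-plus-replication-rule configuration; the bucket names and IAM role ARN are placeholders, and the destination bucket is assumed to already exist in another region:

import boto3

s3 = boto3.client('s3')

# Versioning must be enabled on both the source and destination buckets
for bucket in ['my-source-bucket', 'my-dest-bucket']:   # placeholder bucket names
    s3.put_bucket_versioning(
        Bucket=bucket,
        VersioningConfiguration={'Status': 'Enabled'}
    )

# Replicate every new object from the source bucket to the destination bucket
s3.put_bucket_replication(
    Bucket='my-source-bucket',
    ReplicationConfiguration={
        'Role': 'arn:aws:iam::123456789012:role/s3-crr-role',   # placeholder replication role
        'Rules': [{
            'ID': 'ReplicateAll',
            'Prefix': '',
            'Status': 'Enabled',
            'Destination': {'Bucket': 'arn:aws:s3:::my-dest-bucket'}
        }]
    }
)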

Tags for Spot Fleet Instances
Spot Instances and Spot Fleets (collections of Spot Instances) give you access to spare compute capacity. We recently gave you the ability to enter tags (key/value pairs) as part of your spot requests and to have those tags applied to the EC2 instances launched to fulfill the request:

To learn more, read Tag Your Spot Fleet EC2 Instances.
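As a rough illustration of how the tags ride along with the request, here is a hedged boto3 sketch of a Spot Fleet request that tags the launched instances; the fleet role ARN, AMI ID, and tag values are placeholders:

import boto3

ec2 = boto3.client('ec2', region_name='us-east-1')

response = ec2.request_spot_fleet(
    SpotFleetRequestConfig={
        'IamFleetRole': 'arn:aws:iam::123456789012:role/aws-ec2-spot-fleet-tagging-role',  # placeholder
        'TargetCapacity': 2,
        'SpotPrice': '0.05',
        'LaunchSpecifications': [{
            'ImageId': 'ami-0123456789abcdef0',   # placeholder AMI
            'InstanceType': 'c4.large',
            # Tags applied to the EC2 instances launched to fulfill the request
            'TagSpecifications': [{
                'ResourceType': 'instance',
                'Tags': [
                    {'Key': 'Project', 'Value': 'spot-demo'},
                    {'Key': 'Owner', 'Value': 'jeff'}
                ]
            }]
        }]
    }
)
print(response['SpotFleetRequestId'])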

PCI DSS Compliance for 12 More Services
As first announced on the AWS Security Blog, we recently added 12 more services to our PCI DSS compliance program, raising the total number of in-scope services to 42. To learn more, check out our Compliance Resources.

HIPAA Eligibility for WorkDocs
In other compliance news, we announced that Amazon WorkDocs has achieved HIPAA eligibility and PCI DSS compliance in all AWS Regions where WorkDocs is available.

VPC Resizing
This feature allows you to extend an existing Virtual Private Cloud (VPC) by adding additional blocks of addresses. This gives you more flexibility and should help you to deal with growth. You can add up to four secondary /16 CIDRs per VPC. You can also edit the secondary CIDRs by deleting them and adding new ones. Simply select the VPC and choose Edit CIDRs from the menu:

Then add or remove CIDR blocks as desired:

To learn more, read about VPCs and Subnets.
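Programmatically, adding and "editing" a secondary CIDR might look like the following boto3 sketch; the VPC ID and CIDR ranges are placeholders:

import boto3

ec2 = boto3.client('ec2')

# Add a secondary CIDR block to an existing VPC (VPC ID is a placeholder)
assoc = ec2.associate_vpc_cidr_block(
    VpcId='vpc-0123456789abcdef0',
    CidrBlock='10.1.0.0/16'
)
association_id = assoc['CidrBlockAssociation']['AssociationId']

# "Editing" a secondary CIDR means removing it and adding a new one
ec2.disassociate_vpc_cidr_block(AssociationId=association_id)
ec2.associate_vpc_cidr_block(VpcId='vpc-0123456789abcdef0', CidrBlock='10.2.0.0/16')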

AppStream 2.0 Graphics Design Instances
Powered by AMD FirePro S7150x2 Server GPUs and equipped with AMD Multiuser GPU technology, the new Graphics Design instances for Amazon AppStream 2.0 will let you run and stream graphics applications more cost-effectively than ever. The instances are available in four sizes, with 2-16 vCPUs and 7.5 GB to 61 GB of memory.

To learn more, read Introducing Amazon AppStream 2.0 Graphics Design, a New Lower Cost Instance Type for Streaming Graphics Applications.

AMS Connector App for ServiceNow
AWS Managed Services (AMS) provides Infrastructure Operations Management for the Enterprise. Designed to accelerate cloud adoption, it automates common operations such as change requests, patch management, security and backup.

The new AMS integration App for ServiceNow lets you interact with AMS from within ServiceNow, with no need for any custom development or API integration.

To learn more, read Cloud Management Made Easier: AWS Managed Services Now Integrates with ServiceNow.

Regtech in the Cloud
Regtech (as I learned while writing this) is short for regulatory technology, and is all about using innovative technology such as cloud computing, analytics, and machine learning to address regulatory challenges.

Working together with APN Consulting Partner Cognizant, TABB Group recently published a thought leadership paper that explains why regulations and compliance pose huge challenges for our customers in the financial services industry, and shows how AWS can help!

New & Revised Quick Starts
Our Quick Starts team has been cranking out new solutions and making significant updates to the existing ones. Here’s a roster:

  • Alfresco Content Services (v2)
  • Atlassian Confluence
  • Confluent Platform
  • Data Lake
  • Datastax Enterprise
  • GitHub Enterprise
  • Hashicorp Nomad
  • HIPAA
  • Hybrid Data Lake with Wandisco Fusion
  • IBM MQ
  • IBM Spectrum Scale
  • Informatica EIC
  • Magento (v2)
  • Linux Bastion (v2)
  • Modern Data Warehouse with Tableau
  • MongoDB (v2)
  • NetApp ONTAP
  • NGINX (v2)
  • RD Gateway
  • Red Hat Openshift
  • SAS Grid
  • SIOS Datakeeper
  • StorReduce
  • SQL Server (v2)

And that’s all I have for today!

Jeff;

New – Stop & Resume Workloads on EC2 Spot Instances

Post Syndicated from Jeff Barr original https://aws.amazon.com/blogs/aws/new-stop-resume-workloads-on-ec2-spot-instances/

EC2 Spot Instances give you access to spare EC2 compute capacity at up to 90% off of the On-Demand rates. Starting with the ability to request a specific number of instances of a particular size, we made Spot Instances even more useful and flexible with support for Spot Fleets and Auto Scaling Spot Fleets, allowing you to maintain any desired level of compute capacity.

EC2 users have long had the ability to stop running instances while leaving EBS volumes attached, opening the door to applications that automatically pick up where they left off when the instance starts running again.

Stop and Resume Spot Workloads
Today we are blending these two important features, allowing you to set up Spot bids and Spot Fleets that respond by stopping (rather than terminating) instances when capacity is no longer available at or below your bid price. EBS volumes attached to stopped instances remain intact, as does the EBS-backed root volume. When capacity becomes available, the instances are started and can keep on going without having to spend time provisioning applications, setting up EBS volumes, downloading data, joining network domains, and so forth.

Many AWS customers have enhanced their applications to create and make use of checkpoints, adding some resilience and gaining the ability to take advantage of EC2’s start/stop feature in the process. These customers will now be able to run these applications on Spot Instances, with savings that average 70-90%.

While the instances are stopped, you can modify the EBS Optimization, User data, Ramdisk ID, and Delete on Termination attributes. Stopped Spot Instances do not incur any charges for compute time; space for attached EBS volumes is charged at the usual rates.

Here’s how you create a Spot bid or Spot Fleet and specify the use of stop/start:
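In the console this comes down to choosing the stop interruption behavior for the request; with the SDK, a roughly equivalent boto3 sketch looks like this (the AMI ID, key pair, and bid price are placeholders, and the stop behavior requires a persistent request against an EBS-backed AMI):

import boto3

ec2 = boto3.client('ec2', region_name='us-east-1')

response = ec2.request_spot_instances(
    InstanceCount=1,
    Type='persistent',                      # stop/start requires a persistent request
    InstanceInterruptionBehavior='stop',    # stop (not terminate) when capacity is reclaimed
    SpotPrice='0.05',                       # placeholder bid
    LaunchSpecification={
        'ImageId': 'ami-0123456789abcdef0', # placeholder EBS-backed AMI
        'InstanceType': 'c4.large',
        'KeyName': 'my-key-pair'            # placeholder key pair
    }
)
print(response['SpotInstanceRequests'][0]['SpotInstanceRequestId'])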

Things to Know
This feature is available now and you can start using it today in all AWS Regions where Spot Instances are available. It is designed to work well in conjunction with the new per-second billing for EC2 instances and EBS volumes, with the potential for another dimension of cost savings over and above that provided by Spot Instances.

EBS volumes always exist within a particular Availability Zone (AZ). As a result, Spot and Spot Fleet requests that specify a particular AZ will always restart in that AZ.

Take care when using this feature in conjunction with Spot Fleets that have the potential to span a wide variety of instance types. Because the composition of the fleet can change over time, you need to pay attention to your account’s limits for IP addresses and EBS volumes.

I’m looking forward to hearing about the new and creative uses that you’ll come up with for this feature. If you thought that your application was not a good fit for Spot Instances, or if the overhead needed to handle interruptions was too high, it is time to take another look!

Jeff;

 

Using AWS CodePipeline, AWS CodeBuild, and AWS Lambda for Serverless Automated UI Testing

Post Syndicated from Prakash Palanisamy original https://aws.amazon.com/blogs/devops/using-aws-codepipeline-aws-codebuild-and-aws-lambda-for-serverless-automated-ui-testing/

Testing the user interface of a web application is an important part of the development lifecycle. In this post, I’ll explain how to automate UI testing using serverless technologies, including AWS CodePipeline, AWS CodeBuild, and AWS Lambda.

I built a website for UI testing that is hosted in S3. I used Selenium to perform cross-browser UI testing on Chrome, Firefox, and PhantomJS, a headless WebKit browser with Ghost Driver, an implementation of the WebDriver Wire Protocol. I used Python to create test cases for ChromeDriver, FirefoxDriver, or PhantomJSDriver based on the browser against which the test is being executed.

Resources referred to in this post, including the AWS CloudFormation template, test and status websites hosted in S3, AWS CodeBuild build specification files, AWS Lambda function, and the Python script that performs the test are available in the serverless-automated-ui-testing GitHub repository.

S3 Hosted Test Website:

AWS CodeBuild supports custom containers, so we can use the Selenium/standalone-Firefox and Selenium/standalone-Chrome containers, which include prebuilt Firefox and Chrome browsers, respectively. Xvfb performs the graphical operation in virtual memory without any display hardware. It will be installed in the CodeBuild containers during the install phase.

Build Spec for Chrome and Firefox

The build specification for Chrome and Firefox testing includes multiple phases:

  • The environment variables section contains a set of default variables that are overridden while creating the build project or triggering the build.
  • As part of the install phase, required packages like Xvfb and Selenium are installed using apt-get and pip.
  • During the pre_build phase, the test bed is prepared for test execution.
  • During the build phase, the appropriate DISPLAY is set and the tests are executed.
version: 0.2

env:
  variables:
    BROWSER: "chrome"
    WebURL: "https://sampletestweb.s3-eu-west-1.amazonaws.com/website/index.html"
    ArtifactBucket: "codebuild-demo-artifact-repository"
    MODULES: "mod1"
    ModuleTable: "test-modules"
    StatusTable: "blog-test-status"

phases:
  install:
    commands:
      - apt-get update
      - apt-get -y upgrade
      - apt-get install xvfb python python-pip build-essential -y
      - pip install --upgrade pip
      - pip install selenium
      - pip install awscli
      - pip install requests
      - pip install boto3
      - cp xvfb.init /etc/init.d/xvfb
      - chmod +x /etc/init.d/xvfb
      - update-rc.d xvfb defaults
      - service xvfb start
      - export PATH="$PATH:`pwd`/webdrivers"
  pre_build:
    commands:
      - python prepare_test.py
  build:
    commands:
      - export DISPLAY=:5
      - cd tests
      - echo "Executing simple test..."
      - python testsuite.py

Because Ghost Driver runs headless, it can be executed on AWS Lambda. In keeping with a fire-and-forget model, I used CodeBuild to create the PhantomJS Lambda function and trigger the test invocations on Lambda in parallel. This is powerful because many tests can be executed in parallel on Lambda.

Build Spec for PhantomJS

The build specification for PhantomJS testing also includes multiple phases. It is a little different from the preceding example because we are using AWS Lambda for the test execution.

  • The environment variables section contains a set of default variables that are overridden while creating the build project or triggering the build.
  • As part of the install phase, the required packages like Selenium and the AWS CLI are installed using apt-get and pip.
  • During the pre_build phase, the test bed is prepared for test execution.
  • During the build phase, a zip file that will be used to create the PhantomJS Lambda function is created and tests are executed on the Lambda function.
version: 0.2

env:
  variables:
    BROWSER: "phantomjs"
    WebURL: "https://sampletestweb.s3-eu-west-1.amazonaws.com/website/index.html"
    ArtifactBucket: "codebuild-demo-artifact-repository"
    MODULES: "mod1"
    ModuleTable: "test-modules"
    StatusTable: "blog-test-status"
    LambdaRole: "arn:aws:iam::account-id:role/role-name"

phases:
  install:
    commands:
      - apt-get update
      - apt-get -y upgrade
      - apt-get install python python-pip build-essential -y
      - apt-get install zip unzip -y
      - pip install --upgrade pip
      - pip install selenium
      - pip install awscli
      - pip install requests
      - pip install boto3
  pre_build:
    commands:
      - python prepare_test.py
  build:
    commands:
      - cd lambda_function
      - echo "Packaging Lambda Function..."
      - zip -r /tmp/lambda_function.zip ./*
      - func_name=`echo $CODEBUILD_BUILD_ID | awk -F ':' '{print $1}'`-phantomjs
      - echo "Creating Lambda Function..."
      - chmod 777 phantomjs
      - |
         func_list=`aws lambda list-functions | grep FunctionName | awk -F':' '{print $2}' | tr -d ', "'`
         if echo "$func_list" | grep -qw $func_name
         then
             echo "Lambda function already exists."
         else
             aws lambda create-function --function-name $func_name --runtime "python2.7" --role $LambdaRole --handler "testsuite.lambda_handler" --zip-file fileb:///tmp/lambda_function.zip --timeout 150 --memory-size 1024 --environment Variables="{WebURL=$WebURL, StatusTable=$StatusTable}" --tags Name=$func_name
         fi
      - export PhantomJSFunction=$func_name
      - cd ../tests/
      - python testsuite.py

The list of test cases and the test modules that belong to each case are stored in an Amazon DynamoDB table. Based on the list of modules passed as an argument to the CodeBuild project, CodeBuild gets the test cases from that table and executes them. The test execution status and results are stored in another Amazon DynamoDB table. The status website reads the test status from the status table in DynamoDB and displays it.
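The exact table schema isn't shown here, so the following boto3 sketch only illustrates the pattern: read a module's test cases from DynamoDB, then fire one asynchronous Lambda invocation per test case so they run in parallel. The table, attribute, and function names are assumptions:

import json
import boto3

dynamodb = boto3.resource('dynamodb')
lambda_client = boto3.client('lambda')

# Assumed table and attribute names -- adjust to match your own schema
module_table = dynamodb.Table('test-modules')

def run_tests_on_lambda(function_name, module_name):
    item = module_table.get_item(Key={'module': module_name})['Item']
    for test_case in item['testcases']:
        # Asynchronous ("fire and forget") invocation; one Lambda execution per test case
        lambda_client.invoke(
            FunctionName=function_name,
            InvocationType='Event',
            Payload=json.dumps({'module': module_name, 'testcase': test_case})
        )

run_tests_on_lambda('codebuild-demo-phantomjs', 'mod1')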

AWS CodeBuild and AWS Lambda perform the test execution as individual tasks. AWS CodePipeline plays an important role here by enabling continuous delivery and parallel execution of tests for optimized testing.

Here’s how to do it:

In AWS CodePipeline, create a pipeline with four stages:

  • Source (AWS CodeCommit)
  • UI testing (AWS Lambda and AWS CodeBuild)
  • Approval (manual approval)
  • Production (AWS Lambda)

Pipeline stages, the actions in each stage, and transitions between stages are shown in the following diagram.

This design implemented in AWS CodePipeline looks like this:

CodePipeline automatically detects a change in the source repository and triggers the execution of the pipeline.

In the UITest stage, there are two parallel actions:

  • DeployTestWebsite invokes a Lambda function to deploy the test website in S3 as an S3 website.
  • DeployStatusPage invokes another Lambda function, in parallel, to deploy the status website in S3 as an S3 website.

Next, there are three parallel actions that trigger the CodeBuild project:

  • TestOnChrome launches a container to perform the Selenium tests on Chrome.
  • TestOnFirefox launches another container to perform the Selenium tests on Firefox.
  • TestOnPhantomJS creates a Lambda function and invokes individual Lambda functions per test case to execute the test cases in parallel.

You can monitor the status of the test execution on the status website, as shown here:

When the UI testing is completed successfully, the pipeline continues to an Approval stage in which a notification is sent to the configured SNS topic. The designated team member reviews the test status and approves or rejects the deployment. Upon approval, the pipeline continues to the Production stage, where it invokes a Lambda function and deploys the website to a production S3 bucket.

I used a CloudFormation template to set up my continuous delivery pipeline. The automated-ui-testing.yaml template, available from GitHub, sets up a full-featured pipeline.

When I use the template to create my pipeline, I specify the following:

  • AWS CodeCommit repository.
  • SNS topic to send approval notification.
  • S3 bucket name where the artifacts will be stored.

The stack name should follow the rules for S3 bucket naming because it will be part of the S3 bucket name.

When the stack is created successfully, the URLs for the test website and status website appear in the Outputs section, as shown here:

Conclusion

In this post, I showed how you can use AWS CodePipeline, AWS CodeBuild, AWS Lambda, and a manual approval process to create a continuous delivery pipeline for serverless automated UI testing. Websites running on Amazon EC2 instances or AWS Elastic Beanstalk can also be tested using a similar approach.


About the author

Prakash Palanisamy is a Solutions Architect for Amazon Web Services. When he is not working on Serverless, DevOps or Alexa, he will be solving problems in Project Euler. He also enjoys watching educational documentaries.

Automating Amazon EBS Snapshot Management with AWS Step Functions and Amazon CloudWatch Events

Post Syndicated from Andy Katz original https://aws.amazon.com/blogs/compute/automating-amazon-ebs-snapshot-management-with-aws-step-functions-and-amazon-cloudwatch-events/

Brittany Doncaster, Solutions Architect

Business continuity is important for building mission-critical workloads on AWS. As an AWS customer, you might define recovery point objectives (RPO) and recovery time objectives (RTO) for different tier applications in your business. After the RPO and RTO requirements are defined, it is up to your architects to determine how to meet those requirements.

You probably store persistent data in Amazon EBS volumes, which live within a single Availability Zone. And, following best practices, you take snapshots of your EBS volumes to back up the data on Amazon S3, which provides 11 9’s of durability. If you are following these best practices, then you’ve probably recognized the need to manage the number of snapshots you keep for a particular EBS volume and delete older, unneeded snapshots. Doing this cleanup helps save on storage costs.

Some customers also have policies stating that backups need to be stored a certain number of miles away as part of a disaster recovery (DR) plan. To meet these requirements, customers copy their EBS snapshots to the DR region. Then, the same snapshot management and cleanup has to also be done in the DR region.

All of this snapshot management logic consists of different components. You would first tag your snapshots so you could manage them. Then, determine how many snapshots you currently have for a particular EBS volume and assess that value against a retention rule. If the number of snapshots was greater than your retention value, then you would clean up old snapshots. And finally, you might copy the latest snapshot to your DR region. All these steps are just an example of a simple snapshot management workflow. But how do you automate something like this in AWS? How do you do it without servers?
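As a rough sketch of that workflow (not the reference architecture's actual Lambda code), the tag / prune / copy steps might look like this in boto3; the retention count, regions, and tag values are assumptions:

import boto3

RETENTION = 7          # assumed retention rule
PRIMARY = 'us-west-2'
DR_REGION = 'us-east-2'

ec2 = boto3.client('ec2', region_name=PRIMARY)
ec2_dr = boto3.client('ec2', region_name=DR_REGION)

def manage_snapshots(volume_id):
    # List all snapshots for the volume, oldest first, and tag the newest one
    snaps = ec2.describe_snapshots(
        Filters=[{'Name': 'volume-id', 'Values': [volume_id]}]
    )['Snapshots']
    snaps.sort(key=lambda s: s['StartTime'])
    ec2.create_tags(Resources=[snaps[-1]['SnapshotId']],
                    Tags=[{'Key': 'ManagedBy', 'Value': 'snapshot-mgmt'}])

    # Delete the oldest snapshots beyond the retention rule
    for old in snaps[:-RETENTION]:
        ec2.delete_snapshot(SnapshotId=old['SnapshotId'])

    # Copy the most recent snapshot to the DR region
    ec2_dr.copy_snapshot(SourceRegion=PRIMARY,
                         SourceSnapshotId=snaps[-1]['SnapshotId'],
                         Description='DR copy of ' + snaps[-1]['SnapshotId'])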

One of the most powerful AWS services released in 2016 was Amazon CloudWatch Events. It enables you to build event-driven IT automation, based on events happening within your AWS infrastructure. CloudWatch Events integrates with AWS Lambda to let you execute your custom code when one of those events occurs. However, the actions to take based on those events aren’t always composed of a single Lambda function. Instead, your business logic may consist of multiple steps (like in the case of the example snapshot management flow described earlier). And you may want to run those steps in sequence or in parallel. You may also want to have retry logic or exception handling for each step.

AWS Step Functions serves just this purpose―to help you coordinate your functions and microservices. Step Functions enables you to simplify your effort and pull the error handling, retry logic, and workflow logic out of your Lambda code. Step Functions integrates with Lambda to provide a mechanism for building complex serverless applications. Now, you can kick off a Step Functions state machine based on a CloudWatch event.

In this post, I discuss how you can target Step Functions in a CloudWatch Events rule. This allows you to have event-driven snapshot management based on snapshot completion events firing in CloudWatch Event rules.

As an example of what you could do with Step Functions and CloudWatch Events, we’ve developed a reference architecture that performs management of your EBS snapshots.

Automating EBS Snapshot Management with Step Functions

This architecture assumes that you have already set up CloudWatch Events to create the snapshots on a schedule or that you are using some other means of creating snapshots according to your needs.

This architecture covers the pieces of the workflow that need to happen after a snapshot has been created.

  • It creates a CloudWatch Events rule to invoke a Step Functions state machine execution when an EBS snapshot is created.
  • The state machine then tags the snapshot, cleans up the oldest snapshots if the number of snapshots is greater than the defined number to retain, and copies the snapshot to a DR region.
  • When the DR region snapshot copy is completed, another state machine kicks off in the DR region. The new state machine has a similar flow and uses some of the same Lambda code to clean up the oldest snapshots when the number of snapshots is greater than the defined number to retain.
  • Also, both state machines demonstrate how you can use Step Functions to handle errors within your workflow. Any errors that are caught during execution result in the execution of a Lambda function that writes a message to an SNS topic. Therefore, if any errors occur, you can subscribe to the SNS topic and get notified.

The following is an architecture diagram of the reference architecture:

Creating the Lambda functions and Step Functions state machines

First, pull the code from GitHub and use the AWS CLI to create S3 buckets for the Lambda code in the primary and DR regions. For this example, assume that the primary region is us-west-2 and the DR region is us-east-2. Run the following commands, replacing the italicized text in <> with your own unique bucket names.

git clone https://github.com/awslabs/aws-step-functions-ebs-snapshot-mgmt.git

cd aws-step-functions-ebs-snapshot-mgmt/

aws s3 mb s3://<primary region bucket name> --region us-west-2

aws s3 mb s3://<DR region bucket name> --region us-east-2

Next, use the Serverless Application Model (SAM), which uses AWS CloudFormation to deploy the Lambda functions and Step Functions state machines in the primary and DR regions. Replace the italicized text in <> with the S3 bucket names that you created earlier.

aws cloudformation package --template-file PrimaryRegionTemplate.yaml --s3-bucket <primary region bucket name>  --output-template-file tempPrimary.yaml --region us-west-2

aws cloudformation deploy --template-file tempPrimary.yaml --stack-name ebsSnapshotMgmtPrimary --capabilities CAPABILITY_IAM --region us-west-2

aws cloudformation package --template-file DR_RegionTemplate.yaml --s3-bucket <DR region bucket name> --output-template-file tempDR.yaml  --region us-east-2

aws cloudformation deploy --template-file tempDR.yaml --stack-name ebsSnapshotMgmtDR --capabilities CAPABILITY_IAM --region us-east-2

CloudWatch event rule verification

The CloudFormation templates deploy the following resources:

  • The Lambda functions that are coordinated by Step Functions
  • The Step Functions state machine
  • The SNS topic
  • The CloudWatch Events rules that trigger the state machine execution

So, all of the CloudWatch event rules have been created for you by performing the preceding commands. The next section demonstrates how you could create the CloudWatch event rule manually. To jump straight to testing the workflow, see the “Testing in your Account” section. Otherwise, you begin by setting up the CloudWatch event rule in the primary region for the createSnapshot event and also the CloudWatch event rule in the DR region for the copySnapshot command.

First, open the CloudWatch console in the primary region.

Choose Create Rule and create a rule for the createSnapshot command, with your newly created Step Function state machine as the target.

For Event Source, choose Event Pattern and specify the following values:

  • Service Name: EC2
  • Event Type: EBS Snapshot Notification
  • Specific Event: createSnapshot

For Target, choose Step Functions state machine, then choose the state machine created by the CloudFormation commands. Choose Create a new role for this specific resource. Your completed rule should look like the following:

Choose Configure Details and give the rule a name and description.

Choose Create Rule. You now have a CloudWatch Events rule that triggers a Step Functions state machine execution when the EBS snapshot creation is complete.
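The same rule can also be created programmatically. Here is a hedged boto3 sketch in which the event pattern mirrors the console choices above; the state machine and IAM role ARNs are placeholders:

import json
import boto3

events = boto3.client('events', region_name='us-west-2')

events.put_rule(
    Name='ebs-create-snapshot-rule',
    Description='Start the snapshot management state machine when a snapshot completes',
    EventPattern=json.dumps({
        'source': ['aws.ec2'],
        'detail-type': ['EBS Snapshot Notification'],
        'detail': {'event': ['createSnapshot']}
    })
)

events.put_targets(
    Rule='ebs-create-snapshot-rule',
    Targets=[{
        'Id': 'snapshot-mgmt-state-machine',
        'Arn': 'arn:aws:states:us-west-2:123456789012:stateMachine:SnapshotMgmt',  # placeholder
        'RoleArn': 'arn:aws:iam::123456789012:role/cwe-step-functions-role'        # placeholder
    }]
)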

Now, set up the CloudWatch Events rule in the DR region as well. This looks almost the same, but is based on the copySnapshot event instead of createSnapshot.

In the upper right corner in the console, switch to your DR region. Choose CloudWatch, Create Rule.

For Event Source, choose Event Pattern and specify the following values:

  • Service Name: EC2
  • Event Type: EBS Snapshot Notification
  • Specific Event: copySnapshot

For Target, choose Step Functions state machine, then select the state machine created by the CloudFormation commands. Choose Create a new role for this specific resource. Your completed rule should look like the following:

As in the primary region, choose Configure Details and then give this rule a name and description. Complete the creation of the rule.

Testing in your account

To test this setup, open the EC2 console and choose Volumes. Select a volume to snapshot. Choose Actions, Create Snapshot, and then create a snapshot.
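If you would rather kick off the test from an SDK instead of the console, a boto3 equivalent (with a placeholder volume ID) is:

import boto3

ec2 = boto3.client('ec2', region_name='us-west-2')

snapshot = ec2.create_snapshot(
    VolumeId='vol-0123456789abcdef0',       # placeholder volume ID
    Description='Snapshot management test'
)
print(snapshot['SnapshotId'])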

This results in a new execution of your state machine in the primary and DR regions. You can view these executions by going to the Step Functions console and selecting your state machine.

From there, you can see the execution of the state machine.

Primary region state machine:

DR region state machine:

I’ve also provided CloudFormation templates that perform all the earlier setup without using git clone and running the CloudFormation commands. Choose the Launch Stack buttons below to launch the primary and DR region stacks in Dublin and Ohio, respectively. From there, you can pick up at the Testing in Your Account section above to finish the example. All of the code for this example architecture is located in the aws-step-functions-ebs-snapshot-mgmt AWSLabs repo.

Launch EBS Snapshot Management into Ireland with CloudFormation
Primary Region eu-west-1 (Ireland)

Launch EBS Snapshot Management into Ohio with CloudFormation
DR Region us-east-2 (Ohio)

Summary

This reference architecture is just an example of how you can use Step Functions and CloudWatch Events to build event-driven IT automation. The possibilities are endless:

  • Use this pattern to perform other common cleanup type jobs such as managing Amazon RDS snapshots, old versions of Lambda functions, or old Amazon ECR images—all triggered by scheduled events.
  • Use Trusted Advisor events to identify unused EC2 instances or EBS volumes, then coordinate actions on them, such as alerting owners, stopping, or snapshotting.

Happy coding and please let me know what useful state machines you build!

New – Per-Second Billing for EC2 Instances and EBS Volumes

Post Syndicated from Jeff Barr original https://aws.amazon.com/blogs/aws/new-per-second-billing-for-ec2-instances-and-ebs-volumes/

Back in the old days, you needed to buy or lease a server if you needed access to compute power. When we launched EC2 back in 2006, the ability to use an instance for an hour, and to pay only for that hour, was big news. The pay-as-you-go model inspired our customers to think about new ways to develop, test, and run applications of all types.

Today, services like AWS Lambda prove that we can do a lot of useful work in a short time. Many of our customers are dreaming up applications for EC2 that can make good use of a large number of instances for shorter amounts of time, sometimes just a few minutes.

Per-Second Billing for EC2 and EBS
Effective October 2nd, usage of Linux instances that are launched in On-Demand, Reserved, and Spot form will be billed in one-second increments. Similarly, provisioned storage for EBS volumes will be billed in one-second increments.

Per-second billing also applies to Amazon EMR and AWS Batch:

Amazon EMR – Our customers add capacity to their EMR clusters in order to get their results more quickly. With per-second billing for the EC2 instances in the clusters, adding nodes is more cost-effective than ever.

AWS Batch – Many of the batch jobs that our customers run complete in less than an hour. AWS Batch already launches and terminates Spot Instances; with per-second billing batch processing will become even more economical.

Some of our more sophisticated customers have built systems to get the most value from EC2 by strategically choosing the most advantageous target instances when managing their gaming, ad tech, or 3D rendering fleets. Per-second billing obviates the need for this extra layer of instance management, and brings the cost savings to all customers and all workloads.

While this will result in a price reduction for many workloads (and you know we love price reductions), I don’t think that’s the most important aspect of this change. I believe that this change will inspire you to innovate and to think about your compute-bound problems in new ways. How can you use it to improve your support for continuous integration? Can it change the way that you provision transient environments for your dev and test workloads? What about your analytics, batch processing, and 3D rendering?

One of the many advantages of cloud computing is the elastic nature of provisioning or deprovisioning resources as you need them. By billing usage down to the second, we enable customers to level up their elasticity, save money, and take advantage of continuing advances in computing.

Things to Know
This change takes effect on October 2 in all AWS Regions, for all Linux instances that are newly launched or already running. Per-second billing is not currently applicable to instances running Microsoft Windows or Linux distributions that have a separate hourly charge. There is a 1-minute minimum charge per instance.
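To make the math concrete, here is a small illustrative calculation that applies only the 1-minute minimum described above; the hourly rate is made up:

def linux_instance_cost(hourly_rate, runtime_seconds):
    """Per-second billing with a 60-second minimum charge per instance."""
    billed_seconds = max(60, runtime_seconds)
    return hourly_rate * billed_seconds / 3600.0

# A hypothetical $0.10/hour instance that runs for 5 minutes costs less than a cent
print(round(linux_instance_cost(0.10, 300), 4))   # 0.0083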

List prices and Spot Market prices are still listed on a per-hour basis, but bills are calculated down to the second, as is Reserved Instance usage (you can launch, use, and terminate multiple instances within an hour and get the Reserved Instance Benefit for all of the instances). Also, bills will show times in decimal form, like this:

The Dedicated Per Region Fee, EBS Snapshots, and products in AWS Marketplace are still billed on an hourly basis.

Jeff;

 

AWS IAM Policy Summaries Now Help You Identify Errors and Correct Permissions in Your IAM Policies

Post Syndicated from Joy Chatterjee original https://aws.amazon.com/blogs/security/iam-policy-summaries-now-help-you-identify-errors-and-correct-permissions-in-your-iam-policies/

In March, we made it easier to view and understand the permissions in your AWS Identity and Access Management (IAM) policies by using IAM policy summaries. Today, we updated policy summaries to help you identify and correct errors in your IAM policies. When you set permissions using IAM policies, for each action you specify, you must match that action to supported resources or conditions. Now, you will see a warning if these policy elements (Actions, Resources, and Conditions) defined in your IAM policy do not match.

When working with policies, you may find that although the policy has valid JSON syntax, it does not grant or deny the desired permissions because the Action element does not have an applicable Resource element or Condition element defined in the policy. For example, you may want to create a policy that allows users to view a specific Amazon EC2 instance. To do this, you create a policy that specifies ec2:DescribeInstances for the Action element and the Amazon Resource Name (ARN) of the instance for the Resource element. When testing this policy, you find AWS denies this access because ec2:DescribeInstances does not support resource-level permissions and requires access to list all instances. Therefore, to grant access to this Action element, you need to specify a wildcard (*) in the Resource element of your policy for this Action element in order for the policy to function correctly.

To help you identify and correct permissions, you will now see a warning in a policy summary if the policy has either of the following:

  • An action that does not support the resource specified in a policy.
  • An action that does not support the condition specified in a policy.

In this blog post, I walk through two examples of how you can use policy summaries to help identify and correct these types of errors in your IAM policies.

How to use IAM policy summaries to debug your policies

Example 1: An action does not support the resource specified in a policy

Let’s say a human resources (HR) representative, Casey, needs access to the personnel files stored in HR’s Amazon S3 bucket. To do this, I create the following policy to grant all actions that begin with s3:List. In addition, I grant access to s3:GetObject in the Action element of the policy. To ensure that Casey has access only to a specific bucket and not others, I specify the bucket ARN in the Resource element of the policy.

Note: This policy does not grant the desired permissions.

This policy does not work. Do not copy.
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "ThisPolicyDoesNotGrantAllListandGetActions",
            "Effect": "Allow",
            "Action": ["s3:List*",
                       "s3:GetObject"],
            "Resource": ["arn:aws:s3:::HumanResources"]
        }
    ]
}

After I create the policy, HRBucketPermissions, I select this policy from the Policies page to view the policy summary. From here, I check to see if there are any warnings or typos in the policy. I see a warning at the top of the policy detail page because the policy does not grant some permissions specified in the policy, which is caused by a mismatch among the actions, resources, or conditions.

Screenshot showing the warning at the top of the policy

To view more details about the warning, I choose Show remaining so that I can understand why the permissions do not appear in the policy summary. As shown in the following screenshot, the policy summary shows no access to the services that are not granted by the IAM policy, which is expected. However, next to S3, I see a warning that one or more S3 actions do not have an applicable resource.

Screenshot showing that one or more S3 actions do not have an applicable resource

To understand why the specific actions do not have a supported resource, I choose S3 from the list of services and choose Show remaining. I type List in the filter to understand why some of the list actions are not granted by the policy. As shown in the following screenshot, I see these warnings:

  • This action does not support resource-level permissions. This means the action does not support resource-level permissions and requires a wildcard (*) in the Resource element of the policy.
  • This action does not have an applicable resource. This means the action supports resource-level permissions, but not the resource type defined in the policy. In this example, I specified an S3 bucket for an action that supports only an S3 object resource type.

From these warnings, I see that s3:ListAllMyBuckets, s3:ListMultipartUploadParts, s3:ListObjects, and s3:GetObject do not support an S3 bucket resource type, which results in Casey not having access to the S3 bucket. To correct the policy, I choose Edit policy and update the policy with three statements based on the resource that the S3 actions support. Because Casey needs access to view and read all of the objects in the HumanResources bucket, I add a wildcard (*) for the S3 object path in the Resource ARN.

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "TheseActionsSupportBucketResourceType",
            "Effect": "Allow",
            "Action": ["s3:ListBucket",
                       "s3:ListBucketByTags",
                       "s3:ListBucketMultipartUploads",
                       "s3:ListBucketVersions"],
            "Resource": ["arn:aws:s3:::HumanResources"]
        },{
            "Sid": "TheseActionsRequireAllResources",
            "Effect": "Allow",
            "Action": ["s3:ListAllMyBuckets",
                       "s3:ListMultipartUploadParts",
                       "s3:ListObjects"],
            "Resource": [ "*"]
        },{
            "Sid": "TheseActionsRequireSupportsObjectResourceType",
            "Effect": "Allow",
            "Action": ["s3:GetObject"],
            "Resource": ["arn:aws:s3:::HumanResources/*"]
        }
    ]
}

After I make these changes, I see the updated policy summary and see that warnings are no longer displayed.

Screenshot of the updated policy summary that no longer shows warnings

In the previous example, I showed how to identify and correct permissions errors that include actions that do not support a specified resource. In the next example, I show how to use policy summaries to identify and correct a policy that includes actions that do not support a specified condition.

Example 2: An action does not support the condition specified in a policy

For this example, let’s assume Bob is a project manager who requires view and read access to all the code builds for his team. To grant him this access, I create the following JSON policy that specifies all list and read actions to AWS CodeBuild and defines a condition to limit access to resources in the us-west-2 Region in which Bob’s team develops.

This policy does not work. Do not copy. 
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "ListReadAccesstoCodeServices",
            "Effect": "Allow",
            "Action": [
                "codebuild:List*",
                "codebuild:BatchGet*"
            ],
            "Resource": ["*"], 
             "Condition": {
                "StringEquals": {
                    "ec2:Region": "us-west-2"
                }
            }
        }
    ]	
}

After I create the policy, PMCodeBuildAccess, I select this policy from the Policies page to view the policy summary in the IAM console. From here, I check to see if the policy has any warnings or typos. I see an error at the top of the policy detail page because the policy does not grant any permissions.

Screenshot with an error showing the policy does not grant any permissions

To view more details about the error, I choose Show remaining to understand why no permissions result from the policy. I see this warning: One or more conditions do not have an applicable action. This means that the condition is not supported by any of the actions defined in the policy.

From the warning message (see preceding screenshot), I realize that ec2:Region is not a supported condition for any actions in CodeBuild. To correct the policy, I separate the list actions that do not support resource-level permissions into a separate Statement element and specify * as the resource. For the remaining CodeBuild actions that support resource-level permissions, I use the ARN to specify the us-west-2 Region in the project resource type.

CORRECT POLICY 
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "TheseActionsSupportAllResources",
            "Effect": "Allow",
            "Action": [
                "codebuild:ListBuilds",
                "codebuild:ListProjects",
                "codebuild:ListRepositories",
                "codebuild:ListCuratedEnvironmentImages",
                "codebuild:ListConnectedOAuthAccounts"
            ],
            "Resource": ["*"] 
        }, {
            "Sid": "TheseActionsSupportAResource",
            "Effect": "Allow",
            "Action": [
                "codebuild:ListBuildsForProject",
                "codebuild:BatchGet*"
            ],
            "Resource": ["arn:aws:codebuild:us-west-2:123456789012:project/*"] 
        }

    ]	
}

After I make the changes, I view the updated policy summary and see that no warnings are displayed.

Screenshot showing the updated policy summary with no warnings

When I choose CodeBuild from the list of services, I also see that for the actions that support resource-level permissions, the access is limited to the us-west-2 Region.

Screenshot showing that for the Actions that support resource-level permissions, the access is limited to the us-west-2 Region.

Conclusion

Policy summaries make it easier to view and understand the permissions and resources in your IAM policies by displaying the permissions granted by the policies. As I’ve demonstrated in this post, you can also use policy summaries to help you identify and correct your IAM policies. To understand the types of warnings that policy summaries support, you can visit Troubleshoot IAM Policies. To view policy summaries in your AWS account, sign in to the IAM console and navigate to any policy on the Policies page of the IAM console or the Permissions tab on a user’s page.

If you have comments about this post, submit them in the “Comments” section below. If you have questions about or suggestions for this solution, start a new thread on the IAM forum or contact AWS Support.

– Joy

Prime Day 2017 – Powered by AWS

Post Syndicated from Jeff Barr original https://aws.amazon.com/blogs/aws/prime-day-2017-powered-by-aws/

The third annual Prime Day set another round of records for global orders, topping Black Friday and Cyber Monday, making it the biggest day in Amazon retail history. Over the course of the 30-hour event, tens of millions of Prime members purchased things like Echo Dots, Fire tablets, programmable pressure cookers, espresso machines, rechargeable batteries, and much more! July 11th also set a record for the number of new Prime memberships, as people signed up in order to take advantage of hundreds of thousands of deals. Amazon customers shopped online and made heavy use of the Amazon App, with mobile orders more than doubling from last Prime Day.

Powered by AWS
Last year I told you about How AWS Powered Amazon’s Biggest Day Ever, and shared what the team had learned with regard to preparation, automation, monitoring, and thinking big. All of those lessons still apply and you can read that post to learn more. Preparation for this year’s Prime Day (which started just days after Prime Day 2016 wrapped up) started by collecting and sharing best practices and identifying areas for improvement, proceeding to implementation and stress testing as the big day approached. Two of the best practices involve auditing and GameDay:

Auditing – This is a formal way for us to track preparations, identify risks, and track progress against our objectives. Each team must respond to a series of detailed technical and operational questions that are designed to help them determine their readiness. On the technical side, questions could revolve around time to recovery after a database failure, including the all-important check of the TTL (time to live) for the CNAME. Operational questions address schedules for on-call personnel, points of contact, and ownership of services & instances.

GameDay – This practice (which I believe originated with former Amazonian Jesse Robbins), is intended to validate all of the capacity planning & preparation and to verify that all of the necessary operational practices are in place and work as expected. It introduces simulated failures and helps to train the team to identify and quickly resolve issues, building muscle memory in the process. It also tests failover and recovery capabilities, and can expose latent defects that are lurking under the covers. GameDays help teams to understand scaling drivers (page views, orders, and so forth) and gives them an opportunity to test their scaling practices. To learn more, read Resilience Engineering: Learning to Embrace Failure or watch the video: GameDay: Creating Resiliency Through Destruction.

Prime Day 2017 Metrics
So, how did we do this year?

The AWS teams checked their dashboards and log files, and were happy to share their metrics with me. Here are a few of the most interesting ones:

Block Storage – Use of Amazon Elastic Block Store (EBS) grew by 40% year-over-year, with aggregate data transfer jumping to 52 petabytes (a 50% increase) for the day and total I/O requests rising to 835 million (a 30% increase). The team told me that they loved the elasticity of EBS, and that they were able to ramp down on capacity after Prime Day concluded instead of being stuck with it.

NoSQL Database – Amazon DynamoDB requests from Alexa, the Amazon.com sites, and the Amazon fulfillment centers totaled 3.34 trillion, peaking at 12.9 million per second. According to the team, the extreme scale, consistent performance, and high availability of DynamoDB let them meet the needs of Prime Day without breaking a sweat.

Stack Creation – Nearly 31,000 AWS CloudFormation stacks were created for Prime Day in order to bring additional AWS resources on line.

API Usage – AWS CloudTrail processed over 50 billion events and tracked more than 419 billion calls to various AWS APIs, all in support of Prime Day.

Configuration Tracking – AWS Config generated over 14 million Configuration items for AWS resources.

You Can Do It
Running an event that is as large, complex, and mission-critical as Prime Day takes a lot of planning. If you have an event of this type in mind, please take a look at our new Infrastructure Event Readiness white paper. Inside, you will learn how to design and provision your applications to smoothly handle planned scaling events such as product launches or seasonal traffic spikes, with sections on automation, resiliency, cost optimization, event management, and more.

Jeff;

 

Now Available – EC2 Instances with 4 TB of Memory

Post Syndicated from Jeff Barr original https://aws.amazon.com/blogs/aws/now-available-ec2-instances-with-4-tb-of-memory/

Earlier this year I told you about our plan to launch EC2 instances with up to 16 TB of memory. Today I am happy to announce that the new x1e.32xlarge instances with 4 TB of DDR4 memory are available in four AWS Regions. As I wrote in my earlier post, these instances are designed to run SAP HANA and other memory intensive, in-memory applications. Many of our customers are already running production SAP applications on the existing x1.32xlarge instances. With today’s launch, these customers can now store and process far larger data sets, making them a great fit for larger production deployments.

Like the x1.32xlarge, the x1e.32xlarge is powered by quad socket Intel Xeon E7 8880 v3 Haswell processors running at 2.3GHz (128 vCPUs), with large L3 caches, plenty of memory bandwidth, and support for C-state and P-state management.

On the network side, the instances offer up to 25 Gbps of network bandwidth when launched within an EC2 placement group, powered by the Elastic Network Adapter (ENA), with support for up to 8 Elastic Network Interfaces (ENIs) per instance. The instances are EBS-optimized by default, with an additional 14 Gbps of dedicated bandwidth to your EBS volumes, and support for up to 80,000 IOPS per instance. Each instance also includes a pair of 1,920 GB SSD volumes.

A Few Notes
Here are a couple of things to keep in mind regarding the x1e.32xlarge:

SAP Certification – The x1e.32xlarge instances are our largest cloud-native instances certified and supported by SAP for production HANA deployments of SAP Business Suite on HANA (SoH), SAP Business Warehouse on HANA (BWoH), and the next-generation SAP S/4HANA ERP and SAP BW/4HANA data warehouse solution. If you are already running SAP HANA workloads on smaller X1 instances, scaling up will be quick and easy. The SAP HANA on the AWS Cloud Quick Start Reference Deployment has been updated and will help you to set up a deployment that follows SAP and AWS standards for high performance and reliability. The SAP HANA Hardware Directory and the SAP HANA Sizing Guidelines are also relevant.

Reserved Instances – The regional size flexibility for Reserved Instances does not apply across x1 and x1e.

Now Available
The x1e.32xlarge instances can be launched in On-Demand and Reserved Instance form via the AWS Management Console, AWS Command Line Interface (CLI), AWS SDKs, and AWS Marketplace in the US East (Northern Virginia), US West (Oregon), EU (Ireland), and Asia Pacific (Tokyo) Regions.
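For example, launching one of these instances into a placement group with the SDK might look like the following sketch; the AMI ID, key pair, and placement group name are placeholders:

import boto3

ec2 = boto3.client('ec2', region_name='us-east-1')

response = ec2.run_instances(
    ImageId='ami-0123456789abcdef0',               # placeholder AMI
    InstanceType='x1e.32xlarge',
    MinCount=1,
    MaxCount=1,
    KeyName='my-key-pair',                         # placeholder key pair
    Placement={'GroupName': 'my-placement-group'}  # placement group needed for the full 25 Gbps
)
print(response['Instances'][0]['InstanceId'])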

I would also like to make you aware of a couple of other upgrades to the X1 instances:

EBS – As part of today’s launch, existing X1 instances also support up to 14 Gbps of dedicated bandwidth to EBS, along with 80,000 IOPS per instance.

Network – Earlier this week, we announced that existing x1.32xlarge instances also support up to 25 Gbps of network bandwidth within placement groups.

Jeff;

Manage Kubernetes Clusters on AWS Using CoreOS Tectonic

Post Syndicated from Arun Gupta original https://aws.amazon.com/blogs/compute/kubernetes-clusters-aws-coreos-tectonic/

There are multiple ways to run a Kubernetes cluster on Amazon Web Services (AWS). The first post in this series explained how to manage a Kubernetes cluster on AWS using kops. This second post explains how to manage a Kubernetes cluster on AWS using CoreOS Tectonic.

Tectonic overview

Tectonic delivers the most current upstream version of Kubernetes with additional features. It is a commercial offering from CoreOS and adds the following features over the upstream:

  • Installer
    Comes with a graphical installer that installs a highly available Kubernetes cluster. Alternatively, the cluster can be installed using AWS CloudFormation templates or Terraform scripts.
  • Operators
    An operator is an application-specific controller that extends the Kubernetes API to create, configure, and manage instances of complex stateful applications on behalf of a Kubernetes user. This release includes an etcd operator for rolling upgrades and a Prometheus operator for monitoring capabilities.
  • Console
    A web console provides a full view of applications running in the cluster. It also allows you to deploy applications to the cluster and start the rolling upgrade of the cluster.
  • Monitoring
    Node CPU and memory metrics are powered by the Prometheus operator. The graphs are available in the console. A large set of preconfigured Prometheus alerts are also available.
  • Security
    Tectonic ensures that the cluster is always up to date with the most recent patches and fixes. Tectonic clusters also enable role-based access control (RBAC). Different roles can be mapped to an LDAP service.
  • Support
    CoreOS provides commercial support for clusters created using Tectonic.

Tectonic can be installed on AWS using a GUI installer or Terraform scripts. The installer prompts you for the information needed to boot the Kubernetes cluster, such as AWS access and secret key, number of master and worker nodes, and instance size for the master and worker nodes. The cluster can be created after all the options are specified. Alternatively, Terraform assets can be downloaded and the cluster can be created later. This post shows using the installer.

CoreOS License and Pull Secret

Even though Tectonic is a commercial offering, a cluster for up to 10 nodes can be created by creating a free account at Get Tectonic for Kubernetes. After signup, CoreOS License and Pull Secret files are provided on your CoreOS account page. Download these files, as they are needed by the installer to boot the cluster.

IAM user permission

The IAM user to create the Kubernetes cluster must have access to the following services and features:

  • Amazon Route 53
  • Amazon EC2
  • Elastic Load Balancing
  • Amazon S3
  • Amazon VPC
  • Security groups

Use the aws-policy policy to grant the required permissions for the IAM user.

DNS configuration

A subdomain is required to create the cluster, and it must be registered as a public Route 53 hosted zone. The zone is used to host and expose the console web application. It is also used as the static namespace for the Kubernetes API server. This allows kubectl to be able to talk directly with the master.

The domain may be registered using Route 53. Alternatively, a domain may be registered at a third-party registrar. This post uses a kubernetes-aws.io domain registered at a third-party registrar and a tectonic subdomain within it.

Generate a Route 53 hosted zone using the AWS CLI. Download jq to run this command:

ID=$(uuidgen) && \
aws route53 create-hosted-zone \
--name tectonic.kubernetes-aws.io \
--caller-reference $ID \
| jq .DelegationSet.NameServers

The command shows an output such as the following:

[
  "ns-1924.awsdns-48.co.uk",
  "ns-501.awsdns-62.com",
  "ns-1259.awsdns-29.org",
  "ns-749.awsdns-29.net"
]

Create NS records for the domain with your registrar. Make sure that the NS records can be resolved using a utility like dig or a web-based dig interface. A sample output would look like the following:

The bottom of the screenshot shows NS records configured for the subdomain.

Download and run the Tectonic installer

Download the Tectonic installer (version 1.7.1) and extract it. The latest installer can always be found at coreos.com/tectonic. Start the installer:

./tectonic/tectonic-installer/$PLATFORM/installer

Replace $PLATFORM with either darwin or linux. The installer opens your default browser and prompts you to select the cloud provider. Choose Amazon Web Services as the platform. Choose Next Step.

Specify the Access Key ID and Secret Access Key for the IAM role that you created earlier. This allows the installer to create resources required for the Kubernetes cluster. This also gives the installer full access to your AWS account. Alternatively, to protect the integrity of your main AWS credentials, use a temporary session token to generate temporary credentials.

You also need to choose a region in which to install the cluster. For the purpose of this post, I chose a region close to where I live, Northern California. Choose Next Step.

Give your cluster a name. This name is part of the static namespace for the master and the address of the console.

To enable in-place update to the Kubernetes cluster, select the checkbox next to Automated Updates. It also enables update to the etcd and Prometheus operators. This feature may become a default in future releases.

Choose Upload “tectonic-license.txt” and upload the previously downloaded license file.

Choose Upload “config.json” and upload the previously downloaded pull secret file. Choose Next Step.

Let the installer generate a CA certificate and key. In this case, the browser may not recognize this certificate, which I discuss later in the post. Alternatively, you can provide a CA certificate and a key in PEM format issued by an authorized certificate authority. Choose Next Step.

Use the SSH key for the region specified earlier. You also have an option to generate a new key. This allows you to later connect using SSH into the Amazon EC2 instances provisioned by the cluster. Here is the command that can be used to log in:

ssh -i <key> core@<ec2-instance-ip>

Choose Next Step.

Define the number and instance type of master and worker nodes. In this case, create a 6-node cluster. Make sure that the worker nodes have enough processing power and memory to run the containers.

An etcd cluster is used as persistent storage for all of Kubernetes API objects. This cluster is required for the Kubernetes cluster to operate. There are three ways to use the etcd cluster as part of the Tectonic installer:

  • (Default) Provision the cluster using EC2 instances. Additional EC2 instances are used in this case.
  • Use an alpha support for cluster provisioning using the etcd operator. The etcd operator is used for automated operations of the etcd master nodes for the cluster itself, as well as for etcd instances that are created for application usage. The etcd cluster is provisioned within the Tectonic installer.
  • Bring your own pre-provisioned etcd cluster.

Use the first option in this case.

For more information about choosing the appropriate instance type, see the etcd hardware recommendation. Choose Next Step.

Specify the networking options. The installer can create a new public VPC or use a pre-existing public or private VPC. Make sure that the VPC requirements are met for an existing VPC.

Give a DNS name for the cluster. Choose the domain for which the Route 53 hosted zone was configured earlier, such as tectonic.kubernetes-aws.io. Multiple clusters may be created under a single domain. The cluster name and the DNS name would typically match each other.

To select the CIDR range, choose Show Advanced Settings. You can also choose the Availability Zones for the master and worker nodes. By default, the master and worker nodes are spread across multiple Availability Zones in the chosen region. This makes the cluster highly available.

Leave the other values as default. Choose Next Step.

Specify an email address and password to be used as credentials to log in to the console. Choose Next Step.

At any point during the installation, you can choose Save progress. This allows you to save configurations specified in the installer. This configuration file can then be used to restore progress in the installer at a later point.

To start the cluster installation, choose Submit. Alternatively, you can download the Terraform assets by choosing Manually boot. This allows you to boot the cluster later.
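If you do boot the cluster manually later, the exact procedure is described in the Tectonic documentation. As a rough sketch, and assuming the manually downloaded assets follow the same layout as the cluster assets used for deletion later in this post, the apply step mirrors the destroy command shown in that section:

# Hypothetical: run from the extracted assets directory
TERRAFORM_CONFIG=$(pwd)/.terraformrc terraform apply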

The logs from the Terraform scripts are shown in the installer. When the installation is complete, the console shows that the Terraform scripts were successfully applied, the domain name was resolved successfully, and that the console has started. The domain resolves successfully only if the DNS configuration done earlier worked, and it is the address where the console is accessible.

Choose Download assets to download assets related to your cluster. It contains your generated CA, kubectl configuration file, and the Terraform state. This download is an important step as it allows you to delete the cluster later.

Choose Next Step for the final installation screen. It allows you to access the Tectonic console, gives you instructions about how to configure kubectl to manage this cluster, and finally explains how to deploy an application using kubectl.

Choose Go to my Tectonic Console. In our case, it is also accessible at http://cluster.tectonic.kubernetes-aws.io/.

As I mentioned earlier, the browser does not recognize the self-generated CA certificate. Choose Advanced and connect to the console. Enter the login credentials specified earlier in the installer and choose Login.

The Kubernetes upstream and console version are shown under Software Details. Cluster health shows All systems go, which means that the API server and the backend API can be reached.

To view different Kubernetes resources in the cluster, choose the resource in the left navigation bar. For example, all deployments can be seen by choosing Deployments.

By default, resources in all namespaces are shown. Other namespaces may be chosen from a menu at the top of the screen. Administration tasks such as managing namespaces, listing nodes, and configuring RBAC can be performed as well.

Download and run Kubectl

Kubectl is required to manage the Kubernetes cluster. The latest version of kubectl can be downloaded using the following command:

curl -LO https://storage.googleapis.com/kubernetes-release/release/$(curl -s https://storage.googleapis.com/kubernetes-release/release/stable.txt)/bin/darwin/amd64/kubectl

It can also be conveniently installed using the Homebrew package manager. To find and access a cluster, Kubectl needs a kubeconfig file. By default, this configuration file is at ~/.kube/config. This file is created when a Kubernetes cluster is created from your machine. However, in this case, download this file from the console.

In the console, choose admin, My Account, Download Configuration and follow the steps to download the kubectl configuration file. Move this file to ~/.kube/config. If kubectl has already been used on your machine before, then this file already exists. Make sure to take a backup of that file first.
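The shell commands for these steps might look like the following. The downloaded configuration file name (kubectl-config) is an assumption; use whatever name your browser saved the file under:

# Make the downloaded kubectl binary executable and put it on your PATH
chmod +x kubectl
sudo mv kubectl /usr/local/bin/kubectl

# Back up any existing kubeconfig, then move the downloaded configuration into place
mkdir -p ~/.kube
[ -f ~/.kube/config ] && cp ~/.kube/config ~/.kube/config.backup
mv ~/Downloads/kubectl-config ~/.kube/config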

Now you can run the following command to view the list of deployments:

~ $ kubectl get deployments --all-namespaces
NAMESPACE         NAME                                    DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
kube-system       etcd-operator                           1         1         1            1           43m
kube-system       heapster                                1         1         1            1           40m
kube-system       kube-controller-manager                 3         3         3            3           43m
kube-system       kube-dns                                1         1         1            1           43m
kube-system       kube-scheduler                          3         3         3            3           43m
tectonic-system   container-linux-update-operator         1         1         1            1           40m
tectonic-system   default-http-backend                    1         1         1            1           40m
tectonic-system   kube-state-metrics                      1         1         1            1           40m
tectonic-system   kube-version-operator                   1         1         1            1           40m
tectonic-system   prometheus-operator                     1         1         1            1           40m
tectonic-system   tectonic-channel-operator               1         1         1            1           40m
tectonic-system   tectonic-console                        2         2         2            2           40m
tectonic-system   tectonic-identity                       2         2         2            2           40m
tectonic-system   tectonic-ingress-controller             1         1         1            1           40m
tectonic-system   tectonic-monitoring-auth-alertmanager   1         1         1            1           40m
tectonic-system   tectonic-monitoring-auth-prometheus     1         1         1            1           40m
tectonic-system   tectonic-prometheus-operator            1         1         1            1           40m
tectonic-system   tectonic-stats-emitter                  1         1         1            1           40m

This output is similar to the one shown in the console earlier. Now, this kubectl can be used to manage your resources.
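For example, a quick way to exercise the cluster is to create a throwaway deployment. This uses the kubectl 1.7-era syntax, where kubectl run creates a Deployment; the name and image are arbitrary:

# Create a two-replica nginx deployment, inspect it, then clean up
kubectl run nginx --image=nginx --replicas=2
kubectl get deployment nginx
kubectl delete deployment nginx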

Upgrade the Kubernetes cluster

Tectonic allows the in-place upgrade of the cluster. This is an experimental feature as of this release. The clusters can be updated either automatically, or with manual approval.

To perform the update, choose Administration, Cluster Settings. If an earlier Tectonic installer, version 1.6.2 in this case, is used to install the cluster, then this screen would look like the following:

Choose Check for Updates. If any updates are available, choose Start Upgrade. After the upgrade is completed, the screen is refreshed.

This is an experimental feature in this release and so should only be used on clusters that can be easily replaced. This feature may become fully supported in a future release. For more information about the upgrade process, see Upgrading Tectonic & Kubernetes.

Delete the Kubernetes cluster

Typically, the Kubernetes cluster is a long-running cluster to serve your applications. After its purpose is served, you may delete it. It is important to delete the cluster as this ensures that all resources created by the cluster are appropriately cleaned up.

The easiest way to delete the cluster is using the assets downloaded in the last step of the installer. Extract the downloaded zip file. This creates a directory like <cluster-name>_TIMESTAMP. In that directory, run the following command to delete the cluster:

TERRAFORM_CONFIG=$(pwd)/.terraformrc terraform destroy --force

This destroys the cluster and all associated resources.

If you forgot to download the assets, there is a copy of them in the directory tectonic/tectonic-installer/darwin/clusters. In this directory, another directory with the name <cluster-name>_TIMESTAMP contains your assets.

Conclusion

This post explained how to manage Kubernetes clusters using the CoreOS Tectonic graphical installer.  For more details, see Graphical Installer with AWS. If the installation does not succeed, see the helpful Troubleshooting tips. After the cluster is created, see the Tectonic tutorials to learn how to deploy, scale, version, and delete an application.

Future posts in this series will explain other ways of creating and running a Kubernetes cluster on AWS.

Arun

Digitising film reels with Pi Film Capture

Post Syndicated from Janina Ander original https://www.raspberrypi.org/blog/digitising-reels-pi-film-capture/

Joe Herman’s Pi Film Capture project combines old projectors and a stepper motor with a Raspberry Pi and a Raspberry Pi Camera Module, to transform his grandfather’s 8- and 16-mm home movies into glorious digital films.

We chatted to him about his Pi Film Capture build at Maker Faire New York 2016:

Film to Digital Conversion at Maker Faire New York 2016

Uploaded by Raspberry Pi on 2017-08-25.

What inspired Pi Film Capture?

Joe’s grandfather, Leo Willmott, loved recording home movies of his family of eight children and their grandchildren. He passed away when Joe was five, but in 2013 Joe found a way to connect with his legacy: while moving house, a family member uncovered a box of more than a hundred of Leo’s film reels. These covered decades of family history, and some dated back as far as 1939.

Super 8 film reels

Kodachrome film reels of the type Leo used

This provided an unexpected opportunity for Leo’s family to restore some of their shared history. Joe immediately made plans to digitise the material, knowing that the members of his extensive family tree would provide an eager audience.

Building Pi Film Capture

After a failed attempt with a DSLR camera, Joe realised he couldn’t simply re-film the movies — instead, he would have to capture each frame individually. He combined a Raspberry Pi with an old Super 8 projector, and set about rigging up something to do just that.

He went through numerous stages of prototyping, and his final hardware setup works very well. A NEMA 17 stepper motor  moves the film reel forward in the projector. A magnetic reed switch triggers the Camera Module each time the reel moves on to the next frame. Joe hacked the Camera Module so that it has a different focal distance, and he also added a magnifying lens. Moreover, he realised it would be useful to have a diffuser to ‘smooth’ some of the faults in the aged film reel material. To do this, he mounted “a bit of translucent white plastic from an old ceiling fixture” parallel with the film.

Pi Film Capture device by Joe Herman

Joe’s 16-mm projector, with embedded Raspberry Pi hardware

Software solutions

In addition to capturing every single frame (sometimes with multiple exposure settings), Joe found that he needed intensive post-processing to restore some of the films. He settled on sending the images from the Pi to a more powerful Linux machine. To enable processing of the raw data, he had to write Python scripts implementing several open-source software packages. For example, to deal with the varying quality of the film reels more easily, Joe implemented a GUI (written with the help of PyQt), which he uses to change the capture parameters. This was a demanding job, as he was relatively new to using these tools.

Top half of GUI for Pi Film Capture Joe Herman

The top half of Joe’s GUI, because the whole thing is really long and really thin and would have looked weird on the blog…

If a frame is particularly damaged, Joe can capture multiple instances of the image at different settings. These are then merged to achieve a good-quality image using OpenCV functionality. Joe uses FFmpeg to stitch the captured images back together into a film. Some of his grandfather’s reels were badly degraded, but luckily Joe found scripts written by other people to perform advanced digital restoration of film with AviSynth. He provides code he has written for the project on his GitHub account.
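As an illustration of the stitching step, an FFmpeg invocation along these lines turns a numbered sequence of captured frames into a video. The file naming pattern and frame rate here are assumptions (regular 8 mm film typically runs at roughly 16–18 frames per second), not Joe's exact settings:

# Assemble numbered frames into an H.264 video at 18 fps
ffmpeg -framerate 18 -i frame_%05d.png -c:v libx264 -pix_fmt yuv420p family_reel.mp4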

For an account of the project in his own words, check out Joe’s guest post on the IEEE Spectrum website. He also described some of the issues he encountered, and how he resolved them, in The MagPi.

What does Pi Film Capture deliver?

Joe provides videos related to Pi Film Capture on two sites: on his YouTube channel, you’ll find videos in which he has documented the build process of his digitising project. Final results of the project live on Joe’s Vimeo channel, where so far he has uploaded 55 digitised home videos.

m093a: Tom Herman Wedding, Detroit 8/10/63

Shot on 8mm by Leo Willmott, captured and restored by Joe Herman (Not a Wozniak film, but placed in that folder b/c it may be of interest to Hermans)

We’re beyond pleased that our tech is part of this amazing project, helping to reconnect the entire Herman/Willmott clan with their past. And it was great to be able to catch up with Joe, and talk about his build at Maker Faire last year!

Maker Faire New York 2017

We’ll be at Maker Faire New York again on the 23-24 September, and we can’t wait to see the amazing makes the Raspberry Pi community will be presenting there!

Are you going to be at MFNY to show off your awesome Pi-powered project? Tweet us, so we can meet up, check it out and share your achievements!

The post Digitising film reels with Pi Film Capture appeared first on Raspberry Pi.

Delivering Graphics Apps with Amazon AppStream 2.0

Post Syndicated from Deepak Suryanarayanan original https://aws.amazon.com/blogs/compute/delivering-graphics-apps-with-amazon-appstream-2-0/

Sahil Bahri, Sr. Product Manager, Amazon AppStream 2.0

Do you need to provide a workstation class experience for users who run graphics apps? With Amazon AppStream 2.0, you can stream graphics apps from AWS to a web browser running on any supported device. AppStream 2.0 offers a choice of GPU instance types. The range includes the newly launched Graphics Design instance, which allows you to offer a fast, fluid user experience at a fraction of the cost of using a graphics workstation, without upfront investments or long-term commitments.

In this post, I discuss the Graphics Design instance type in detail, and how you can use it to deliver a graphics application such as Siemens NX―a popular CAD/CAM application that we have been testing on AppStream 2.0 with engineers from Siemens PLM.

Graphics Instance Types on AppStream 2.0

First, a quick recap on the GPU instance types available with AppStream 2.0. In July, 2017, we launched graphics support for AppStream 2.0 with two new instance types that Jeff Barr discussed on the AWS Blog:

  • Graphics Desktop
  • Graphics Pro

Many customers in industries such as engineering, media, entertainment, and oil and gas are using these instances to deliver high-performance graphics applications to their users. These instance types are based on dedicated NVIDIA GPUs and can run the most demanding graphics applications, including those that rely on CUDA graphics API libraries.

Last week, we added a new lower-cost instance type: Graphics Design. This instance type is a great fit for engineers, 3D modelers, and designers who use graphics applications that rely on the hardware acceleration of DirectX, OpenGL, or OpenCL APIs, such as Siemens NX, Autodesk AutoCAD, or Adobe Photoshop. The Graphics Design instance is based on AMD’s FirePro S7150x2 Server GPUs and equipped with AMD Multiuser GPU technology. The instance type uses virtualized GPUs to achieve lower costs, and is available in four instance sizes to scale and match the requirements of your applications.

Instance                         vCPUs   Instance RAM (GiB)   GPU Memory (GiB)
stream.graphics-design.large     2       7.5                  1
stream.graphics-design.xlarge    4       15.3                 2
stream.graphics-design.2xlarge   8       30.5                 4
stream.graphics-design.4xlarge   16      61                   8

The following table compares all three graphics instance types on AppStream 2.0, along with example applications you could use with each.

                                       Graphics Design                                  Graphics Desktop        Graphics Pro
Number of instance sizes               4                                                1                       3
GPU memory range                       1–8 GiB                                          4 GiB                   8–32 GiB
vCPU range                             2–16                                             8                       16–32
Memory range                           7.5–61 GiB                                       15 GiB                  122–488 GiB
GPU                                    AMD FirePro S7150x2                              NVIDIA GRID K520        NVIDIA Tesla M60
Price range (N. Virginia AWS Region)   $0.25–$2.00/hour                                 $0.50/hour              $2.05–$8.20/hour
Example applications                   Adobe Premiere Pro, AutoDesk Revit, Siemens NX   AVEVA E3D, SOLIDWORKS   AutoDesk Maya, Landmark DecisionSpace, Schlumberger Petrel

Example graphics instance set up with Siemens NX

In this section, I walk through setting up Siemens NX with Graphics Design instances on AppStream 2.0. After setup is complete, users are able to access NX from within their browser and also access their design files from a file share. You can also use these steps to set up and test your own graphics applications on AppStream 2.0. Here's the workflow:

  1. Create a file share to load and save design files.
  2. Create an AppStream 2.0 image with Siemens NX installed.
  3. Create an AppStream 2.0 fleet and stack.
  4. Invite users to access Siemens NX through a browser.
  5. Validate the setup.

To learn more about AppStream 2.0 concepts and set up, see the previous post Scaling Your Desktop Application Streams with Amazon AppStream 2.0. For a deeper review of all the setup and maintenance steps, see Amazon AppStream 2.0 Developer Guide.

Step 1: Create a file share to load and save design files

To launch and configure the file server

  1. Open the EC2 console and choose Launch Instance.
  2. Scroll to the Microsoft Windows Server 2016 Base Image and choose Select.
  3. Choose an instance type and size for your file server (I chose the general purpose m4.large instance). Choose Next: Configure Instance Details.
  4. Select a VPC and subnet. You launch AppStream 2.0 resources in the same VPC. Choose Next: Add Storage.
  5. If necessary, adjust the size of your EBS volume. Choose Review and Launch, Launch.
  6. On the Instances page, give your file server a name, such as My File Server.
  7. Ensure that the security group associated with the file server instance allows for incoming traffic from the security group that you select for your AppStream 2.0 fleets or image builders. You can use the default security group and select the same group while creating the image builder and fleet in later steps.

Log in to the file server using a remote access client such as Microsoft Remote Desktop. For more information about connecting to an EC2 Windows instance, see Connect to Your Windows Instance.

To enable file sharing

  1. Create a new folder (such as C:\My Graphics Files) and upload the shared files to make available to your users.
  2. From the Windows control panel, enable network discovery.
  3. Choose Server Manager, File and Storage Services, Volumes.
  4. Scroll to Shares and choose Start the Add Roles and Features Wizard. Go through the wizard to install the File Server and Share role.
  5. From the left navigation menu, choose Shares.
  6. Choose Start the New Share Wizard to set up your folder as a file share.
  7. Open the context (right-click) menu on the share and choose Properties, Permissions, Customize Permissions.
  8. Choose Permissions, Add. Add Read and Execute permissions for everyone on the network.

Step 2:  Create an AppStream 2.0 image with Siemens NX installed

To connect to the image builder and install applications

  1. Open the AppStream 2.0 management console and choose Images, Image Builder, Launch Image Builder.
  2. Create a graphics design image builder in the same VPC as your file server.
  3. From the Image builder tab, select your image builder and choose Connect. This opens a new browser tab and displays a desktop to log in to.
  4. Log in to your image builder as ImageBuilderAdmin.
  5. Launch the Image Assistant.
  6. Download and install Siemens NX and other applications on the image builder. I added Blender and Firefox, but you could replace these with your own applications.
  7. To verify the user experience, you can test the application performance on the instance.

Before you finish creating the image, you must mount the file share by enabling a few Microsoft Windows services.

To mount the file share

  1. Open services.msc and check the following services:
  • DNS Client
  • Function Discovery Resource Publication
  • SSDP Discovery
  • UPnP Device Host
  2. If any of the preceding services have Startup Type set to Manual, open the context (right-click) menu on the service and choose Start. Otherwise, open the context (right-click) menu on the service and choose Properties. For Startup Type, choose Manual, Apply. To start the service, choose Start.
  3. From the Windows control panel, enable network discovery.
  4. Create a batch script that mounts a file share from the storage server set up earlier. The file share is mounted automatically when a user connects to the AppStream 2.0 environment.

Logon Script Location: C:\Users\Public\logon.bat

Script Contents:

:loop
net use H: \\path\to\network\share
PING localhost -n 30 >NUL
IF NOT EXIST H:\ GOTO loop

  5. Open gpedit.msc and choose User Configuration, Windows Settings, Scripts. Set logon.bat as the user logon script.
  6. Next, create a batch script that makes the mounted drive visible to the user.

Logon Script Location: C:\Users\Public\startup.bat

Script Contents:
REG DELETE "HKEY_LOCAL_MACHINE\Software\Microsoft\Windows\CurrentVersion\Policies\Explorer" /v "NoDrives" /f

  7. Open Task Scheduler and choose Create Task.
  8. Choose General, provide a task name, and then choose Change User or Group.
  9. For Enter the object name to select, enter SYSTEM and choose Check Names, OK.
  10. Choose Triggers, New. For Begin the task, choose At startup. Under Advanced Settings, change Delay task for to 5 minutes. Choose OK.
  11. Choose Actions, New. Under Settings, for Program/script, enter C:\Users\Public\startup.bat. Choose OK.
  12. Choose Conditions. Under Power, clear the Start the task only if the computer is on AC power check box. Choose OK.
  13. To view your scheduled task, choose Task Scheduler Library. Close Task Scheduler when you are done.

Step 3:  Create an AppStream 2.0 fleet and stack

To create a fleet and stack

  1. In the AppStream 2.0 management console, choose Fleets, Create Fleet.
  2. Give the fleet a name, such as Graphics-Demo-Fleet. Configure the fleet to use the newly created image and the same VPC as your file server.
  3. Choose Stacks, Create Stack. Give the stack a name, such as Graphics-Demo-Stack.
  4. After the stack is created, select it and choose Actions, Associate Fleet. Associate the stack with the fleet you created in step 1.
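These console steps can also be scripted with the AWS CLI. Here is a minimal sketch; the image name, subnet ID, and security group ID are placeholders, and the capacity value is illustrative only:

# Create a fleet from the image built in step 2, using a Graphics Design instance
aws appstream create-fleet \
  --name Graphics-Demo-Fleet \
  --image-name my-siemens-nx-image \
  --instance-type stream.graphics-design.xlarge \
  --compute-capacity DesiredInstances=1 \
  --vpc-config SubnetIds=subnet-0123456789abcdef0,SecurityGroupIds=sg-0123456789abcdef0

aws appstream start-fleet --name Graphics-Demo-Fleet

# Create a stack and associate it with the fleet
aws appstream create-stack --name Graphics-Demo-Stack
aws appstream associate-fleet --fleet-name Graphics-Demo-Fleet --stack-name Graphics-Demo-Stack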

Step 4:  Invite users to access Siemens NX through a browser

To invite users

  1. Choose User Pools, Create User to create users.
  2. Enter a name and email address for each user.
  3. Select the users just created, and choose Actions, Assign Stack to provide access to the stack created in step 2. You can also provide access using SAML 2.0 and connect to your Active Directory if necessary. For more information, see the Enabling Identity Federation with AD FS 3.0 and Amazon AppStream 2.0 post.

Your user receives an email invitation to set up an account and use a web portal to access the applications that you have included in your stack.

Step 5:  Validate the setup

Time for a test drive with Siemens NX on AppStream 2.0!

  1. Open the link for the AppStream 2.0 web portal shared through the email invitation. The web portal opens in your default browser. You must sign in with the temporary password and set a new password. After that, you are taken to your app catalog.
  2. Launch Siemens NX and interact with it using the demo files available in the shared storage folder – My Graphics Files. 

After I launched NX, I captured the screenshot below. The Siemens PLM team also recorded a video with NX running on AppStream 2.0.

Summary

In this post, I discussed the GPU instances available for delivering rich graphics applications to users in a web browser. While I demonstrated a simple setup, you can scale this out to launch a production environment with users signing in using Active Directory credentials,  accessing persistent storage with Amazon S3, and using other commonly requested features reviewed in the Amazon AppStream 2.0 Launch Recap – Domain Join, Simple Network Setup, and Lots More post.

To learn more about AppStream 2.0 and capabilities added this year, see Amazon AppStream 2.0 Resources.

Parallel Processing in Python with AWS Lambda

Post Syndicated from Oz Akan original https://aws.amazon.com/blogs/compute/parallel-processing-in-python-with-aws-lambda/

If you develop an AWS Lambda function with Node.js, you can call multiple web services without waiting for a response due to its asynchronous nature.  All requests are initiated almost in parallel, so you can get results much faster than a series of sequential calls to each web service. Considering the maximum execution duration for Lambda, it is beneficial for I/O bound tasks to run in parallel.

If you develop a Lambda function with Python, parallelism doesn't come by default. Lambda supports Python 2.7 and Python 3.6, both of which have multiprocessing and threading modules. The multiprocessing module supports multiple cores so it is a better choice, especially for CPU intensive workloads. With the threading module, all threads are going to run on a single core, though the performance difference is negligible for network-bound tasks.

In this post, I demonstrate how the Python multiprocessing module can be used within a Lambda function to run multiple I/O bound tasks in parallel.

Example use case

In this example, you call Amazon EC2 and Amazon EBS API operations to find the total EBS volume size for all your EC2 instances in a region.

This is a two-step process:

  • The Lambda function calls EC2 to list all EC2 instances
  • The function calls EBS for each instance to find attached EBS volumes

Sequential Execution

If you make these calls sequentially, during the second step, your code has to loop over all the instances and wait for each response before moving to the next request.

The class named VolumesSequential has the following methods:

  • __init__ creates an EC2 resource.
  • instance_volumes finds the total size of the EBS volumes attached to an EC2 instance.
  • total_size lists all EC2 instances, calls instance_volumes for each, and sums the results to find the total size of all EBS volumes.

Source Code for Sequential Execution

import time
import boto3

class VolumesSequential(object):
    """Finds total volume size for all EC2 instances"""
    def __init__(self):
        self.ec2 = boto3.resource('ec2')

    def instance_volumes(self, instance):
        """
        Finds total size of the EBS volumes attached
        to an EC2 instance
        """
        instance_total = 0
        for volume in instance.volumes.all():
            instance_total += volume.size
        return instance_total

    def total_size(self):
        """
        Lists all EC2 instances in the default region
        and sums result of instance_volumes
        """
        print "Running sequentially"
        instances = self.ec2.instances.all()
        instances_total = 0
        for instance in instances:
            instances_total += self.instance_volumes(instance)
        return instances_total

def lambda_handler(event, context):
    volumes = VolumesSequential()
    _start = time.time()
    total = volumes.total_size()
    print "Total volume size: %s GB" % total
    print "Sequential execution time: %s seconds" % (time.time() - _start)

Parallel Execution

The multiprocessing module that comes with Python 2.7 lets you run multiple processes in parallel. Due to the Lambda execution environment not having /dev/shm (shared memory for processes) support, you can’t use multiprocessing.Queue or multiprocessing.Pool.

If you try to use multiprocessing.Queue, you get an error similar to the following:

[Errno 38] Function not implemented: OSError
…
    sl = self._semlock = _multiprocessing.SemLock(kind, value, maxvalue)
OSError: [Errno 38] Function not implemented

On the other hand, you can use multiprocessing.Pipe instead of multiprocessing.Queue to accomplish what you need without getting any errors during the execution of the Lambda function.

The class named VolumesParallel has the following methods:

  • __init__ creates an EC2 resource
  • instance_volumes finds the total size of EBS volumes attached to an instance
  • total_size finds all instances and runs instance_volumes for each to find the total size of all EBS volumes attached to all EC2 instances.

Source Code for Parallel Execution

import time
from multiprocessing import Process, Pipe
import boto3

class VolumesParallel(object):
    """Finds total volume size for all EC2 instances"""
    def __init__(self):
        self.ec2 = boto3.resource('ec2')

    def instance_volumes(self, instance, conn):
        """
        Finds total size of the EBS volumes attached
        to an EC2 instance
        """
        instance_total = 0
        for volume in instance.volumes.all():
            instance_total += volume.size
        conn.send([instance_total])
        conn.close()

    def total_size(self):
        """
        Lists all EC2 instances in the default region
        and sums result of instance_volumes
        """
        print "Running in parallel"

        # get all EC2 instances
        instances = self.ec2.instances.all()
        
        # create a list to keep all processes
        processes = []

        # create a list to keep connections
        parent_connections = []
        
        # create a process per instance
        for instance in instances:            
            # create a pipe for communication
            parent_conn, child_conn = Pipe()
            parent_connections.append(parent_conn)

            # create the process, pass instance and connection
            process = Process(target=self.instance_volumes, args=(instance, child_conn,))
            processes.append(process)

        # start all processes
        for process in processes:
            process.start()

        # make sure that all processes have finished
        for process in processes:
            process.join()

        instances_total = 0
        for parent_connection in parent_connections:
            instances_total += parent_connection.recv()[0]

        return instances_total


def lambda_handler(event, context):
    volumes = VolumesParallel()
    _start = time.time()
    total = volumes.total_size()
    print "Total volume size: %s GB" % total
    print "Sequential execution time: %s seconds" % (time.time() - _start)

Performance

There are a few differences between the two Lambda functions when it comes to the execution environment. The parallel function requires more memory than the sequential one. You may run the parallel Lambda function with a relatively large memory setting to see how much memory it uses. The amount of memory required by the Lambda function depends on what the function does and how many processes it runs in parallel. To restrict maximum memory usage, you may want to limit the number of parallel executions.
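One way to experiment, assuming a hypothetical function name, is to adjust the memory setting and invoke the function from the AWS CLI, then read Max Memory Used from the REPORT line in the returned log tail:

# Raise the memory setting, invoke the function, and print the decoded log tail
aws lambda update-function-configuration --function-name ebs-volume-total --memory-size 1024
aws lambda invoke --function-name ebs-volume-total --log-type Tail out.json \
  --query 'LogResult' --output text | base64 --decode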

In this case, when you give 1024 MB for both Lambda functions, the parallel function runs about two times faster than the sequential function. I have a handful of EC2 instances and EBS volumes in my account so the test ran way under the maximum execution limit for Lambda. Remember that parallel execution doesn’t guarantee that the runtime for the Lambda function will be under the maximum allowed duration but does speed up the overall execution time.

Sequential Run Time Output

START RequestId: 4c370b12-f9d3-11e6-b46b-b5d41afd648e Version: $LATEST
Running sequentially
Total volume size: 589 GB
Sequential execution time: 3.80066084862 seconds
END RequestId: 4c370b12-f9d3-11e6-b46b-b5d41afd648e
REPORT RequestId: 4c370b12-f9d3-11e6-b46b-b5d41afd648e Duration: 4091.59 ms Billed Duration: 4100 ms  Memory Size: 1024 MB Max Memory Used: 46 MB

Parallel Run Time Output

START RequestId: 4f1328ed-f9d3-11e6-8cd1-c7381c5c078d Version: $LATEST
Running in parallel
Total volume size: 589 GB
Sequential execution time: 1.89170885086 seconds
END RequestId: 4f1328ed-f9d3-11e6-8cd1-c7381c5c078d
REPORT RequestId: 4f1328ed-f9d3-11e6-8cd1-c7381c5c078d Duration: 2069.33 ms Billed Duration: 2100 ms  Memory Size: 1024 MB Max Memory Used: 181 MB 

Summary

In this post, I demonstrated how to run multiple I/O bound tasks in parallel by developing a Lambda function with the Python multiprocessing module. With the help of this module, you freed the CPU from waiting for I/O and fired up several tasks to fit more I/O bound operations into a given time frame. This might be the trick that reduces the overall runtime of a Lambda function, especially when you have many I/O bound tasks and don't want to split the work into smaller chunks.

A Hardware Privacy Monitor for iPhones

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2017/09/a_hardware_priv.html

Andrew “bunnie” Huang and Edward Snowden have designed a hardware device that attaches to an iPhone and monitors it for malicious surveillance activities, even in instances where the phone’s operating system has been compromised. They call it an Introspection Engine, and their use model is a journalist who is concerned about government surveillance:

Our introspection engine is designed with the following goals in mind:

  1. Completely open source and user-inspectable (“You don’t have to trust us”)
  2. Introspection operations are performed by an execution domain completely separated from the phone's CPU (“don't rely on those with impaired judgment to fairly judge their state”)

  3. Proper operation of introspection system can be field-verified (guard against “evil maid” attacks and hardware failures)

  4. Difficult to trigger a false positive (users ignore or disable security alerts when there are too many positives)

  5. Difficult to induce a false negative, even with signed firmware updates (“don’t trust the system vendor” — state-level adversaries with full cooperation of system vendors should not be able to craft signed firmware updates that spoof or bypass the introspection engine)

  6. As much as possible, the introspection system should be passive and difficult to detect by the phone’s operating system (prevent black-listing/targeting of users based on introspection engine signatures)

  7. Simple, intuitive user interface requiring no specialized knowledge to interpret or operate (avoid user error leading to false negatives; “journalists shouldn’t have to be cryptographers to be safe”)

  8. Final solution should be usable on a daily basis, with minimal impact on workflow (avoid forcing field reporters into the choice between their personal security and being an effective journalist)

This looks like fantastic work, and they have a working prototype.

Of course, this does nothing to stop all the legitimate surveillance that happens over a cell phone: location tracking, records of who you talk to, and so on.

BoingBoing post.

New Network Load Balancer – Effortless Scaling to Millions of Requests per Second

Post Syndicated from Jeff Barr original https://aws.amazon.com/blogs/aws/new-network-load-balancer-effortless-scaling-to-millions-of-requests-per-second/

Elastic Load Balancing (ELB) has been an important part of AWS since 2009, when it was launched as part of a three-pack that also included Auto Scaling and Amazon CloudWatch. Since that time we have added many features, and also introduced the Application Load Balancer. Designed to support application-level, content-based routing to applications that run in containers, Application Load Balancers pair well with microservices, streaming, and real-time workloads.

Over the years, our customers have used ELB to support web sites and applications that run at almost any scale — from simple sites running on a T2 instance or two, all the way up to complex applications that run on large fleets of higher-end instances and handle massive amounts of traffic. Behind the scenes, ELB monitors traffic and automatically scales to meet demand. This process, which includes a generous buffer of headroom, has become quicker and more responsive over the years and works well even for our customers who use ELB to support live broadcasts, “flash” sales, and holidays. However, in some situations such as instantaneous fail-over between regions, or extremely spiky workloads, we have worked with our customers to pre-provision ELBs in anticipation of a traffic surge.

New Network Load Balancer
Today we are introducing the new Network Load Balancer (NLB). It is designed to handle tens of millions of requests per second while maintaining high throughput at ultra low latency, with no effort on your part. The Network Load Balancer is API-compatible with the Application Load Balancer, including full programmatic control of Target Groups and Targets. Here are some of the most important features:

Static IP Addresses – Each Network Load Balancer provides a single IP address for each VPC subnet in its purview. If you have targets in a subnet in us-west-2a and other targets in a subnet in us-west-2c, NLB will create and manage two IP addresses (one per subnet); connections to that IP address will spread traffic across the instances in the subnet. You can also specify an existing Elastic IP for each subnet for even greater control. With full control over your IP addresses, Network Load Balancer can be used in situations where IP addresses need to be hard-coded into DNS records, customer firewall rules, and so forth.

Zonality – The IP-per-subnet feature reduces latency with improved performance, improves availability through isolation and fault tolerance and makes the use of Network Load Balancers transparent to your client applications. Network Load Balancers also attempt to route a series of requests from a particular source to targets in a single subnet while still allowing automatic failover.

Source Address Preservation – With Network Load Balancer, the original source IP address and source ports for the incoming connections remain unmodified, so application software need not support X-Forwarded-For, proxy protocol, or other workarounds. This also means that normal firewall rules, including VPC Security Groups, can be used on targets.

Long-running Connections – NLB handles connections with built-in fault tolerance, and can handle connections that are open for months or years, making them a great fit for IoT, gaming, and messaging applications.

Failover – Powered by Route 53 health checks, NLB supports failover between IP addresses within and across regions.

Creating a Network Load Balancer
I can create a Network Load Balancer by opening up the EC2 Console, selecting Load Balancers, and clicking on Create Load Balancer:

I choose Network Load Balancer and click on Create, then enter the details. I can choose an Elastic IP address for each subnet in the target VPC and I can tag the Network Load Balancer:

Then I click on Configure Routing and create a new target group. I enter a name, and then choose the protocol and port. I can also set up health checks that go to the traffic port or to the alternate of my choice:

Then I click on Register Targets, select the EC2 instances that will receive traffic, and click on Add to registered:

I make sure that everything looks good and then click on Create:

The state of my new Load Balancer is provisioning, switching to active within a minute or so:

For testing purposes, I simply grab the DNS name of the Load Balancer from the console (in practice I would use Amazon Route 53 and a more friendly name):

Then I sent it a ton of traffic (I intended to let it run for just a second or two but got distracted and it created a huge number of processes, so this was a happy accident):

$ while true;
> do
>   wget http://nlb-1-6386cc6bf24701af.elb.us-west-2.amazonaws.com/phpinfo2.php &
> done

A more disciplined test would use a tool like Bees with Machine Guns, of course!

I took a quick break to let some traffic flow and then checked the CloudWatch metrics for my Load Balancer, finding that it was able to handle the sudden onslaught of traffic with ease:

I also looked at my EC2 instances to see how they were faring under the load (really well, it turns out):

It turns out that my colleagues did run a more disciplined test than I did. They set up a Network Load Balancer and backed it with an Auto Scaled fleet of EC2 instances. They set up a second fleet composed of hundreds of EC2 instances, each running Bees with Machine Guns and configured to generate traffic with highly variable request and response sizes. Beginning at 1.5 million requests per second, they quickly turned the dial all the way up, reaching over 3 million requests per second and 30 Gbps of aggregate bandwidth before maxing out their test resources.
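If you prefer to script the setup rather than click through the console, the equivalent CLI calls look roughly like this. All of the IDs and ARNs below are placeholders:

# Create the Network Load Balancer, optionally pinning an Elastic IP per subnet
aws elbv2 create-load-balancer \
  --name my-nlb --type network \
  --subnet-mappings SubnetId=subnet-0123456789abcdef0,AllocationId=eipalloc-0123456789abcdef0

# Create a TCP target group, register instances, and add a listener
aws elbv2 create-target-group \
  --name my-nlb-targets --protocol TCP --port 80 --vpc-id vpc-0123456789abcdef0

aws elbv2 register-targets \
  --target-group-arn <target-group-arn> \
  --targets Id=i-0123456789abcdef0 Id=i-0fedcba9876543210

aws elbv2 create-listener \
  --load-balancer-arn <load-balancer-arn> \
  --protocol TCP --port 80 \
  --default-actions Type=forward,TargetGroupArn=<target-group-arn>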

Choosing a Load Balancer
As always, you should consider the needs of your application when you choose a load balancer. Here are some guidelines:

Network Load Balancer (NLB) – Ideal for load balancing of TCP traffic, NLB is capable of handling millions of requests per second while maintaining ultra-low latencies. NLB is optimized to handle sudden and volatile traffic patterns while using a single static IP address per Availability Zone.

Application Load Balancer (ALB) – Ideal for advanced load balancing of HTTP and HTTPS traffic, ALB provides advanced request routing that supports modern application architectures, including microservices and container-based applications.

Classic Load Balancer (CLB) – Ideal for applications that were built within the EC2-Classic network.

For a side-by-side feature comparison, see the Elastic Load Balancer Details table.

If you are currently using a Classic Load Balancer and would like to migrate to a Network Load Balancer, take a look at our new Load Balancer Copy Utility. This Python tool will help you to create a Network Load Balancer with the same configuration as an existing Classic Load Balancer. It can also register your existing EC2 instances with the new load balancer.

Pricing & Availability
Like the Application Load Balancer, pricing is based on Load Balancer Capacity Units, or LCUs. Billing is $0.006 per LCU, based on the highest value seen across the following dimensions:

  • Bandwidth – 1 GB per LCU.
  • New Connections – 800 per LCU.
  • Active Connections – 100,000 per LCU.

Most applications are bandwidth-bound and should see a cost reduction (for load balancing) of about 25% when compared to Application or Classic Load Balancers.

Network Load Balancers are available today in all AWS commercial regions except China (Beijing), supported by AWS CloudFormation, Auto Scaling, and Amazon ECS.

Jeff;

 

Disabling Intel Hyper-Threading Technology on Amazon EC2 Windows Instances

Post Syndicated from Brian Beach original https://aws.amazon.com/blogs/compute/disabling-intel-hyper-threading-technology-on-amazon-ec2-windows-instances/

In a prior post, Disabling Intel Hyper-Threading on Amazon Linux, I investigated how the Linux kernel enumerates CPUs. I also discussed the options to disable Intel Hyper-Threading (HT Technology) in Amazon Linux running on Amazon EC2.

In this post, I do the same for Microsoft Windows Server 2016 running on EC2 instances. I begin with a quick review of HT Technology and the reasons you might want to disable it. I also recommend that you take a moment to review the prior post for a more thorough foundation.

HT Technology

HT Technology makes a single physical processor appear as multiple logical processors. Each core in an Intel Xeon processor has two threads of execution. Most of the time, these threads can progress independently; one thread executing while the other is waiting on a relatively slow operation (for example, reading from memory) to occur. However, the two threads do share resources and occasionally one thread is forced to wait while the other is executing.

There are a few unique situations where disabling HT Technology can improve performance. One example is high performance computing (HPC) workloads that rely heavily on floating point operations. In these rare cases, it can be advantageous to disable HT Technology. However, these cases are rare, and for the overwhelming majority of workloads you should leave it enabled. I recommend that you test with and without HT Technology enabled, and only disable threads if you are sure it will improve performance.

Exploring HT Technology on Microsoft Windows

Here’s how Microsoft Windows enumerates CPUs. As before, I am running these examples on an m4.2xlarge. I also chose to run Windows Server 2016, but you can walk through these exercises on any version of Windows. Remember that the m4.2xlarge has eight vCPUs, and each vCPU is a thread of an Intel Xeon core. Therefore, the m4.2xlarge has four cores, each of which run two threads, resulting in eight vCPUs.

Windows does not have a built-in utility to examine CPU configuration, but you can download the Sysinternals coreinfo utility from Microsoft’s website. This utility provides useful information about the system CPU and memory topology. For this walkthrough, you enumerate the individual CPUs, which you can do by running coreinfo -c. For example:

C:\Users\Administrator >coreinfo -c

Coreinfo v3.31 - Dump information on system CPU and memory topology
Copyright (C) 2008-2014 Mark Russinovich
Sysinternals - www.sysinternals.com

Logical to Physical Processor Map:
**------ Physical Processor 0 (Hyperthreaded)
--**---- Physical Processor 1 (Hyperthreaded)
----**-- Physical Processor 2 (Hyperthreaded)
------** Physical Processor 3 (Hyperthreaded)

As you can see from the screenshot, the coreinfo utility displays a table where each row is a physical core and each column is a logical CPU. In other words, the two asterisks on the first line indicate that CPU 0 and CPU 1 are the two threads in the first physical core. Therefore, my m4.2xlarge has four physical processors and each processor has two threads, resulting in eight total CPUs, just as expected.

It is interesting to note that Windows Server 2016 enumerates CPUs in a different order than Linux. Remember from the prior post that Linux enumerated the first thread in each core, followed by the second thread in each core. You can see from the output earlier that Windows Server 2016 enumerates both threads in the first core, then both threads in the second core, and so on. The diagram below shows the relationship of CPUs to cores and threads in both operating systems.

In the Linux post, I disabled CPUs 4–7, leaving one thread per core, and effectively disabling HT Technology. You can see from the diagram that you must disable the odd-numbered threads (that is, 1, 3, 5, and 7) to achieve the same result in Windows. Here's how to do that.

Disabling HT Technology on Microsoft Windows

In Linux, you can globally disable CPUs dynamically. In Windows, there is no direct equivalent that I could find, but there are a few alternatives.

First, you can disable CPUs using the msconfig.exe tool. If you choose Boot, Advanced Options, you have the option to set the number of processors. In the example below, I limit my m4.2xlarge to four CPUs. Restart for this change to take effect.

Unfortunately, Windows does not disable hyperthreaded CPUs first and then real cores, as Linux does. As you can see in the following output, coreinfo reports that my m4.2xlarge has two real cores and four hyperthreads, after rebooting. Msconfig.exe is useful for disabling cores, but it does not allow you to disable HT Technology.

Note: If you have been following along, you can re-enable all your CPUs by unselecting the Number of processors check box and rebooting your system.

 

C:\Users\Administrator >coreinfo -c

Coreinfo v3.31 - Dump information on system CPU and memory topology
Copyright (C) 2008-2014 Mark Russinovich
Sysinternals - www.sysinternals.com

Logical to Physical Processor Map:
**-- Physical Processor 0 (Hyperthreaded)
--** Physical Processor 1 (Hyperthreaded)

While you cannot disable HT Technology systemwide, Windows does allow you to associate a particular process with one or more CPUs. Microsoft calls this "processor affinity". To see an example, use the following steps.

  1. Launch an instance of Notepad.
  2. Open Windows Task Manager and choose Processes.
  3. Open the context (right click) menu on notepad.exe and choose Set Affinity….

This brings up the Processor Affinity dialog box.

As you can see, all the CPUs are allowed to run this instance of notepad.exe. You can uncheck a few CPUs to exclude them. Windows is smart enough to allow any scheduled operations to continue to completion on disabled CPUs. It then saves its state at the next scheduling event, and resumes those operations on another CPU. To ensure that only one thread in each core is able to run a process, you uncheck every other CPU. This effectively disables HT Technology for this process. For example:

Of course, this can be tedious when you have a large number of cores. Remember that the x1.32xlarge has 128 CPUs. Luckily, you can set the affinity of a running process from PowerShell using the Get-Process cmdlet. For example:

PS C:\> (Get-Process -Name 'notepad').ProcessorAffinity = 0x55;

The ProcessorAffinity attribute takes a bitmask in hexadecimal format. 0x55 in hex is equivalent to 01010101 in binary. Think of the binary encoding as 1=enabled and 0=disabled. This is slightly confusing, but the mask is read from right to left, so that CPU 0 is the rightmost bit and CPU 7 is the leftmost bit. Therefore, 01010101 means that the first thread in each core is enabled, just as it was in the diagram earlier.

The calculator built into Windows includes a “programmer view” that helps you convert from hexadecimal to binary. In addition, the ProcessorAffinity attribute is a 64-bit number. Therefore, you can only configure the processor affinity on systems up to 64 CPUs. At the moment, only the x1.32xlarge has more than 64 vCPUs.

In the preceding examples, you changed the processor affinity of a running process. Sometimes, you want to start a process with the affinity already configured. You can do this using the start command. The start command includes an affinity flag that takes a hexadecimal number like the PowerShell example earlier.

C:\Users\Administrator>start /affinity 55 notepad.exe

It is interesting to note that a child process inherits the affinity from its parent. For example, the following commands create a batch file that launches Notepad, and starts the batch file with the affinity set. If you examine the instance of Notepad launched by the batch file, you see that the affinity has been applied to it as well.

C:\Users\Administrator>echo notepad.exe > test.bat
C:\Users\Administrator>start /affinity 55 test.bat

This means that you can set the affinity of your task scheduler and any tasks that the scheduler starts inherits the affinity. So, you can disable every other thread when you launch the scheduler and effectively disable HT Technology for all of the tasks as well. Be sure to test this point, however, as some schedulers override the normal inheritance behavior and explicitly set processor affinity when starting a child process.

Conclusion

While the Windows operating system does not allow you to disable logical CPUs, you can set processor affinity on individual processes. You also learned that Windows Server 2016 enumerates CPUs in a different order than Linux. Therefore, you can effectively disable HT Technology by restricting a process to every other CPU. Finally, you learned how to set affinity of both new and running processes using Task Manager, PowerShell, and the start command.

Note: this technical approach has nothing to do with control over software licensing, or licensing rights, which are sometimes linked to the number of “CPUs” or “cores.” For licensing purposes, those are legal terms, not technical terms. This post did not cover anything about software licensing or licensing rights.

If you have questions or suggestions, please comment below.

New – Application Load Balancing via IP Address to AWS & On-Premises Resources

Post Syndicated from Jeff Barr original https://aws.amazon.com/blogs/aws/new-application-load-balancing-via-ip-address-to-aws-on-premises-resources/

I told you about the new AWS Application Load Balancer last year and showed you how to use it to do implement Layer 7 (application) routing to EC2 instances and to microservices running in containers.

Some of our customers are building hybrid applications as part of a longer-term move to AWS. These customers have told us that they would like to use a single Application Load Balancer to spread traffic across a combination of existing on-premises resources and new resources running in the AWS Cloud. Other customers would like to spread traffic to web or database servers that are scattered across two or more Virtual Private Clouds (VPCs), host multiple services on the same instance with distinct IP addresses but a common port number, and to offer support for IP-based virtual hosting for clients that do not support Server Name Indication (SNI). Another group of customers would like to host multiple instances of a service on the same instance (perhaps within containers), while using multiple interfaces and security groups to implement fine-grained access control.

These situations arise within a broad set of hybrid, migration, disaster recovery, and on-premises use cases and scenarios.

Route to IP Addresses
In order to address these use cases, Application Load Balancers can now route traffic directly to IP addresses. These addresses can be in the same VPC as the ALB, a peer VPC in the same region, on an EC2 instance connected to a VPC by way of ClassicLink, or on on-premises resources at the other end of a VPN connection or AWS Direct Connect connection.

Application Load Balancers already group targets into target groups. As part of today's launch, each target group now has a target type attribute:

instance – Targets are registered by way of EC2 instance IDs, as before.

ip – Targets are registered as IP addresses. You can use any IPv4 address from the load balancer's VPC CIDR for targets within the load balancer's VPC, and any IPv4 address from the RFC 1918 ranges (10.0.0.0/8, 172.16.0.0/12, and 192.168.0.0/16) or the RFC 6598 range (100.64.0.0/10) for targets located outside the load balancer's VPC (this includes peered VPCs, EC2-Classic, and on-premises targets reachable over Direct Connect or VPN).

Each target group has a load balancer and health check configuration, and publishes metrics to CloudWatch, as has always been the case.

Let’s say that you are in the transition phase of an application migration to AWS or want to use AWS to augment on-premises resources with EC2 instances and you need to distribute application traffic across both your AWS and on-premises resources. You can achieve this by registering all the resources (AWS and on-premises) to the same target group and associating the target group with a load balancer. Alternatively, you can use DNS-based weighted load balancing across AWS and on-premises resources using two load balancers, that is, one load balancer for AWS and another for on-premises resources. In the scenario where application-A back ends are in a VPC and application-B back ends are in on-premises locations, you can put the back ends for each application in different target groups and use content-based routing to route traffic to each target group.

Creating a Target Group
Here’s how I create a target group that sends traffic to some IP addresses as part of the process of creating an Application Load Balancer. I enter a name (ip-target-1) and select ip as the Target type:

Then I enter IP address targets. These can be from the VPC that hosts the load balancer:

Or they can be other private IP addresses within one of the private ranges listed above, for targets outside of the VPC that hosts the load balancer:

After I review the settings and create the load balancer, traffic will be sent to the designated IP addresses as soon as they pass the health checks. Each load balancer can accommodate up to 1000 targets.
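The same target group can also be created and populated from the CLI. In this sketch the IDs, ARNs, and addresses are placeholders; note that AvailabilityZone=all marks a target that lives outside the load balancer's VPC (for example, an on-premises server reachable over Direct Connect or VPN):

# Create a target group whose targets are IP addresses rather than instance IDs
aws elbv2 create-target-group \
  --name ip-target-1 --target-type ip \
  --protocol HTTP --port 80 --vpc-id vpc-0123456789abcdef0

# Register one in-VPC address and one address outside the VPC
aws elbv2 register-targets \
  --target-group-arn <target-group-arn> \
  --targets Id=10.0.1.20 Id=192.168.10.5,AvailabilityZone=all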

I can examine my target group and edit the set of targets at any time:

As you can see, one of my targets was not healthy when I took this screenshot (this was by design). Metrics are published to CloudWatch for each target group; I can see them in the Console and I can create CloudWatch Alarms:
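As an illustration, here is a minimal Boto3 sketch of an alarm on the target group’s UnHealthyHostCount metric. The dimension values (the trailing portions of the target group and load balancer ARNs) and the SNS topic are placeholders:

import boto3

cloudwatch = boto3.client("cloudwatch", region_name="us-east-1")

# Alarm when any target in the group has been unhealthy for three minutes.
cloudwatch.put_metric_alarm(
    AlarmName="ip-target-1-unhealthy-hosts",
    Namespace="AWS/ApplicationELB",
    MetricName="UnHealthyHostCount",
    Dimensions=[
        {"Name": "TargetGroup", "Value": "targetgroup/ip-target-1/0123456789abcdef"},
        {"Name": "LoadBalancer", "Value": "app/my-load-balancer/50dc6c495c0c9188"},
    ],
    Statistic="Maximum",
    Period=60,
    EvaluationPeriods=3,
    Threshold=0,
    ComparisonOperator="GreaterThanThreshold",
    AlarmActions=["arn:aws:sns:us-east-1:123456789012:ops-alerts"],
)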

Available Now
This feature is available now and you can start using it today in all AWS Regions.

Jeff;

 

AWS Hot Startups – August 2017

Post Syndicated from Tina Barr original https://aws.amazon.com/blogs/aws/aws-hot-startups-august-2017/

There’s no doubt about it – Artificial Intelligence is changing the world and how it operates. Across industries, organizations from startups to Fortune 500s are embracing AI to develop new products, services, and opportunities that are more efficient and accessible for their consumers. From driverless cars to better preventative healthcare to smart home devices, AI is driving innovation at a fast rate and will continue to play a more important role in our everyday lives.

This month we’d like to highlight startups using AI solutions to help companies grow. We are pleased to feature:

  • SignalBox – a simple and accessible deep learning platform to help businesses get started with AI.
  • Valossa – an AI video recognition platform for the media and entertainment industry.
  • Kaliber – innovative applications for businesses using facial recognition, deep learning, and big data.

SignalBox (UK)

In 2016, SignalBox founder Alain Richardt was hearing the same comments from developers, data scientists, and business leaders: they wanted to get into deep learning but didn’t know where to start. Alain saw an opportunity to commodify and apply deep learning by providing a platform that does the heavy lifting, with an easy-to-use web interface, blueprints for common tasks, and a single click to productize the models. With SignalBox, companies can start building deep learning models with no coding at all – they just select a data set, choose a network architecture, and go. SignalBox also offers step-by-step tutorials, tips and tricks from industry experts, and consulting services for customers that want an end-to-end AI solution.

SignalBox offers a variety of solutions that are being used across many industries for energy modeling, fraud detection, customer segmentation, insurance risk modeling, inventory prediction, real estate prediction, and more. Existing data science teams are using SignalBox to accelerate their innovation cycle. One innovative UK startup, Energi Mine, recently worked with SignalBox to develop deep networks that predict anomalous energy consumption patterns and do time series predictions on energy usage for businesses with hundreds of sites.

SignalBox uses a variety of AWS services including Amazon EC2, Amazon VPC, Amazon Elastic Block Store, and Amazon S3. The ability to rapidly provision EC2 GPU instances has been a critical factor in their success, both in terms of keeping operational expenses low and in terms of speed to market. Amazon API Gateway has allowed for operational automation, giving SignalBox the ability to control its infrastructure.

To learn more about SignalBox, visit here.

Valossa (Finland)

As students at the University of Oulu in Finland, the Valossa founders spent years doing research in the computer science and AI labs. During that time, the team witnessed how the world was moving beyond text, with video playing a greater role in day-to-day communication. This spawned an idea to use technology to automatically understand what an audience is viewing and share that information with a global network of content producers. Since 2015, Valossa has been building next generation AI applications to benefit the media and entertainment industry and is moving beyond the capabilities of traditional visual recognition systems.

Valossa’s AI is capable of analyzing any video stream. The AI studies a vast array of data within videos and automatically converts that information into descriptive tags, categories, and overviews. Basically, it sees, hears, and understands videos much as a human does. The Valossa AI can detect people, visual and auditory concepts, and key speech elements, and can label explicit content to make moderating and filtering simpler. Valossa’s solutions are designed to provide value for the content production workflow, from media asset management to end-user applications for content discovery. AI-annotated content allows online viewers to jump directly to their favorite scenes or to search for specific topics and actors within a video.

Valossa leverages AWS to deliver the industry’s first complete AI video recognition platform. Using Amazon EC2 GPU instances, Valossa can easily scale their computation capacity based on customer activity. High-volume video processing with GPU instances provides the necessary speed for time-sensitive workflows. The geo-located Availability Zones in EC2 allow Valossa to bring resources close to their customers to minimize network delays. Valossa also uses Amazon S3 for video ingestion and to provide end-user video analytics, which makes managing and accessing media data easy and highly scalable.

To see how Valossa works, check out www.WhatIsMyMovie.com or enable the Alexa Skill, Valossa Movie Finder. To try the Valossa AI, sign up for free at www.valossa.com.

Kaliber (San Francisco, CA)

Serial entrepreneurs Ray Rahman and Risto Haukioja founded Kaliber in 2016. The pair had previously worked in startups building smart cities and online privacy tools, and teamed up to bring AI to the workplace and change the hospitality industry. Our world is designed to appeal to our senses – stores and warehouses have clearly marked aisles, products are colorfully packaged, and we use these designs to differentiate one thing from another. We tell each other apart by our faces, and previously that was something only humans could measure or act upon. Kaliber is using facial recognition, deep learning, and big data to create solutions for business use. Markets and companies that aren’t typically associated with cutting-edge technology will be able to use their existing camera infrastructure in a whole new way, making them more efficient and better able to serve their customers.

Computer video processing is rapidly expanding, and Kaliber believes that video recognition will extend to far more than security cameras and robots. Using the clients’ network of in-house cameras, Kaliber’s platform extracts key data points and maps them to actionable insights using their machine learning (ML) algorithm. Dashboards connect users to the client’s BI tools via the Kaliber enterprise APIs, and managers can view these analytics to improve their real-world processes, taking immediate corrective action with real-time alerts. Kaliber’s Real Metrics are aimed at combining the power of image recognition with ML to ultimately provide a more meaningful experience for all.

Kaliber uses many AWS services, including Amazon Rekognition, Amazon Kinesis, AWS Lambda, Amazon EC2 GPU instances, and Amazon S3. These services have been instrumental in helping Kaliber meet the needs of enterprise customers in record time.

Learn more about Kaliber here.

Thanks for reading and we’ll see you next month!

-Tina