Tag Archives: cloud

AWS Bill Simplification – Consolidated CloudWatch Charges

Post Syndicated from Jeff Barr original https://aws.amazon.com/blogs/aws/aws-bill-simplification-consolidated-cloudwatch-charges/

The bill that you receive for your use of AWS in July will include a change in the way that Amazon CloudWatch charges are presented. The CloudWatch team made this change in order to make your bill simpler and easier to understand.

Consolidating Charges
In the past, charges for your usage of CloudWatch were split between two sections of your bill. For historical reasons, the charges for CloudWatch Alarms, CloudWatch Metrics, and calls to the CloudWatch API were reported in the Elastic Compute Cloud (EC2) detail section, while charges for CloudWatch Logs and CloudWatch Dashboards were reported in the CloudWatch detail section.

We have received feedback that splitting the charges across two sections of the bill made it difficult to locate and understand the entire set of monitoring charges. In order to address this issue, we are moving the charges that were formerly listed in the Elastic Compute Cloud (EC2) detail section to the CloudWatch detail section. We are making the same change to the detailed billing report, moving the affected charges from the AmazonEC2 product code to the AmazonCloudWatch product code and changing to the AmazonCloudWatch product name. This change does not affect your overall bill; it simply consolidates all of the charges for the use of CloudWatch in one section.

Billing Metric
The CloudWatch billing metric named Estimated Charges can be viewed as a Total Estimated Charge, or broken down By Service.

The total will not change. However, as noted above, the charges that formerly had AmazonEC2 as the ServiceName dimension will now have it set to AmazonCloudWatch.

You may need to adjust the thresholds on your billing alarms as a result.
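If you manage billing alarms with the AWS CLI, an alarm scoped to the consolidated CloudWatch charges might look like the following sketch. The alarm name, threshold, and notification topic are assumptions; note that billing metrics are published only in the US East (N. Virginia) Region:

aws cloudwatch put-metric-alarm --region us-east-1 \
  --alarm-name EstimatedCloudWatchCharges \
  --namespace "AWS/Billing" --metric-name EstimatedCharges \
  --dimensions Name=ServiceName,Value=AmazonCloudWatch Name=Currency,Value=USD \
  --statistic Maximum --period 21600 --evaluation-periods 1 \
  --threshold 50 --comparison-operator GreaterThanOrEqualToThreshold \
  --alarm-actions <arn of your billing notification topic>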

Once again, your total AWS bill will not change. You will begin to see the consolidated charges for CloudWatch in your AWS bill for July 2017.

Jeff;

 

From Idea to Launch: Getting Your First Customers

Post Syndicated from Gleb Budman original https://www.backblaze.com/blog/how-to-get-your-first-customers/


After deciding to build an unlimited backup service and developing our own storage platform, the next step was to get customers and feedback. Not all customers are created equal. Let’s talk about the types, and when and how to attract them.

How to Get Your First Customers

First Step – Don’t Launch Publicly
Launch when you’re ready for the judgments of people who don’t know you at all. Until then, don’t launch. Sign up users and customers that you know, that you can trust to cut you some slack (while providing you feedback), or, at minimum, whose expectations you can set. For months the Backblaze website was a single page with no ability to get the product and minimal info on what it would be. This is not to counter the Lean Startup ‘iterate quickly with customer feedback’ advice. Rather, this is an acknowledgement that there are different types of feedback required based on your development stage.

Sign Up Your Friends
We knew all of our first customers; they were friends, family, and previous co-workers. Many knew what we were up to and were excited to help us. No magic marketing or tech savviness was required to reach them – we just asked that they try the service. We asked them to provide us feedback on their experience and collected it through email and conversations. While the feedback wasn’t unbiased, it was nonetheless wide-ranging, real, and often insightful. These people were willing to spend time carefully thinking about their feedback and delving deeper into the conversations.

Broaden to Beta
Unless you’re famous or your service costs $1 million per customer, you’ll probably need to expand quickly beyond your friends to build a business – and to get broader feedback. Our next step was to broaden the customer base to beta users.

Opening up the service in beta provides three benefits:

  1. Air cover for the early warts. There are going to be issues, bugs, unnecessarily complicated user flows, and poorly worded text. Beta tells people, “We don’t consider the product ‘done’ and you should expect some of these issues. Please be patient with us.”
  2. A request for feedback. Some people always provide feedback, but beta communicates that you want it.
  3. An awareness opportunity. Opening up in beta provides an early (but not only) opportunity to have an announcement and build awareness.

Pitching Beta to Press
Not all press cares about, or is even willing to cover, beta products. Much of the mainstream press wants to write about services that are fully live, have scale, and are important in the marketplace. However, there are a number of sites that like to cover the leading edge – and that means covering betas. TechCrunch, Ars Technica, and SimpleHelp covered our initial private beta launch. I’ll go into the details of how to work with the press to cover your announcements in a post next month.

Private vs. Public Beta
Both private and public beta provide all three of the benefits above. The difference between the two is that private betas are much more controlled, whereas public ones bring in more users. But this isn’t an either/or – I recommend doing both.

Private Beta
For our original beta in 2008, we decided that we were comfortable with about 1,000 users subscribing to our service. That would provide us with a healthy amount of feedback and get some early adoption, while not overwhelming us or our server capacity, and equally important not causing cash flow issues from having to buy more equipment. So we decided to limit the sign-up to only the first 1,000 people who signed up; then we would shut off sign-ups for a while.

But how do you even get 1,000 people to sign up for your service? In our case, get some major publications to write about our beta. (Note: In a future post I’ll explain exactly how to find and reach out to writers. Sign up to receive all of the entrepreneurial posts in this series.)

Public Beta
For our original service (computer backup), we did not have a public beta; but when we launched Backblaze B2, we had a private and then a public beta. The private beta allowed us to work out early kinks, while the public beta brought us a more varied set of use cases. In public beta, there is no cap on the number of users that may try the service.

While this is a first-class problem to have, if your service is flooded and stops working, it’s still a problem. Think through what you will do if that happens. In our early days, when our system could get overwhelmed by volume, we had a static web page hosted with a different registrar that wouldn’t let customers sign up but would tell them when our service would be open again. When we reached a critical volume level we would redirect to it in order to at least provide status for when we could accept more customers.

Collect Feedback
Since one of the goals of betas is to get feedback, we made sure that we had our email addresses clearly presented on the site so users could send us thoughts. We were most interested in broad qualitative feedback on users’ experience, so all emails went to an internal mailing list that would be read by everyone at Backblaze.

For our B2 public and private betas, we also added an optional short survey to the sign-up process. In order to be considered for the private beta you had to fill the survey out, though we found that 80% of users continued to fill out the survey even when it was not required. This survey had both closed-ended questions (“how much data do you have?”) and open-ended ones (“what do you want to use cloud storage for?”).

BTW, despite us getting a lot of feedback now via our support team, Twitter, and marketing surveys, we are always open to more – you can email me directly at gleb.budman {at} backblaze.com.

Don’t Throw Away Users
Initially our backup service was available only on Windows, but we had an email sign-up list for people who wanted it for their Mac. This provided us with a sense of market demand and a ready list of folks who could be beta users and early adopters when we had a Mac version. Have a service targeted at doctors but lawyers are expressing interest? Capture that.

Product Launch

When
The first question is “when” to launch. Presuming your service is in ‘public beta’, what is the advantage of moving out of beta and into a “version 1.0”, “gold”, or “public availability”? That depends on your service and customer base. Some services fly through public beta. Gmail, on the other hand, was (in)famous for being in beta for 5 years, despite having over 100 million users.

The term beta says to users, “give us some leeway, but feel free to use the service”. That’s fine for many consumer apps and will have near zero impact on them. However, services aimed at businesses and government will often not be adopted with a beta label as the enterprise customers want to know the company feels the service is ‘ready’. While Backblaze started out as a purely consumer service, because it was a data backup service, it was important for customers to trust that the service was ready.

No product is bug-free. But from a product readiness perspective, the nomenclature should also be a reflection of the quality of the product. You can launch a product with one feature that works well out of beta. But a product with fifty features on which half the users will bump into problems should likely stay in beta. The customer feedback, surveys, and your own internal testing should guide you in determining this quality during the beta. Be careful about “we’ve only seen that one time” or “I haven’t been able to reproduce that on my machine”; those issues are likely to scale with customers when you launch.

How
Launching out of beta can be as simple as removing the beta label from the website/product. However, this can be a great time to reach out to press, write a blog post, and send an email announcement to your customers.

Consider thanking your beta testers somehow; can they get some feature turned on for free, an extension of their trial, or premium support? If nothing else, remember to thank them for their feedback. Users that signed up during your beta are likely the ones who will propel your service. They had the need and interest to both be early adopters and deal with bugs. They are likely the key to getting 1,000 true fans.

The Beginning
The title of this post was “Getting your first customers”, because getting to launch may feel like the peak of your journey when you’re pre-launch, but it really is just the beginning. It’s a step along the journey of building your business. If your launch is wildly successful, enjoy it, work to build on the momentum, but don’t lose track of building your business. If your launch is a dud, go out for a coffee with your team, say “well that sucks”, and then get back to building your business. You can learn a tremendous amount from your early customers, and they can become your biggest fans, but the success of your business will depend on what you continue to do the months and years after your launch.

The post From Idea to Launch: Getting Your First Customers appeared first on Backblaze Blog | Cloud Storage & Cloud Backup.

How to Create an AMI Builder with AWS CodeBuild and HashiCorp Packer – Part 2

Post Syndicated from Heitor Lessa original https://aws.amazon.com/blogs/devops/how-to-create-an-ami-builder-with-aws-codebuild-and-hashicorp-packer-part-2/

Written by AWS Solutions Architects Jason Barto and Heitor Lessa

 
In Part 1 of this post, we described how AWS CodeBuild, AWS CodeCommit, and HashiCorp Packer can be used to build an Amazon Machine Image (AMI) from the latest version of Amazon Linux. In this post, we show how to use AWS CodePipeline, AWS CloudFormation, and Amazon CloudWatch Events to continuously ship new AMIs. We use Ansible by Red Hat to harden the OS on the AMIs through a well-known set of security controls outlined by the Center for Internet Security in its CIS Amazon Linux Benchmark.

You’ll find the source code for this post in our GitHub repo.

At the end of this post, we will have the following architecture:

Requirements

 
To follow along, you will need Git and a text editor. Make sure Git is configured to work with AWS CodeCommit, as described in Part 1.

Technologies

 
In addition to the services and products used in Part 1 of this post, we also use these AWS services and third-party software:

AWS CloudFormation gives developers and systems administrators an easy way to create and manage a collection of related AWS resources, provisioning and updating them in an orderly and predictable fashion.

Amazon CloudWatch Events enables you to react selectively to events in the cloud and in your applications. Specifically, you can create CloudWatch Events rules that match event patterns, and take actions in response to those patterns.

AWS CodePipeline is a continuous integration and continuous delivery service for fast and reliable application and infrastructure updates. AWS CodePipeline builds, tests, and deploys your code every time there is a code change, based on release process models you define.

Amazon SNS is a fast, flexible, fully managed push notification service that lets you send individual messages or fan out messages to large numbers of recipients. Amazon SNS makes it simple and cost-effective to send push notifications to mobile device users or email recipients. The service can even send messages to other distributed services.

Ansible is a simple IT automation system that handles configuration management, application deployment, cloud provisioning, ad-hoc task-execution, and multinode orchestration.

Getting Started

 
We use CloudFormation to bootstrap the following infrastructure:

  • AWS CodeCommit repository – Git repository where the AMI builder code is stored.
  • S3 bucket – Build artifact repository used by AWS CodePipeline and AWS CodeBuild.
  • AWS CodeBuild project – Executes the AWS CodeBuild instructions contained in the build specification file.
  • AWS CodePipeline pipeline – Orchestrates the AMI build process, triggered by new changes in the AWS CodeCommit repository.
  • SNS topic – Notifies subscribed email addresses when an AMI build is complete.
  • CloudWatch Events rule – Defines how the AMI builder should send a custom event to notify an SNS topic.

The AMI Builder launch template is available in N. Virginia (us-east-1) and Ireland (eu-west-1).

After launching the CloudFormation template linked here, we will have a pipeline in the AWS CodePipeline console. (A Failed status at this stage simply means we don’t have any data in our newly created AWS CodeCommit Git repository yet.)

Next, we will clone the newly created AWS CodeCommit repository.

If this is your first time connecting to an AWS CodeCommit repository, please see the instructions in our documentation on Setup steps for HTTPS Connections to AWS CodeCommit Repositories.

To clone the AWS CodeCommit repository (console)

  1. From the AWS Management Console, open the AWS CloudFormation console.
  2. Choose the AMI-Builder-Blogpost stack, and then choose Output.
  3. Make a note of the Git repository URL.
  4. Use git to clone the repository.

For example: git clone https://git-codecommit.eu-west-1.amazonaws.com/v1/repos/AMI-Builder_repo

To clone the AWS CodeCommit repository (CLI)

# Retrieve CodeCommit repo URL
git_repo=$(aws cloudformation describe-stacks --query 'Stacks[0].Outputs[?OutputKey==`GitRepository`].OutputValue' --output text --stack-name "AMI-Builder-Blogpost")

# Clone repository locally
git clone ${git_repo}

Bootstrap the Repo with the AMI Builder Structure

 
Now that our infrastructure is ready, download all the files and templates required to build the AMI.

Your local Git repo should have the following structure:

.
├── ami_builder_event.json
├── ansible
├── buildspec.yml
├── cloudformation
├── packer_cis.json

Next, push these changes to AWS CodeCommit, and then let AWS CodePipeline orchestrate the creation of the AMI:

git add .
git commit -m "My first AMI"
git push origin master

AWS CodeBuild Implementation Details

 
While we wait for the AMI to be created, let’s see what’s changed in our AWS CodeBuild buildspec.yml file:

...
phases:
  ...
  build:
    commands:
      ...
      - ./packer build -color=false packer_cis.json | tee build.log
  post_build:
    commands:
      - egrep "${AWS_REGION}\:\sami\-" build.log | cut -d' ' -f2 > ami_id.txt
      # Packer doesn't return non-zero status; we must do that if Packer build failed
      - test -s ami_id.txt || exit 1
      - sed -i.bak "s/<<AMI-ID>>/$(cat ami_id.txt)/g" ami_builder_event.json
      - aws events put-events --entries file://ami_builder_event.json
      ...
artifacts:
  files:
    - ami_builder_event.json
    - build.log
  discard-paths: yes

In the build phase, we capture Packer output into a file named build.log. In the post_build phase, we take the following actions:

  1. Look up the AMI ID created by Packer and save its findings to a temporary file (ami_id.txt).
  2. Force AWS CodeBuild to fail if the AMI ID (ami_id.txt) is not found. This is required because Packer doesn’t fail if something goes wrong during the AMI creation process. We have to tell AWS CodeBuild to stop by informing it that an error occurred.
  3. If an AMI ID is found, we update the ami_builder_event.json file and then notify CloudWatch Events that the AMI creation process is complete.
  4. CloudWatch Events publishes a message to an SNS topic. Anyone subscribed to the topic will be notified in email that an AMI has been created.

Lastly, the new artifacts phase instructs AWS CodeBuild to upload files built during the build process (ami_builder_event.json and build.log) to the S3 bucket specified in the Outputs section of the CloudFormation template. These artifacts can then be used as an input artifact in any later stage in AWS CodePipeline.

For information about customizing the artifacts sequence of the buildspec.yml, see the Build Specification Reference for AWS CodeBuild.
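As a quick, optional sanity check once the pipeline has run, you can list the AMIs the builder has produced by filtering on the name prefix used in the Packer template (shown later in this post):

aws ec2 describe-images --owners self \
  --filters "Name=name,Values=Prod-CIS-Latest-AMZN-*" \
  --query 'Images[].{Id:ImageId,Name:Name,Created:CreationDate}' \
  --output table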

CloudWatch Events Implementation Details

 
CloudWatch Events allows you to extend the AMI builder to not only send email after the AMI has been created, but to hook up any of the supported targets to react to the AMI builder event. Because the build publishes an event, any actions you might take after AMI completion are decoupled from Packer itself, and you can plug in other actions as you see fit.

For more information about targets in CloudWatch Events, see the CloudWatch Events API Reference.

In this case, CloudWatch Events should receive the following event, match it with a rule we created through CloudFormation, and publish a message to SNS so that you can receive an email.

Example CloudWatch custom event

[
        {
            "Source": "com.ami.builder",
            "DetailType": "AmiBuilder",
            "Detail": "{ \"AmiStatus\": \"Created\"}",
            "Resources": [ "ami-12cd5guf" ]
        }
]

CloudWatch Events rule

{
  "detail-type": [
    "AmiBuilder"
  ],
  "source": [
    "com.ami.builder"
  ],
  "detail": {
    "AmiStatus": [
      "Created"
    ]
  }
}

Example SNS message sent in email

{
    "version": "0",
    "id": "f8bdede0-b9d7...",
    "detail-type": "AmiBuilder",
    "source": "com.ami.builder",
    "account": "<<aws_account_number>>",
    "time": "2017-04-28T17:56:40Z",
    "region": "eu-west-1",
    "resources": ["ami-112cd5guf "],
    "detail": {
        "AmiStatus": "Created"
    }
}
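If you want to verify the rule without waiting for a build, the CLI can evaluate an event pattern against a sample event locally. In this sketch, rule_pattern.json contains the rule shown above and sample_event.json contains an event shaped like the example message (both file names are assumptions); the command returns true when they match:

aws events test-event-pattern \
  --event-pattern file://rule_pattern.json \
  --event file://sample_event.json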

Packer Implementation Details

 
In addition to the build specification file, there are differences between the current version of the HashiCorp Packer template (packer_cis.json) and the one used in Part 1.

Variables

  "variables": {
    "vpc": "{{env `BUILD_VPC_ID`}}",
    "subnet": "{{env `BUILD_SUBNET_ID`}}",
         “ami_name”: “Prod-CIS-Latest-AMZN-{{isotime \”02-Jan-06 03_04_05\”}}”
  },
  • ami_name: Prefixes a name used by Packer to tag resources during the Builders sequence.
  • vpc and subnet: Environment variables defined by the CloudFormation stack parameters.

We no longer assume a default VPC is present and instead use the VPC and subnet specified in the CloudFormation parameters. CloudFormation configures the AWS CodeBuild project to use these values as environment variables. They are made available throughout the build process.

That allows for more flexibility should you need to change which VPC and subnet will be used by Packer to launch temporary resources.
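As an optional local test, you can export the same environment variables yourself and run Packer directly; the IDs below are placeholders, and this assumes Packer and AWS credentials are available on your workstation:

# Placeholders: substitute the VPC and subnet Packer should use
export BUILD_VPC_ID=vpc-0123456789abcdef0
export BUILD_SUBNET_ID=subnet-0123456789abcdef0

# Validate the template, then build it exactly as the buildspec does
packer validate packer_cis.json
packer build -color=false packer_cis.json | tee build.log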

Builders

  "builders": [{
    ...
    "ami_name": “{{user `ami_name`| clean_ami_name}}”,
    "tags": {
      "Name": “{{user `ami_name`}}”,
    },
    "run_tags": {
      "Name": “{{user `ami_name`}}",
    },
    "run_volume_tags": {
      "Name": “{{user `ami_name`}}",
    },
    "snapshot_tags": {
      "Name": “{{user `ami_name`}}",
    },
    ...
    "vpc_id": "{{user `vpc` }}",
    "subnet_id": "{{user `subnet` }}"
  }],

We now have new tag properties (*_tags) and a new function (clean_ami_name), and we launch temporary resources in the VPC and subnet specified in the environment variables. AMI names can only contain a certain set of ASCII characters. If the input in the project deviates from the expected characters (for example, it includes whitespace or slashes), Packer’s clean_ami_name function will fix it.

For more information, see functions on the HashiCorp Packer website.

Provisioners

  "provisioners": [
    {
        "type": "shell",
        "inline": [
            "sudo pip install ansible"
        ]
    }, 
    {
        "type": "ansible-local",
        "playbook_file": "ansible/playbook.yaml",
        "role_paths": [
            "ansible/roles/common"
        ],
        "playbook_dir": "ansible",
        "galaxy_file": "ansible/requirements.yaml"
    },
    {
      "type": "shell",
      "inline": [
        "rm .ssh/authorized_keys ; sudo rm /root/.ssh/authorized_keys"
      ]
    }
  ]

We used the shell provisioner to apply OS patches in Part 1. Now, we use shell to install Ansible on the target machine and ansible-local to import, install, and execute Ansible roles that make our target machine conform to our standards.

Finally, Packer uses the shell provisioner to remove temporary SSH keys before it creates an AMI from the temporary target EC2 instance.

Ansible Implementation Details

 
Ansible provides OS patching through a custom Common role that can be easily customized for other tasks.

The CIS Benchmark and CloudWatch Logs are implemented through two third-party Ansible roles that are defined in ansible/requirements.yaml, as seen in the Packer template.

The Ansible provisioner uses Ansible Galaxy to download these roles onto the target machine and execute them as instructed by ansible/playbook.yaml.
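If you want to inspect those roles before a build runs, you can resolve them locally with the same tooling. This is purely optional and assumes Ansible is installed on your workstation:

# Download the third-party roles referenced in the requirements file
ansible-galaxy install -r ansible/requirements.yaml -p ansible/roles/

# Check the playbook syntax without applying any changes
ansible-playbook ansible/playbook.yaml --syntax-check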

For information about how these components are organized, see the Playbook Roles and Include Statements in the Ansible documentation.

The following Ansible playbook (ansible/playbook.yaml) controls the execution order and custom properties:

---
- hosts: localhost
  connection: local
  gather_facts: true    # gather OS info that is made available for tasks/roles
  become: yes           # majority of CIS tasks require root
  vars:
    # CIS Controls whitepaper:  http://bit.ly/2mGAmUc
    # AWS CIS Whitepaper:       http://bit.ly/2m2Ovrh
    cis_level_1_exclusions:
    # 3.4.2 and 3.4.3 effectively blocks access to all ports to the machine
    ## This can break automation; ignoring it as there are stronger mechanisms than that
      - 3.4.2 
      - 3.4.3
    # CloudWatch Logs will be used instead of Rsyslog/Syslog-ng
    ## Same would be true if any other software doesn't support Rsyslog/Syslog-ng mechanisms
      - 4.2.1.4
      - 4.2.2.4
      - 4.2.2.5
    # Autofs is not installed in newer versions, let's ignore
      - 1.1.19
    # Cloudwatch Logs role configuration
    logs:
      - file: /var/log/messages
        group_name: "system_logs"
  roles:
    - common
    - anthcourtney.cis-amazon-linux
    - dharrisio.aws-cloudwatch-logs-agent

Both third-party Ansible roles can be easily configured through variables (vars). We use Ansible playbook variables to exclude CIS controls that don’t apply to our case and to instruct the CloudWatch Logs agent to stream the /var/log/messages log file to CloudWatch Logs.

If you need to add more OS or application logs, you can easily duplicate the playbook and make changes. The CloudWatch Logs agent will ship configured log messages to CloudWatch Logs.

For more information about parameters you can use to further customize the third-party roles, download the Ansible roles for the CloudWatch Logs Agent and CIS Amazon Linux from the Galaxy website.

Committing Changes

 
Now that Ansible and CloudWatch Events are configured as a part of the build process, committing any changes to the AWS CodeCommit Git repository will trigger a new AMI build process that can be followed through the AWS CodePipeline console.

When the build is complete, an email will be sent to the email address you provided as a part of the CloudFormation stack deployment. The email serves as notification that an AMI has been built and is ready for use.

Summary

 
We used AWS CodeCommit, AWS CodePipeline, AWS CodeBuild, Packer, and Ansible to build a pipeline that continuously builds new, hardened CIS AMIs. We used Amazon SNS so that email addresses subscribed to a SNS topic are notified upon completion of the AMI build.

By treating our AMI creation process as code, we can iterate and track changes over time. In this way, it’s no different from a software development workflow. With that in mind, software patches, OS configuration, and logs that need to be shipped to a central location are only a git commit away.

Next Steps

 
Here are some ideas to extend this AMI builder:

  • Hook up a Lambda function in CloudWatch Events to update your EC2 Auto Scaling configuration upon completion of the AMI build.
  • Use AWS CodePipeline parallel steps to build multiple Packer images.
  • Add a commit ID as a tag for the AMI you created.
  • Create a scheduled Lambda function through CloudWatch Events to clean up old AMIs based on timestamp (name or additional tag).
  • Implement Windows support for the AMI builder.
  • Create a cross-account or cross-region AMI build.

CloudWatch Events allows the AMI builder to decouple AMI creation from what happens afterward, so that you can easily add your own logic using targets (AWS Lambda, Amazon SQS, Amazon SNS) to react to events or recycle EC2 instances with the new AMI.

If you have questions or other feedback, feel free to leave it in the comments or contribute to the AMI Builder repo on GitHub.

[$] Specifying the kernel ABI

Post Syndicated from jake original https://lwn.net/Articles/726021/rss

At Open
Source Summit Japan
(OSSJ)—OSS is the new name for LinuxCon,
ContainerCon, and CloudOpen—Sasha Levin gave a talk on the kernel’s
application binary interface (ABI). There is an effort to create a kernel
ABI specification that has its genesis in a
discussion about fuzzers
at the 2016 Linux Plumbers Conference. Since
that time,
some progress on it has been made, so Levin described what the ABI is and the
benefits that would come from having a specification. He also covered
what has been done so far—and the
extensive work remaining to be done.

Protect Web Sites & Services Using Rate-Based Rules for AWS WAF

Post Syndicated from Jeff Barr original https://aws.amazon.com/blogs/aws/protect-web-sites-services-using-rate-based-rules-for-aws-waf/

AWS WAF (Web Application Firewall) helps to protect your application from many different types of application-layer attacks that involve requests that are malicious or malformed. As I showed you when I first wrote about this service (New – AWS WAF), you can define rules that match cross-site scripting, IP address, SQL injection, size, or content constraints:

When incoming requests match rules, actions are invoked. Actions can either allow, block, or simply count matches.

The existing rule model is powerful and gives you the ability to detect and respond to many different types of attacks. It does not, however, allow you to respond to attacks that simply consist of a large number of otherwise valid requests from a particular IP address. These requests might be a web-layer DDoS attack, a brute-force login attempt, or even a partner integration gone awry.

New Rate-Based Rules
Today we are adding Rate-based Rules to WAF, giving you control of when IP addresses are added to and removed from a blacklist, along with the flexibility to handle exceptions and special cases:

Blacklisting IP Addresses – You can blacklist IP addresses that make requests at a rate that exceeds a configured threshold rate.

IP Address Tracking – You can see which IP addresses are currently blacklisted.

IP Address Removal – IP addresses that have been blacklisted are automatically removed when they no longer make requests at a rate above the configured threshold.

IP Address Exemption – You can exempt certain IP addresses from blacklisting by using an IP address whitelist inside of a rate-based rule. For example, you might want to allow trusted partners to access your site at a higher rate.

Monitoring & Alarming – You can watch and alarm on CloudWatch metrics that are published for each rule.

You can combine new Rate-based Rules with WAF Conditions to implement sophisticated rate-limiting strategies. For example, you could use a Rate-based Rule and a WAF Condition that matches your login pages. This would allow you to impose a modest threshold on your login pages (to avoid brute-force password attacks) and allow a more generous one on your marketing or system status pages.

Thresholds are defined in terms of the number of incoming requests from a single IP address within a 5 minute period. Once this threshold is breached, additional requests from the IP address are blocked until the request rate falls below the threshold.

Using Rate-Based Rules
Here’s how you would define a Rate-based Rule that protects the /login portion of your site. Start by defining a WAF condition that matches the desired string in the URI of the page:

Then use this condition to define a Rate-based Rule (the rate limit is expressed in terms of requests within a 5-minute interval, but the blacklisting goes into effect as soon as the limit is breached):

With the condition and the rule in place, create a Web ACL (ProtectLoginACL) to bring it all together and to attach it to the AWS resource (a CloudFront distribution in this case):

Then attach the rule (ProtectLogin) to the Web ACL:

The resource is now protected in accord with the rule and the web ACL. You can monitor the associated CloudWatch metrics (ProtectLogin and ProtectLoginACL in this case). You could even create CloudWatch Alarms and use them to fire Lambda functions when a protection threshold is breached. The code could examine the offending IP address and make a complex, business-driven decision, perhaps adding a whitelisting rule that gives an extra-generous allowance to a trusted partner or to a user with a special payment plan.
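If you prefer to script your WAF configuration, the same building blocks are available through the CLI. The sketch below creates a rate-based rule and then lists the IP addresses it is currently blacklisting; the rule name and the 2,000-request limit are assumptions, and associating the string match condition and the Web ACL would be additional calls:

# Classic WAF requires a fresh change token for each mutating call
change_token=$(aws waf get-change-token --query ChangeToken --output text)

aws waf create-rate-based-rule --name ProtectLogin --metric-name ProtectLogin \
  --rate-key IP --rate-limit 2000 --change-token "$change_token"

# See which IP addresses the rule is currently blacklisting
aws waf get-rate-based-rule-managed-keys --rule-id <rule ID returned above>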

Available Now
The new, Rate-based Rules are available now and you can start using them today! Rate-based rules are priced the same as Regular rules; see the WAF Pricing page for more info.

Jeff;

Building Loosely Coupled, Scalable, C# Applications with Amazon SQS and Amazon SNS

Post Syndicated from Tara Van Unen original https://aws.amazon.com/blogs/compute/building-loosely-coupled-scalable-c-applications-with-amazon-sqs-and-amazon-sns/

 
Stephen Liedig, Solutions Architect

 

One of the many challenges professional software architects and developers face is how to make cloud-native applications scalable, fault-tolerant, and highly available.

Fundamental to your project success is understanding the importance of making systems highly cohesive and loosely coupled. That means considering the multi-dimensional facets of system coupling to support the distributed nature of the applications that you are building for the cloud.

By that, I mean addressing not only the application-level coupling (managing incoming and outgoing dependencies), but also considering the impacts of platform, spatial, and temporal coupling of your systems. Platform coupling relates to the interoperability, or lack thereof, of heterogeneous systems components. Spatial coupling deals with managing components at a network topology level or protocol level. Temporal, or runtime coupling, refers to the ability of a component within your system to do any kind of meaningful work while it is performing a synchronous, blocking operation.

The AWS messaging services, Amazon SQS and Amazon SNS, help you deal with these forms of coupling by providing mechanisms for:

  • Reliable, durable, and fault-tolerant delivery of messages between application components
  • Logical decomposition of systems and increased autonomy of components
  • Creating unidirectional, non-blocking operations, temporarily decoupling system components at runtime
  • Decreasing the dependencies that components have on each other through standard communication and network channels

Following on the recent topic, Building Scalable Applications and Microservices: Adding Messaging to Your Toolbox, in this post, I look at some of the ways you can introduce SQS and SNS into your architectures to decouple your components, and show how you can implement them using C#.

Walkthrough

To illustrate some of these concepts, consider a web application that processes customer orders. As good architects and developers, you have followed best practices and made your application scalable and highly available. Your solution included implementing load balancing, dynamic scaling across multiple Availability Zones, and persisting orders in a Multi-AZ Amazon RDS database instance, as in the following diagram.


In this example, the application is responsible for handling and persisting the order data, as well as dealing with increases in traffic for popular items.

One potential point of vulnerability in the order processing workflow is in saving the order in the database. The business expects that every order has been persisted into the database. However, any potential deadlock, race condition, or network issue could cause the persistence of the order to fail. Then the order is lost, with no recourse to restore it.

With good logging capability, you may be able to identify when an error occurred and which customer’s order failed. This wouldn’t allow you to “restore” the transaction, and by that stage, your customer is no longer your customer.

As illustrated in the following diagram, introducing an SQS queue helps improve your ordering application. Using the queue isolates the processing logic into its own component and runs it in a separate process from the web application. This, in turn, allows the system to be more resilient to spikes in traffic, while allowing work to be performed only as fast as necessary in order to manage costs.


In addition, you now have a mechanism for persisting orders as messages (with the queue acting as a temporary database), and have moved the scope of your transaction with your database further down the stack. In the event of an application exception or transaction failure, this ensures that the order processing can be retried or redirected to the Amazon SQS Dead Letter Queue (DLQ), for re-processing at a later stage. (See the recent post, Using Amazon SQS Dead-Letter Queues to Control Message Failure, for more information on dead-letter queues.)
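Wiring up a DLQ is a one-time piece of queue configuration. A minimal sketch with the AWS CLI might look like this; the queue names and the maxReceiveCount of 5 are assumptions:

# Create the dead-letter queue and look up its ARN
aws sqs create-queue --queue-name customer-orders-dlq
aws sqs get-queue-attributes --queue-url <customer-orders-dlq URL> --attribute-names QueueArn

# Point the main queue at the DLQ; messages received more than 5 times are moved there
aws sqs set-queue-attributes --queue-url <customer-orders URL> --attributes '{
  "RedrivePolicy": "{\"deadLetterTargetArn\":\"<customer-orders-dlq ARN>\",\"maxReceiveCount\":\"5\"}"
}'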

Scaling the order processing nodes

This change allows you now to scale the web application frontend independently from the processing nodes. The frontend application can continue to scale based on metrics such as CPU usage, or the number of requests hitting the load balancer. Processing nodes can scale based on the number of orders in the queue. Here is an example of scale-in and scale-out alarms that you would associate with the scaling policy.

Scale-out Alarm

aws cloudwatch put-metric-alarm --alarm-name AddCapacityToCustomerOrderQueue --metric-name ApproximateNumberOfMessagesVisible --namespace "AWS/SQS" \
  --statistic Average --period 300 --threshold 3 --comparison-operator GreaterThanOrEqualToThreshold --dimensions Name=QueueName,Value=customer-orders \
  --evaluation-periods 2 --alarm-actions <arn of the scale-out autoscaling policy>

Scale-in Alarm

aws cloudwatch put-metric-alarm --alarm-name RemoveCapacityFromCustomerOrderQueue --metric-name ApproximateNumberOfMessagesVisible --namespace "AWS/SQS" \
  --statistic Average --period 300 --threshold 1 --comparison-operator LessThanOrEqualToThreshold --dimensions Name=QueueName,Value=customer-orders \
  --evaluation-periods 2 --alarm-actions <arn of the scale-in autoscaling policy>

In the above example, use the ApproximateNumberOfMessagesVisible metric to discover the queue length and drive the scaling policy of the Auto Scaling group. Another useful metric is ApproximateAgeOfOldestMessage, when applications have time-sensitive messages and developers need to ensure that messages are processed within a specific time period.
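An alarm on that metric follows the same pattern as the alarms above; the 600-second threshold and the alarm name here are assumptions:

aws cloudwatch put-metric-alarm --alarm-name CustomerOrdersAgingAlarm --metric-name ApproximateAgeOfOldestMessage --namespace "AWS/SQS" \
  --statistic Maximum --period 300 --threshold 600 --comparison-operator GreaterThanOrEqualToThreshold --dimensions Name=QueueName,Value=customer-orders \
  --evaluation-periods 1 --alarm-actions <arn of your notification topic>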

Scaling the order processing implementation

On top of scaling at an infrastructure level using Auto Scaling, make sure to take advantage of the processing power of your Amazon EC2 instances by using as many of the available threads as possible. There are several ways to implement this. In this post, we build a Windows service that uses the BackgroundWorker class to process the messages from the queue.

Here’s a closer look at the implementation. In the first section of the consuming application, use a loop to continually poll the queue for new messages, and construct a ReceiveMessageRequest variable.

public static void PollQueue()
{
    while (_running)
    {
        Task<ReceiveMessageResponse> receiveMessageResponse;

        // Pull messages off the queue
        using (var sqs = new AmazonSQSClient())
        {
            const int maxMessages = 10;  // 1-10

            //Receiving a message
            var receiveMessageRequest = new ReceiveMessageRequest
            {
                // Get URL from Configuration
                QueueUrl = _queueUrl, 
                // The maximum number of messages to return. 
                // Fewer messages might be returned. 
                MaxNumberOfMessages = maxMessages, 
                // A list of attributes that need to be returned with message.
                AttributeNames = new List<string> { "All" },
                // Enable long polling. 
                // Time to wait for message to arrive on queue.
                WaitTimeSeconds = 5 
            };

            receiveMessageResponse = sqs.ReceiveMessageAsync(receiveMessageRequest);
        }

The WaitTimeSeconds property of the ReceiveMessageRequest specifies the duration (in seconds) that the call waits for a message to arrive in the queue before returning a response to the calling application. There are a few benefits to using long polling:

  • It reduces the number of empty responses by allowing SQS to wait until a message is available in the queue before sending a response.
  • It eliminates false empty responses by querying all (rather than a limited number) of the servers.
  • It returns messages as soon as any message becomes available.

For more information, see Amazon SQS Long Polling.
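Instead of setting WaitTimeSeconds on every request, you can also enable long polling once at the queue level so that every consumer benefits; 20 seconds is the maximum supported wait time:

aws sqs set-queue-attributes --queue-url <customer-orders URL> \
  --attributes ReceiveMessageWaitTimeSeconds=20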

After you have returned messages from the queue, you can start to process them by looping through each message in the response and invoking a new BackgroundWorker thread.

// Process messages
if (receiveMessageResponse.Result.Messages != null)
{
    foreach (var message in receiveMessageResponse.Result.Messages)
    {
        Console.WriteLine("Received SQS message, starting worker thread");

        // Create background worker to process message
        BackgroundWorker worker = new BackgroundWorker();
        worker.DoWork += (obj, e) => ProcessMessage(message);
        worker.RunWorkerAsync();
    }
}
else
{
    Console.WriteLine("No messages on queue");
}

The event handler, ProcessMessage, is where you implement business logic for processing orders. It is important to have a good understanding of how long a typical transaction takes so you can set a message VisibilityTimeout that is long enough to complete your operation. If order processing takes longer than the specified timeout period, the message becomes visible on the queue again. Other nodes may pick it up and process the same order twice, leading to unintended consequences.
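The default visibility timeout is set per queue (and can be overridden on each receive call). For example, if an order can take up to a couple of minutes to process, a 5-minute default leaves some headroom; the value here is an assumption you should size to your own workload:

aws sqs set-queue-attributes --queue-url <customer-orders URL> \
  --attributes VisibilityTimeout=300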

Handling Duplicate Messages

In order to manage duplicate messages, seek to make your processing application idempotent. In mathematics, idempotent describes a function that produces the same result if it is applied to itself:

f(x) = f(f(x))

No matter how many times you process the same message, the end result is the same (definition from Enterprise Integration Patterns: Designing, Building, and Deploying Messaging Solutions, Hohpe and Wolf, 2004).

There are several strategies you could apply to achieve this:

  • Create messages that have inherent idempotent characteristics. That is, they are non-transactional in nature and are unique at a specified point in time. Rather than saying “place new order for Customer A,” which adds a duplicate order to the customer, use “place order <orderid> on <timestamp> for Customer A,” which creates a single order no matter how often it is persisted.
  • Deliver your messages via an Amazon SQS FIFO queue, which provides the benefits of message sequencing, but also mechanisms for content-based deduplication. You can deduplicate using the MessageDeduplicationId property on the SendMessage request or by enabling content-based deduplication on the queue, which generates a hash for MessageDeduplicationId, based on the content of the message, not the attributes.
var sendMessageRequest = new SendMessageRequest
{
    QueueUrl = _queueUrl,
    MessageBody = JsonConvert.SerializeObject(order),
    MessageGroupId = Guid.NewGuid().ToString("N"),
    MessageDeduplicationId = Guid.NewGuid().ToString("N")
};
  • If using SQS FIFO queues is not an option, keep a message log of all messages attributes processed for a specified period of time, as an alternative to message deduplication on the receiving end. Verifying the existence of the message in the log before processing the message adds additional computational overhead to your processing. This can be minimized through low latency persistence solutions such as Amazon DynamoDB. Bear in mind that this solution is dependent on the successful, distributed transaction of the message and the message log.

Handling exceptions

Because of the distributed nature of SQS, the service does not automatically delete a message after delivering it. Therefore, you must explicitly delete the message from the queue after processing it, using the message ReceiptHandle property (see the following code example).

However, if at any stage you have an exception, avoid handling it as you normally would. The intention is to make sure that the message ends up back on the queue, so that you can gracefully deal with intermittent failures. Instead, log the exception to capture diagnostic information, and swallow it.

By not explicitly deleting the message from the queue, you can take advantage of the VisibilityTimeout behavior described earlier. Gracefully handle the message processing failure and make the unprocessed message available to other nodes to process.

In the event that subsequent retries fail, SQS automatically moves the message to the configured DLQ after the configured number of receives has been reached. You can further investigate why the order process failed. Most importantly, the order has not been lost, and your customer is still your customer.

private static void ProcessMessage(Message message)
{
    using (var sqs = new AmazonSQSClient())
    {
        try
        {
            Console.WriteLine("Processing message id: {0}", message.MessageId);

            // Implement messaging processing here
            // Ensure no downstream resource contention (parallel processing)
            // <your order processing logic in here…>
            Console.WriteLine("{0} Thread {1}: {2}", DateTime.Now.ToString("s"), Thread.CurrentThread.ManagedThreadId, message.MessageId);
            
            // Delete the message off the queue. 
            // Receipt handle is the identifier you must provide 
            // when deleting the message.
            var deleteRequest = new DeleteMessageRequest(_queueUrl, message.ReceiptHandle);
            sqs.DeleteMessageAsync(deleteRequest);
            Console.WriteLine("Processed message id: {0}", message.MessageId);

        }
        catch (Exception ex)
        {
            // Do nothing.
            // Swallow exception, message will return to the queue when 
            // visibility timeout has been exceeded.
            Console.WriteLine("Could not process message due to error. Exception: {0}", ex.Message);
        }
    }
}

Using SQS to adapt to changing business requirements

One of the benefits of introducing a message queue is that you can accommodate new business requirements without dramatically affecting your application.

If, for example, the business decided that all orders placed over $5000 are to be handled as a priority, you could introduce a new “priority order” queue. The way the orders are processed does not change. The only significant change to the processing application is to ensure that messages from the “priority order” queue are processed before the “standard order” queue.

The following diagram shows how this logic could be isolated in an “order dispatcher,” whose only purpose is to route order messages to the appropriate queue based on whether the order exceeds $5000. Nothing on the web application or the processing nodes changes other than the target queue to which the order is sent. The rates at which the two kinds of orders are processed can be tuned by modifying the poll rates and scaling settings that I have already discussed.

Extending the design pattern with Amazon SNS

Amazon SNS supports reliable publish-subscribe (pub-sub) scenarios and push notifications to known endpoints across a wide variety of protocols. It eliminates the need to periodically check or poll for new information and updates. SNS supports:

  • Reliable storage of messages for immediate or delayed processing
  • Publish / subscribe – direct, broadcast, targeted “push” messaging
  • Multiple subscriber protocols: Amazon SQS, HTTP, HTTPS, email, SMS, mobile push, and AWS Lambda

With these capabilities, you can provide parallel asynchronous processing of orders in the system and extend it to support any number of different business use cases without affecting the production environment. This is commonly referred to as a “fanout” scenario.

Rather than your web application pushing orders to a queue for processing, send a notification via SNS. The SNS messages are sent to a topic and then replicated and pushed to multiple SQS queues and Lambda functions for processing.
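The fanout wiring itself is a handful of calls: create the topic, then subscribe each consumer. The names below are assumptions; note that an SQS subscription also needs a queue policy allowing the topic to send to it, and a Lambda subscription needs a resource-based permission for SNS to invoke the function:

# Create the topic and capture its ARN
topic_arn=$(aws sns create-topic --name customer-orders --query TopicArn --output text)

# Fan out to the order processing queue and to an analytics Lambda function
aws sns subscribe --topic-arn "$topic_arn" --protocol sqs \
  --notification-endpoint <order processing queue ARN>
aws sns subscribe --topic-arn "$topic_arn" --protocol lambda \
  --notification-endpoint <analytics Lambda function ARN>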

As the diagram above shows, you have the development team consuming “live” data as they work on the next version of the processing application, or potentially using the messages to troubleshoot issues in production.

Marketing is consuming all order information, via a Lambda function that has subscribed to the SNS topic, inserting the records into an Amazon Redshift warehouse for analysis.

All of this, of course, is happening without affecting your order processing application.

Summary

While I haven’t dived deep into the specifics of each service, I have discussed how these services can be applied at an architectural level to build loosely coupled systems that facilitate multiple business use cases. I’ve also shown you how to use infrastructure and application-level scaling techniques, so you can get the most out of your EC2 instances.

One of the many benefits of using these managed services is how quickly and easily you can implement powerful messaging capabilities in your systems, and lower the capital and operational costs of managing your own messaging middleware.

Using Amazon SQS and Amazon SNS together can provide you with a powerful mechanism for decoupling application components. This should be part of design considerations as you architect for the cloud.

For more information, see the Amazon SQS Developer Guide and Amazon SNS Developer Guide. You’ll find tutorials on all the concepts covered in this post, and more. To get started, use the AWS Management Console or the SDK of your choice.

Happy messaging!

Sync vs. Backup vs. Storage

Post Syndicated from Yev original https://www.backblaze.com/blog/sync-vs-backup-vs-storage/

Cloud Sync vs. Cloud Backup vs. Cloud Storage

Google recently announced Backup and Sync, a new feature for Google Drive that allows users to select folders on their computer that they want to back up to their Google Drive account (note: these files count against your Google Drive storage limit). Whenever new backup services are announced, we get a lot of questions, so I thought we should take a minute to review the differences in cloud-based services.

What is the Cloud? Sync Vs Backup Vs Storage

There is still a lot of confusion in the space about what exactly the “cloud” is and how different services interact with it. When folks use a syncing and sharing service like Dropbox, Box, Google Drive, OneDrive or any of the others, they often assume those are acting as a cloud backup solution as well. Adding to the confusion, cloud storage services are often the backend for backup and sync services as well as standalone services. To help sort this out, we’ll define some of the terms below as they apply to a traditional computer set-up with a bunch of apps and data.

Cloud Sync (ex. Dropbox, iCloud Drive, OneDrive, Box, Google Drive) – these services sync folders on your computer to folders on other machines or to the cloud – allowing users to work from a folder or directory across devices. Typically these services have tiered pricing, meaning you pay for the amount of data you store with the service. If there is data loss, sometimes these services even have a rollback feature; of course, only files that are in the synced folders are available to be recovered.

Cloud Backup (ex. Backblaze Cloud Backup, Mozy, Carbonite) – these services work in the background automatically. The user does not need to take any action like setting up specific folders. Backup services typically back up any new or changed data on your computer to another location. Before the cloud took off, that location was primarily a CD or an external hard drive – but as cloud storage became more readily available it became the most popular storage medium. Typically these services have fixed pricing, and if there is a system crash or data loss, all backed up data is available for restore. In addition, these services have rollback features in case there is data loss / accidental file deletion.

Cloud Storage (ex. Backblaze B2, Amazon S3, Microsoft Azure) – these services are where many online backup and syncing and sharing services store data. Cloud storage providers typically serve as the endpoint for data storage. These services typically provide APIs, CLIs, and access points for individuals and developers to tie in their cloud storage offerings directly. These services are priced “per GB” meaning you pay for the amount of storage that you use. Since these services are designed for high-availability and durability, data can live solely on these services – though we still recommend having multiple copies of your data, just in case.

What Should You Use?

Backblaze strongly believes in a 3-2-1 Backup Strategy. A 3-2-1 strategy means having at least 3 total copies of your data, 2 of which are local but on different mediums (e.g. an external hard drive in addition to your computer’s local drive), and at least 1 copy offsite. The best setup is data on your computer, a copy on a hard drive that lives somewhere not inside your computer, and another copy with a cloud backup provider. Backblaze Cloud Backup is a great complement to other services, like Time Machine, Dropbox, and even the free tiers of cloud storage services.

What is The Difference Between Cloud Sync and Backup?

Let’s take a look at some sync setups that we see fairly frequently.

Example 1) Users have one folder on their computer that is designated for Dropbox, Google Drive, OneDrive, or one of the other syncing/sharing services. Users save or place data into those directories when they want them to appear on other devices. Often these users are using the free-tier of those syncing and sharing services and only have a few GB of data uploaded in them.

Example 2) Users are paying for extended storage for Dropbox, Google Drive, OneDrive, etc… and use those folders as the “Documents” folder – essentially working out of those directories. Files in that folder are available across devices, however, files outside of that folder (e.g. living on the computer’s desktop or anywhere else) are not synced or stored by the service.

What both examples are missing, however, is the backup of photos, movies, videos, and the rest of the data on their computer. That’s where cloud backup providers excel, by automatically backing up user data with little or no set-up, and no need for the dragging-and-dropping of files. Backblaze actually scans your hard drive to find all the data, regardless of where it might be hiding. The result is that all the user’s data is kept in the Backblaze cloud, and the portion of the data that is synced is also kept in that provider’s cloud – giving the user another layer of redundancy. Best of all, Backblaze will actually back up your Dropbox, iCloud Drive, Google Drive, and OneDrive folders.

Data Recovery

The most important feature to think about is how easy it is to get your data back from all of these services. With sync and share services, retrieving a lot of data, especially if you are in a high-data tier, can be cumbersome and take a while. Generally, the sync and share services only allow customers to download files over the Internet. If you are trying to download more than a couple of gigabytes of data, the process can take time and can be fraught with errors.

With cloud storage services, you can usually only retrieve data over the Internet as well, and you pay for both the storage and the egress of the data, so retrieving a large amount of data can be both expensive and time consuming.

Cloud backup services will enable you to download files over the internet too and can also suffer from long download times. At Backblaze we never want our customers to feel like we’re holding their data hostage, which is why we have a lot of restore options, including our Restore Return Refund policy, which allows people to restore their data via a USB Hard Drive, and then return that drive to us for a refund. Cloud sync providers do not provide this capability.

One popular data recovery use case we’ve seen when a person has a lot of data to restore is to download just the files that are needed immediately, and then order a USB Hard Drive restore for the remaining files that are not as time sensitive. The user gets all their files back in a few days, and their network is spared the download charges.

The bottom line is that all of these services have merit for different use-cases. Have questions about which is best for you? Sound off in the comments below!

The post Sync vs. Backup vs. Storage appeared first on Backblaze Blog | Cloud Storage & Cloud Backup.

AWS Marketplace Update – SaaS Contracts in Action

Post Syndicated from Jeff Barr original https://aws.amazon.com/blogs/aws/aws-marketplace-update-saas-contracts-in-action/

AWS Marketplace lets AWS customers find and use products and services offered by members of the AWS Partner Network (APN). Some marketplace offerings are billed on an hourly basis, many with a cost-saving annual option designed to line up with the procurement cycles of our enterprise customers. Other offerings are available in SaaS (Software as a Service) form and are billed based on consumption units specified by the seller. The SaaS model (described in New – SaaS subscriptions on AWS Marketplace) give sellers the flexibility to bill for actual usage: number of active hosts, number of requests, GB of log files processed, and so forth.

Recently we extended the SaaS model with the addition of SaaS contracts, which my colleague Brad Lyman introduced in his post, Announcing SaaS Contracts, a Feature to Simplify SaaS Procurement on AWS Marketplace. The contracts give our customers the opportunity to save money by setting up monthly subscriptions that can be expanded to cover a one-, two-, or three-year contract term, with automatic, configurable renewals. Sellers can provide services that require up-front payment or that offer discounts in exchange for a usage commitment.

Since Brad has already covered the seller side of this powerful and flexible new model, I would like to show you what it is like to purchase a SaaS contract. Let’s say that I want to use Splunk Cloud. I simply search for it as usual:

I click on Splunk Cloud and see that it is available in SaaS Contract form:

I can also see and review the pricing options, noting that pricing varies by location, index volume, and subscription duration:

I click on Continue. Since I do not have a contract with Splunk for this software, I’ll be redirected to the vendor’s site to create one as part of the process. I choose my location, index volume, and contract duration, opt for automatic renewal, and then click on Create Contract:

This sets up my subscription, and I need only set up my account with Splunk:

I click on Set Up Your Account and I am ready to move forward by setting up my custom URL on the Splunk site:

This feature is available now and you can start using it today.

Jeff;

 

New – Managed Device Authentication for Amazon WorkSpaces

Post Syndicated from Jeff Barr original https://aws.amazon.com/blogs/aws/new-managed-device-authentication-for-amazon-workspaces/

Amazon WorkSpaces allows you to access a virtual desktop in the cloud from the web and from a wide variety of desktop and mobile devices. This flexibility makes WorkSpaces ideal for environments where users have the ability to use their existing devices (often known as BYOD, or Bring Your Own Device). In these environments, organizations sometimes need the ability to manage the devices which can access WorkSpaces. For example, they may have to regulate access based on the client device operating system, version, or patch level in order to help meet compliance or security policy requirements.

Managed Device Authentication
Today we are launching device authentication for WorkSpaces. You can now use digital certificates to manage client access from Apple OSX and Microsoft Windows. You can also choose to allow or block access from iOS, Android, Chrome OS, web, and zero client devices. You can implement policies to control which device types you want to allow and which ones you want to block, with control all the way down to the patch level. Access policies are set for each WorkSpaces directory. After you have set the policies, requests to connect to WorkSpaces from a client device are assessed and either blocked or allowed. In order to make use of this feature, you will need to distribute certificates to your client devices using Microsoft System Center Configuration Manager or a mobile device management (MDM) tool.

Here’s how you set your access control options from the WorkSpaces Console:

Here’s what happens if a client is not authorized to connect:

 

Available Today
This feature is now available in all Regions where WorkSpaces is available.

Jeff;

 

Visualize and Monitor Amazon EC2 Events with Amazon CloudWatch Events and Amazon Kinesis Firehose

Post Syndicated from Karan Desai original https://aws.amazon.com/blogs/big-data/visualize-and-monitor-amazon-ec2-events-with-amazon-cloudwatch-events-and-amazon-kinesis-firehose/

Monitoring your AWS environment is important for security, performance, and cost control purposes. For example, by monitoring and analyzing API calls made to your Amazon EC2 instances, you can trace security incidents and gain insights into administrative behaviors and access patterns. The kinds of events you might monitor include console logins, Amazon EBS snapshot creation/deletion/modification, VPC creation/deletion/modification, and instance reboots.

In this post, I show you how to build a near real-time API monitoring solution for EC2 events using Amazon CloudWatch Events and Amazon Kinesis Firehose. Please be sure to have Amazon CloudTrail enabled in your account.

  • CloudWatch Events offers a near real-time stream of system events that describe changes in AWS resources. CloudWatch Events now supports Kinesis Firehose as a target.
  • Kinesis Firehose is a fully managed service for continuously capturing, transforming, and delivering data in minutes to storage and analytics destinations such as Amazon S3, Amazon Kinesis Analytics, Amazon Redshift, and Amazon Elasticsearch Service.

Walkthrough

For this walkthrough, you create a CloudWatch event rule that matches specific EC2 events such as:

  • Starting, stopping, and terminating an instance
  • Creating and deleting VPC route tables
  • Creating and deleting a security group
  • Creating, deleting, and modifying instance volumes and snapshots

Your CloudWatch event target is a Kinesis Firehose delivery stream that delivers this data to an Elasticsearch cluster, where you set up Kibana for visualization. Using this solution, you can easily load and visualize EC2 events in minutes without setting up complicated data pipelines.

Set up the Elasticsearch cluster

Create the Amazon ES domain in the Amazon ES console, or by using the create-elasticsearch-domain command in the AWS CLI.

This example uses the following configuration:

  • Domain Name: esLogSearch
  • Elasticsearch Version: 1
  • Instance Count: 2
  • Instance type: elasticsearch
  • Enable dedicated master: true
  • Enable zone awareness: true
  • Restrict Amazon ES to an IP-based access policy

Other settings are left as the defaults.
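
If you prefer to script this step instead of using the console, a minimal boto3 sketch might look like the following. The Elasticsearch version, instance types, account ID, and source IP shown here are placeholders that you would replace with your own values.

import json

import boto3

es = boto3.client('es')

# Placeholder access policy that restricts the domain to a single source IP.
access_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Principal": {"AWS": "*"},
        "Action": "es:*",
        "Resource": "arn:aws:es:us-east-1:111122223333:domain/esLogSearch/*",
        "Condition": {"IpAddress": {"aws:SourceIp": "203.0.113.10/32"}}
    }]
}

es.create_elasticsearch_domain(
    DomainName='esLogSearch',
    ElasticsearchVersion='5.1',  # assumption: use the version you selected
    ElasticsearchClusterConfig={
        'InstanceType': 'm4.large.elasticsearch',  # assumption
        'InstanceCount': 2,
        'DedicatedMasterEnabled': True,
        'DedicatedMasterType': 'm4.large.elasticsearch',  # assumption
        'DedicatedMasterCount': 3,
        'ZoneAwarenessEnabled': True
    },
    EBSOptions={'EBSEnabled': True, 'VolumeType': 'gp2', 'VolumeSize': 20},
    AccessPolicies=json.dumps(access_policy)
)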

Create a Kinesis Firehose delivery stream

In the Kinesis Firehose console, create a new delivery stream with Amazon ES as the destination. For detailed steps, see Create a Kinesis Firehose Delivery Stream to Amazon Elasticsearch Service.
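
The same delivery stream can also be created from code. Here is a rough boto3 sketch; the IAM role, S3 backup bucket, and domain ARN are placeholders for resources you would create separately, and the index name matches the one used in the Kibana step later on.

import boto3

firehose = boto3.client('firehose')

firehose.create_delivery_stream(
    DeliveryStreamName='ec2-events-to-es',  # hypothetical name
    ElasticsearchDestinationConfiguration={
        'RoleARN': 'arn:aws:iam::111122223333:role/firehose-es-delivery',  # placeholder
        'DomainARN': 'arn:aws:es:us-east-1:111122223333:domain/esLogSearch',  # placeholder
        'IndexName': 'log',
        'TypeName': 'event',
        'IndexRotationPeriod': 'OneDay',
        'RetryOptions': {'DurationInSeconds': 300},
        'S3BackupMode': 'FailedDocumentsOnly',
        'S3Configuration': {
            'RoleARN': 'arn:aws:iam::111122223333:role/firehose-es-delivery',  # placeholder
            'BucketARN': 'arn:aws:s3:::my-firehose-backup-bucket'  # placeholder
        }
    }
)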

Set up CloudWatch Events

Create a rule, and configure the event source and target. You can configure multiple event sources across several AWS resources, and specify a single event type or several.

In the CloudWatch console, choose Events.

For Service Name, choose EC2.

In Event Pattern Preview, choose Edit and copy the pattern below. For this walkthrough, I selected events that are specific to the EC2 API, but you can modify it to include events for any of your AWS resources.

 

{
  "source": [
    "aws.ec2"
  ],
  "detail-type": [
    "AWS API Call via CloudTrail"
  ],
  "detail": {
    "eventSource": [
      "ec2.amazonaws.com"
    ],
    "eventName": [
      "RunInstances",
      "StopInstances",
      "StartInstances",
      "CreateFlowLogs",
      "CreateImage",
      "CreateNatGateway",
      "CreateVpc",
      "DeleteKeyPair",
      "DeleteNatGateway",
      "DeleteRoute",
      "DeleteRouteTable",
      "CreateSnapshot",
      "DeleteSnapshot",
      "DeleteVpc",
      "DeleteVpcEndpoints",
      "DeleteSecurityGroup",
      "ModifyVolume",
      "ModifyVpcEndpoint",
      "TerminateInstances"
    ]
  }
}

The following screenshot shows what your event looks like in the console.

Next, choose Add target and select the delivery stream that you just created.
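
If you would rather automate this step, a sketch of the equivalent boto3 calls follows. It assumes the event pattern above has been saved locally as event_pattern.json, and that the delivery stream and an IAM role allowing CloudWatch Events to write to Firehose already exist; the names and ARNs are placeholders.

import boto3

events = boto3.client('events')

# Load the event pattern shown above.
with open('event_pattern.json') as f:
    event_pattern = f.read()

events.put_rule(
    Name='ec2-api-events',  # hypothetical rule name
    EventPattern=event_pattern,
    State='ENABLED',
    Description='Match selected EC2 API calls recorded by CloudTrail'
)

events.put_targets(
    Rule='ec2-api-events',
    Targets=[{
        'Id': 'firehose-delivery-stream',
        'Arn': 'arn:aws:firehose:us-east-1:111122223333:deliverystream/ec2-events-to-es',  # placeholder
        'RoleArn': 'arn:aws:iam::111122223333:role/events-to-firehose'  # placeholder
    }]
)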

Set up Kibana on the Elasticsearch cluster

Amazon ES provides a default installation of Kibana with every Amazon ES domain. You can find the Kibana endpoint on your domain dashboard in the Amazon ES console. You can restrict Amazon ES access to an IP-based access policy.

In the Kibana console, for Index name or pattern, type log. This is the name of the Elasticsearch index.

For Time-field name, choose @time.

To view the events, choose Discover.

The following chart demonstrates the API operations and the number of times that they have been triggered in the past 12 hours.

Summary

In this post, you created a continuous, near real-time solution to monitor various EC2 events, such as starting and stopping instances and creating VPCs. Likewise, you can build a continuous monitoring solution for all the API operations that are relevant to your daily AWS operations and resources.

With Kinesis Firehose as a new target for CloudWatch Events, you can retrieve, transform, and load system events to the storage and analytics destination of your choice in minutes, without setting up complicated data pipelines.

If you have any questions or suggestions, please comment below.


Additional Reading

Learn how to build a serverless architecture to analyze Amazon CloudFront access logs using AWS Lambda, Amazon Athena, and Amazon Kinesis Analytics

 

 

 

AWS Named as a Leader in Gartner’s Infrastructure as a Service (IaaS) Magic Quadrant for 7th Consecutive Year

Post Syndicated from Jeff Barr original https://aws.amazon.com/blogs/aws/aws-named-as-a-leader-in-gartners-infrastructure-as-a-service-iaas-magic-quadrant-for-7th-consecutive-year/

Every product planning session at AWS revolves around customers. We do our best to listen and to learn, and to use what we hear to build the roadmaps for future development. Approximately 90% of the items on the roadmap originate with customer requests and are designed to meet the specific needs and requirements that customers share with us.

I strongly believe that this customer-driven innovation has helped us to secure the top-right corner of the Leaders quadrant in Gartner’s Magic Quadrant for Cloud Infrastructure as a Service (IaaS) for the 7th consecutive year, earning the highest placement for ability to execute and the furthest for completeness of vision:

To learn more, read the full report. It contains a lot of detail and is a great summary of the features and factors that our customers examine when choosing a cloud provider.

Jeff;

FreeNAS 11.0 is Now Here

Post Syndicated from ris original https://lwn.net/Articles/725509/rss

FreeNAS 11.0 has been released. “This version brings new virtualization and object storage features to the World’s Most Popular Open Source Storage Operating System. FreeNAS 11.0 adds bhyve virtual machines to its popular SAN/NAS, jails, and plugins, letting you host web-scale VMs on your FreeNAS box. It also gives users S3-compatible object storage services, which turns your FreeNAS box into an S3-compatible server, letting you avoid reliance on the cloud.” LWN looked at FreeNAS in February 2015.

New – Auto Scaling for Amazon DynamoDB

Post Syndicated from Jeff Barr original https://aws.amazon.com/blogs/aws/new-auto-scaling-for-amazon-dynamodb/

Amazon DynamoDB has more than one hundred thousand customers, spanning a wide range of industries and use cases. These customers depend on DynamoDB’s consistent performance at any scale and presence in 16 geographic regions around the world. A recent trend we’ve been observing is customers using DynamoDB to power their serverless applications. This is a good match: with DynamoDB, you don’t have to think about things like provisioning servers, performing OS and database software patching, or configuring replication across availability zones to ensure high availability – you can simply create tables and start adding data, and let DynamoDB handle the rest.

DynamoDB provides a provisioned capacity model that lets you set the amount of read and write capacity required by your applications. While this frees you from thinking about servers and enables you to change provisioning for your table with a simple API call or button click in the AWS Management Console, customers have asked us how we can make managing capacity for DynamoDB even easier.

Today we are introducing Auto Scaling for DynamoDB to help automate capacity management for your tables and global secondary indexes. You simply specify the desired target utilization and provide upper and lower bounds for read and write capacity. DynamoDB then monitors throughput consumption using Amazon CloudWatch alarms and adjusts provisioned capacity up or down as needed. Auto Scaling will be on by default for all new tables and indexes, and you can also configure it for existing ones.

Even if you’re not around, DynamoDB Auto Scaling will be monitoring your tables and indexes to automatically adjust throughput in response to changes in application traffic. This can make it easier to administer your DynamoDB data, help you maximize availability for your applications, and help you reduce your DynamoDB costs.

Let’s see how it works…

Using Auto Scaling
The DynamoDB Console now suggests a reasonable set of default parameters when you create a new table. You can accept them as-is, or you can uncheck Use default settings and enter your own parameters:

Here’s how you enter your own parameters:

Target utilization is expressed as the ratio of consumed capacity to provisioned capacity. The parameters above leave enough headroom for consumed capacity to double during a burst of read or write requests; for example, a target utilization of 50% keeps provisioned capacity at roughly twice the consumed capacity (read Capacity Unit Calculations to learn more about the relationship between DynamoDB read and write operations and provisioned capacity). Changes in provisioned capacity take place in the background.

Auto Scaling in Action
In order to see this important new feature in action, I followed the directions in the Getting Started Guide. I launched a fresh EC2 instance, installed (sudo pip install boto3) and configured (aws configure) the AWS SDK for Python. Then I used the code in the Python and DynamoDB section to create and populate a table with some data, and manually configured the table for 5 units each of read and write capacity.

I took a quick break in order to have clean, straight lines for the CloudWatch metrics so that I could show the effect of Auto Scaling. Here’s what the metrics look like before I started to apply a load:

I modified the code in Step 3 to continually issue queries for random years in the range of 1920 to 2007, ran a single copy of the code, and checked the read metrics a minute or two later:
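
My modified query loop was, roughly, the sketch below. It assumes the Movies table and schema from the Getting Started Guide; adjust the region, table name, and pause for your own setup.

import random
import time

import boto3
from boto3.dynamodb.conditions import Key

dynamodb = boto3.resource('dynamodb', region_name='us-west-2')  # assumption: your region
table = dynamodb.Table('Movies')  # table from the Getting Started Guide

# Continually query for random years to generate a steady read load.
while True:
    year = random.randint(1920, 2007)
    response = table.query(KeyConditionExpression=Key('year').eq(year))
    print(year, response['Count'])
    time.sleep(0.05)  # shrink or remove the pause to push consumption higher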

The consumed capacity is higher than the provisioned capacity, resulting in a large number of throttled reads. Time for Auto Scaling!

I returned to the console and clicked on the Capacity tab for my table. Then I clicked on Read capacity, accepted the default values, and clicked on Save:

DynamoDB created a new IAM role (DynamoDBAutoscaleRole) and a pair of CloudWatch alarms to manage the Auto Scaling of read capacity:

DynamoDB Auto Scaling will manage the thresholds for the alarms, moving them up and down as part of the scaling process. The first alarm was triggered and the table state changed to Updating while additional read capacity was provisioned:

The change was visible in the read metrics within minutes:

I started a couple of additional copies of my modified query script and watched as additional capacity was provisioned, as indicated by the red line:

I killed all of the scripts and turned my attention to other things while waiting for the scale-down alarm to trigger. Here’s what I saw when I came back:

The next morning I checked my Scaling activities and saw that the alarm had triggered several more times overnight:

This was also visible in the metrics:

Until now, you would prepare for this situation by setting your read capacity well above your expected usage and paying for the excess capacity (the space between the blue line and the red line). Or, you might set it too low, forget to monitor it, and run out of capacity when traffic picked up. With Auto Scaling you can get the best of both worlds: an automatic response when an increase in demand suggests that more capacity is needed, and another automated response when the capacity is no longer needed.

Things to Know
DynamoDB Auto Scaling is designed to accommodate request rates that vary in a somewhat predictable, generally periodic fashion. If you need to accommodate unpredictable bursts of read activity, you should use Auto Scaling in combination with DAX (read Amazon DynamoDB Accelerator (DAX) – In-Memory Caching for Read-Intensive Workloads to learn more). Also, the AWS SDKs will detect throttled read and write requests and retry them after a suitable delay.

I mentioned the DynamoDBAutoscaleRole earlier. This role provides Auto Scaling with the privileges that it needs to have in order for it to be able to scale your tables and indexes up and down. To learn more about this role and the permissions that it uses, read Grant User Permissions for DynamoDB Auto Scaling.

Auto Scaling has complete CLI and API support, including the ability to enable and disable the Auto Scaling policies. If you have some predictable, time-bound spikes in traffic, you can programmatically disable an Auto Scaling policy, provision higher throughput for a set period of time, and then enable Auto Scaling again later.
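
DynamoDB Auto Scaling is built on Application Auto Scaling, so the API calls look roughly like the sketch below. The table name, capacity bounds, and policy name are placeholders; depending on how your account is set up, you may also need to pass the ARN of the DynamoDBAutoscaleRole described earlier.

import boto3

autoscaling = boto3.client('application-autoscaling')

# Register the table's read capacity as a scalable target (placeholder bounds).
autoscaling.register_scalable_target(
    ServiceNamespace='dynamodb',
    ResourceId='table/Movies',  # placeholder table name
    ScalableDimension='dynamodb:table:ReadCapacityUnits',
    MinCapacity=5,
    MaxCapacity=500
)

# Attach a target tracking policy that aims for 50% read utilization.
autoscaling.put_scaling_policy(
    PolicyName='movies-read-target-tracking',  # hypothetical name
    ServiceNamespace='dynamodb',
    ResourceId='table/Movies',
    ScalableDimension='dynamodb:table:ReadCapacityUnits',
    PolicyType='TargetTrackingScaling',
    TargetTrackingScalingPolicyConfiguration={
        'TargetValue': 50.0,
        'PredefinedMetricSpecification': {
            'PredefinedMetricType': 'DynamoDBReadCapacityUtilization'
        }
    }
)

Deleting the policy (or deregistering the scalable target), provisioning throughput manually, and then re-creating the policy is one way to handle the predictable spikes mentioned above.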

As noted on the Limits in DynamoDB page, you can increase provisioned capacity as often as you would like and as high as you need (subject to per-account limits that we can increase on request). You can decrease capacity up to nine times per day for each table or global secondary index.

You pay for the capacity that you provision, at the regular DynamoDB prices. You can also purchase DynamoDB Reserved Capacity for further savings.

Available Now
This feature is available now in all regions and you can start using it today!

Jeff;

Manage Instances at Scale without SSH Access Using EC2 Run Command

Post Syndicated from Jeff Barr original https://aws.amazon.com/blogs/aws/manage-instances-at-scale-without-ssh-access-using-ec2-run-command/

The guest post below, written by Ananth Vaidyanathan (Senior Product Manager for EC2 Systems Manager) and Rich Urmston (Senior Director of Cloud Architecture at Pegasystems) shows you how to use EC2 Run Command to manage a large collection of EC2 instances without having to resort to SSH.

Jeff;


Enterprises often have several managed environments and thousands of Amazon EC2 instances. It’s important to manage systems securely, without the headaches of Secure Shell (SSH). Run Command, part of Amazon EC2 Systems Manager, allows you to run remote commands on instances (or groups of instances using tags) in a controlled and auditable manner. It’s been a nice added productivity boost for Pega Cloud operations, which rely daily on Run Command services.

You can control Run Command access through standard IAM roles and policies, define documents to take input parameters, and control the S3 bucket used to return command output. You can also share your documents with other AWS accounts, or with the public. All in all, Run Command provides a nice set of remote management features.

Better than SSH
Here’s why Run Command is a better option than SSH and why Pegasystems has adopted it as their primary remote management tool:

Run Command Takes Less Time – Securely connecting to an instance requires a few steps, such as finding a jump box to connect to or an IP address to whitelist. With Run Command, cloud ops engineers can invoke commands directly from their laptops and never have to find keys or even instance IDs. Instead, system security relies on AWS authentication, IAM roles, and policies.

Run Command Operations are Fully Audited – With SSH, there is no real control over what users can do, nor is there an audit trail. With Run Command, every invoked operation is audited in CloudTrail, including information on the invoking user, the instances on which the command was run, the parameters, and the operation status. You have full control and the ability to restrict which functions engineers can perform on a system.

Run Command Has No SSH Keys to Manage – Run Command leverages standard AWS credentials, API keys, and IAM policies. Through integration with a corporate auth system, engineers can interact with systems based on their corporate credentials and identity.

Run Command can Manage Multiple Systems at the Same Time – Simple tasks such as looking at the status of a Linux service or retrieving a log file across a fleet of managed instances is cumbersome using SSH. Run Command allows you to specify a list of instances by IDs or tags, and invokes your command, in parallel, across the specified fleet. This provides great leverage when troubleshooting or managing more than the smallest Pega clusters.

Run Command Makes Automating Complex Tasks Easier – Standardizing operational tasks requires detailed procedure documents or scripts describing the exact commands. Managing or deploying these scripts across the fleet is cumbersome. Run Command documents provide an easy way to encapsulate complex functions, and handle document management and access controls. When combined with AWS Lambda, documents provide a powerful automation platform to handle any complex task.

Example – Restarting a Docker Container
Here is an example of a simple document used to restart a Docker container. It takes one parameter: the name of the Docker container to restart. It uses the AWS-RunShellScript method to invoke the command. The output is collected automatically by the service and returned to the caller. For an example of the latest document schema, see Creating Systems Manager Documents.

{
  "schemaVersion":"1.2",
  "description":"Restart the specified docker container.",
  "parameters":{
    "param":{
      "type":"String",
      "description":"(Required) name of the container to restart.",
      "maxChars":1024
    }
  },
  "runtimeConfig":{
    "aws:runShellScript":{
      "properties":[
        {
          "id":"0.aws:runShellScript",
          "runCommand":[
            "docker restart {{param}}"
          ]
        }
      ]
    }
  }
}
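
As a rough illustration (not part of the Pega tooling), invoking a document like this directly from the AWS SDK for Python might look like the sketch below. The instance ID is a placeholder, and the document name assumes you saved the JSON above as a document called docker-restart.

import boto3

ssm = boto3.client('ssm')

# Run the docker-restart document (shown above) on a single instance.
response = ssm.send_command(
    InstanceIds=['i-0123456789abcdef0'],  # placeholder instance ID
    DocumentName='docker-restart',        # hypothetical document name
    Parameters={'param': ['pega-web']},   # container name parameter
    OutputS3BucketName='my-bucket',       # optional: capture output in S3
    Comment='Restart the pega-web container'
)

command_id = response['Command']['CommandId']

# Check the status of the invocation (in practice, poll until it completes).
result = ssm.get_command_invocation(
    CommandId=command_id,
    InstanceId='i-0123456789abcdef0'
)
print(result['Status'], result.get('StandardOutputContent', ''))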

Putting Run Command into practice at Pegasystems
The Pegasystems provisioning system sits on AWS CloudFormation, which is used to deploy and update Pega Cloud resources. Layered on top of it is the Pega Provisioning Engine, a serverless, Lambda-based service that manages a library of CloudFormation templates and Ansible playbooks.

A Configuration Management Database (CMDB) tracks all the configurations details and history of every deployment and update, and lays out its data using a hierarchical directory naming convention. The following diagram shows how the various systems are integrated:

For cloud system management, Pega operations uses a command line version called cuttysh and a graphical version based on the Pega 7 platform, called the Pega Operations Portal. Both tools allow you to browse the CMDB of deployed environments, view configuration settings, and interact with deployed EC2 instances through Run Command.

CLI Walkthrough
Here is a CLI walkthrough for looking into a customer deployment and interacting with instances using Run Command.

Launching the cuttysh tool brings you to the root of the CMDB and a list of the provisioned customers:

% cuttysh
d CUSTA
d CUSTB
d CUSTC
d CUSTD

You interact with the CMDB using standard Linux shell commands, such as cd, ls, cat, and grep. Items prefixed with s are services that have viewable properties. Items prefixed with d are navigable subdirectories in the CMDB hierarchy.

In this example, change directories into customer CUSTB’s portion of the CMDB hierarchy, and then further into a provisioned Pega environment called env1, under the Dev network. The tool displays the artifacts that are provisioned for that environment. These entries map to provisioned CloudFormation templates.

> cd CUSTB
/ROOT/CUSTB/us-east-1 > cd DEV/env1

The ls -l command shows the version of the provisioned resources. These version numbers map back to source control–managed artifacts for the CloudFormation, Ansible, and other components that compose a version of the Pega Cloud.

/ROOT/CUSTB/us-east-1/DEV/env1 > ls -l
s 1.2.5 RDSDatabase 
s 1.2.5 PegaAppTier 
s 7.2.1 Pega7 

Now, use Run Command to interact with the deployed environments. To do this, use the attach command and specify the service with which to interact. In the following example, you attach to the Pega Web Tier. Using the information in the CMDB and instance tags, the CLI finds the corresponding EC2 instances and displays some basic information about them. This deployment has three instances.

/ROOT/CUSTB/us-east-1/DEV/env1 > attach PegaWebTier
 # ID         State  Public Ip    Private Ip  Launch Time
 0 i-0cf0e84 running 52.63.216.42 10.96.15.70 2017-01-16 
 1 i-0043c1d running 53.47.191.22 10.96.15.43 2017-01-16 
 2 i-09b879e running 55.93.118.27 10.96.15.19 2017-01-16 

From here, you can use the run command to invoke Run Command documents. In the following example, you run the docker-ps document against instance 0 (the first one on the list). EC2 executes the command and returns the output to the CLI, which in turn shows it.

/ROOT/CUSTB/us-east-1/DEV/env1 > run 0 docker-ps
. . 
CONTAINER ID IMAGE             CREATED      STATUS        NAMES
2f187cc38c1  pega-7.2         10 weeks ago  Up 8 weeks    pega-web

Using the same command and some of the other documents that have been defined, you can restart a Docker container or even pull back the contents of a file to your local system. When you get a file, Run Command also leaves a copy in an S3 bucket in case you want to pass the link along to a colleague.

/ROOT/CUSTB/us-east-1/DEV/env1 > run 0 docker-restart pega-web
..
pega-web

/ROOT/CUSTB/us-east-1/DEV/env1 > run 0 get-file /var/log/cfn-init-cmd.log
. . . . . 
get-file

Data has been copied locally to: /tmp/get-file/i-0563c9e/data
Data is also available in S3 at: s3://my-bucket/CUSTB/cuttysh/get-file/data

Now, leverage the Run Command ability to do more than one thing at a time. In the following example, you attach to a deployment with three running instances and want to see the uptime for each instance. Using the par (parallel) option for run, the CLI tells Run Command to execute the uptime document on all instances in parallel.

/ROOT/CUSTB/us-east-1/DEV/env1 > run par uptime
 …
Output for: i-006bdc991385c33
 20:39:12 up 15 days, 3:54, 0 users, load average: 0.42, 0.32, 0.30

Output for: i-09390dbff062618
 20:39:12 up 15 days, 3:54, 0 users, load average: 0.08, 0.19, 0.22

Output for: i-08367d0114c94f1
 20:39:12 up 15 days, 3:54, 0 users, load average: 0.36, 0.40, 0.40

Commands are complete.
/ROOT/PEGACLOUD/CUSTB/us-east-1/PROD/prod1 > 

Summary
Run Command improves productivity by giving you faster access to systems and the ability to run operations across a group of instances. Pega Cloud operations has integrated Run Command with other operational tools to provide a clean and secure method for managing systems. This greatly improves operational efficiency, and gives greater control over who can do what in managed deployments. The Pega continual improvement process regularly assesses why operators need access, and turns those operations into new Run Command documents to be added to the library. In fact, their long-term goal is to stop deploying cloud systems with SSH enabled.

If you have any questions or suggestions, please leave a comment for us!

— Ananth and Rich

Introducing the Self-Service Business Associate Addendum

Post Syndicated from Chad Woolf original https://aws.amazon.com/blogs/security/introducing-the-self-service-business-associate-addendum/

HIPAA logo

Today, we made available a new feature in AWS Artifact (our auditing and compliance portal) that enables you to review, accept, and track the status of your Business Associate Addendum (BAA). With this new feature, you can accept the terms of a BAA online, and instantly designate an AWS account as a “HIPAA Account” for use with protected health information (PHI) under the U.S. Health Insurance Portability and Accountability Act (HIPAA). In addition, you can sign in to AWS Artifact to confirm that your account is designated as a HIPAA Account, and review the terms of the BAA for that account. If you are no longer using a designated HIPAA Account in connection with PHI, you can remove that designation using the AWS Artifact interface.

Today’s release addresses two key customer needs in particular: (1) the need to enter into a BAA quickly, and (2) the need to easily track and control whether an AWS account is designated as a HIPAA Account under a BAA.

The BAA is the first specialized industry agreement that AWS is making available online. We chose to launch with the BAA as a commitment to AWS customer organizations who are reinventing the way healthcare is researched and delivered with the cloud. Many AWS customers have great stories to tell as we work together to use technology to advance the healthcare industry.

If you already have a BAA with AWS, or if you are considering designing or migrating a new solution that will create, receive, maintain, or transmit PHI on AWS, you can use AWS Artifact to manage your HIPAA Accounts today. As with all AWS Artifact features, there are no additional fees for using AWS Artifact to review, accept, and manage BAAs online.

– Chad

Balancing Convenience and Privacy

Post Syndicated from Ahin Thomas original https://www.backblaze.com/blog/privacy-vs-convenience/

balancing convenience and privacy

In early January of this year, in a conference room with a few other colleagues, we needed to decide how to balance convenience and privacy for our customers. The context: our team was earnestly finalizing and prioritizing the launch features of our revamped Business Backup product. In the process, we introduced a piece of functionality that we call “Groups.” A Group is a mechanism that centralizes payment and simplifies management for multiple Backblaze users in a given organization or business. As with many services there were tradeoffs, but this one proved thornier than most.

The Trade-off Between Convenience and Privacy

The problem started as we considered the possibility of having a “Managed” Group. The concept is simple enough: Centralized billing is good, but there are clear use cases where a user would like to have someone act on their behalf. For instance, a business may want a System Administrator to create/manage restores on behalf of a group of employees. We have had many instances of someone from the home office ordering a hard drive restore for an employee in the field. Similarly, a Managed Service Provider (MSP) might provide, and potentially charge for, the service of creating/managing restores for their customers. In short, the idea of having an Administrator manage a defined collection of users (i.e. a Group) was compelling and added a level of convenience.

Great. It’s decided then: we need to introduce the concept of a Managed Group. And we’ll also have Unmanaged Groups. You can have as many Groups of either kind as you like; we’ll let the user decide!

Here’s the problem: The Managed Group feature could easily have been used for evil. For example, an overeager Administrator could restore an employee’s files, at any time, for any reason – legitimate or nefarious. This felt wrong; we’re a backup company, not a spyware company.

This is when the discussion got more interesting. By adding a convenience feature, we realized that there was potential for user privacy to be violated. As we worked through the use cases, we faced potential conflict between two of our guiding principles:

  • Make backup astonishingly easy. Whether you are a individual, family, or business (or some combination), we want to make your life easier.
  • Don’t be evil. With great data storage comes great responsibility. We are the custodians of sensitive data and take that seriously.

So how best to balance a feature that customers clearly want while enabling sane protections for all users? It was an interesting question internally – one that took a fair number of meetings, hallway conversations, and email exchanges to get right.

Enabling Administration While Safeguarding Team Privacy

Management can be turned on for any Group at the time of Group Creation. As mentioned above, one Administrator can have as many Groups as desired and those Groups can be a mix of Managed and Unmanaged.

But there’s an interesting wrinkle – if Management is enabled, potential members of that Group are told that the feature is enabled before they join the Group.

Backblaze for Business Group Invite

We’ve, in plain terms, disclosed what is happening before the person starts backing up. If you read that and choose to start backing up, then you have been armed with full information.

Unfortunately, life isn’t that cut and dried. What if your company selected Backblaze and insists that everyone join the Group? Sure, you were told there are Administrators. Fine, my Administrator is supposed to act in the constructive interest of the Group. But what if the Admin is, as the saying goes, “for badness”?

Our solution, while seemingly innocuous, felt like it introduced a level of transparency and auditability that made us comfortable moving forward. Before an Administrator can do a restore on a Group Member’s behalf, the Admin is presented with a pop up that looks like this:

Backblaze for Business Restore Notification

If the Admin is going to create a restore on a user’s behalf, then that user will be notified of the activity. A less-than-well-intentioned Admin will have some reluctance if he knows the user will receive an email. Since permission for this type of activity was granted when the individual joined the Group, we do allow the Admin to proceed with the restore operation without further approval (convenience).

However, the user will get notified and can raise any questions or concerns as desired. There are no false positives: if the user gets an email, it means an Admin initiated a restore of data from the user’s account. In addition, because the mechanism is email, it creates an audit trail for the company. If there are users who don’t want the alerts, we recommend simply creating an email filter rule and putting them into a folder (in case you do want them some day).

Customer Adoption

The struggle for us was to strike the right balance between privacy and convenience. Specifically, we wanted to empower our users to set the mix where it is appropriate for them. In the case of Groups, it’s been interesting to see that 93% of Groups are of the “Managed” variety.

More importantly to us, we get consistently good feedback about the notification mechanisms in place. Even for organizations where one Admin may be taking a number of legitimate actions, we’re told that the notifications are appreciated in the spirit that they are intended. We’ll continue to solicit feedback and analyze usage to find ways to improve all of our features. But hearing and seeing customer satisfaction is a positive indicator that we’ve struck the appropriate balance between convenience and privacy.

The late 20th century philosopher, Judge Smails, once posited “the most important decision you can make right now is what do you stand for…? Goodness… or badness?”

We choose goodness. How do you think we did?

The post Balancing Convenience and Privacy appeared first on Backblaze Blog | Cloud Storage & Cloud Backup.

AWS GovCloud (US) Heads East – New Region in the Works for 2018

Post Syndicated from Jeff Barr original https://aws.amazon.com/blogs/aws/aws-govcloud-us-heads-east-new-region-in-the-works-for-2018/

AWS GovCloud (US) gives AWS customers a place to host sensitive data and regulated workloads in the AWS Cloud. The first AWS GovCloud (US) Region was launched in 2011 and is located on the west coast of the US.

I’m happy to announce that we are working on a second Region that we expect to open in 2018. The upcoming AWS GovCloud (US-East) Region will provide customers with added redundancy, data durability, and resiliency, and will also provide additional options for disaster recovery.

Like the existing region, which we now call AWS GovCloud (US-West), the new region will be isolated and meet top US government compliance requirements including International Traffic in Arms Regulations (ITAR), NIST standards, Federal Risk and Authorization Management Program (FedRAMP) Moderate and High, Department of Defense Impact Levels 2-4, DFARs, IRS1075, and Criminal Justice Information Services (CJIS) requirements. Visit the GovCloud (US) page to learn more about the compliance regimes that we support.

Government agencies and the IT contractors that serve them were early adopters of AWS GovCloud (US), as were companies in regulated industries. These organizations are able to enjoy the flexibility and cost-effectiveness of the public cloud while benefiting from the isolation and data protection offered by a region designed and built to meet their regulatory needs and to help them meet their compliance requirements. Here’s a small sample from our customer base:

Federal (US) Government – Department of Veterans Affairs, General Services Administration 18F (Digital Services Delivery), NASA JPL, Defense Digital Service, United States Air Force, United States Department of Justice.

Regulated Industries – CSRA, Talen Energy, Cobham Electronics.

SaaS and Solution Providers – FIGmd, Blackboard, Splunk, GitHub, Motorola.

Federal, state, and local agencies that want to move their existing applications to the AWS Cloud can take advantage of the AWS Cloud Adoption Framework (CAF) offered by AWS Professional Services.

Jeff;