Tag Archives: AWS Certificate Manager

Implementing Default Directory Indexes in Amazon S3-backed Amazon CloudFront Origins Using Lambda@Edge

Post Syndicated from Ronnie Eichler original https://aws.amazon.com/blogs/compute/implementing-default-directory-indexes-in-amazon-s3-backed-amazon-cloudfront-origins-using-lambdaedge/

With the recent launch of Lambda@Edge, it’s now possible for you to provide even more robust functionality to your static websites. Amazon CloudFront is a content delivery network (CDN) service. In this post, I show how you can use Lambda@Edge along with the CloudFront origin access identity (OAI) for Amazon S3 and still provide simple URLs (such as www.example.com/about/ instead of www.example.com/about/index.html).

Background

Amazon S3 is a great platform for hosting a static website. You don’t need to worry about managing servers or underlying infrastructure—you just publish your static content to an S3 bucket. S3 provides a DNS name such as <bucket-name>.s3-website-<AWS-region>.amazonaws.com. Use this name for your website by creating a CNAME record in your domain’s DNS environment (or Amazon Route 53) as follows:

www.example.com -> <bucket-name>.s3-website-<AWS-region>.amazonaws.com

You can also put CloudFront in front of S3 to further scale the performance of your site and cache the content closer to your users. CloudFront can enable HTTPS-hosted sites by using either a custom Secure Sockets Layer (SSL) certificate or a managed certificate from AWS Certificate Manager. In addition, CloudFront also offers integration with AWS WAF, a web application firewall. As you can see, it’s possible to achieve some robust functionality by using S3, CloudFront, and other managed services and not have to worry about maintaining underlying infrastructure.

One of the key concerns that you might have when implementing any type of WAF or CDN is that you want to force your users to go through the CDN. If you implement CloudFront in front of S3, you can achieve this by using an OAI. However, in order to do this, you cannot use the HTTP endpoint that is exposed by S3’s static website hosting feature. Instead, CloudFront must use the S3 REST endpoint to fetch content from your origin so that the request can be authenticated using the OAI. This presents some challenges in that the REST endpoint does not support redirection to a default index page.
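Restricting the bucket to the OAI is done with a bucket policy. The console creates this policy for you later in this walkthrough (when you choose Yes, Update Bucket Policy), but here is a sketch of applying an equivalent policy yourself with the AWS CLI; the bucket name and OAI ID are placeholders you would replace with your own values.

aws s3api put-bucket-policy --bucket <s3bucketname> --policy '{
  "Version": "2012-10-17",
  "Statement": [{
    "Effect": "Allow",
    "Principal": {
      "AWS": "arn:aws:iam::cloudfront:user/CloudFront Origin Access Identity <OAI-ID>"
    },
    "Action": "s3:GetObject",
    "Resource": "arn:aws:s3:::<s3bucketname>/*"
  }]
}'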

CloudFront does allow you to specify a default root object (index.html), but it only works on the root of the website (such as http://www.example.com > http://www.example.com/index.html). It does not work on any subdirectory (such as http://www.example.com/about/). If you were to attempt to request this URL through CloudFront, CloudFront would make an S3 GetObject API call against a key that does not exist.

Of course, it is a bad user experience to expect users to always type index.html at the end of every URL (or even know that it should be there). Until now, there has not been an easy way to provide these simpler URLs (equivalent to the DirectoryIndex directive in an Apache web server configuration) to users through CloudFront, at least not if you still want to restrict access to the S3 origin using an OAI. However, with the release of Lambda@Edge, you can use a JavaScript function running on the CloudFront edge nodes to look for these patterns and request the appropriate object key from the S3 origin.

Solution

In this example, you use the compute power at the CloudFront edge to inspect the request as it’s coming in from the client, and then rewrite the request so that CloudFront requests a default index object (index.html in this case) for any request URI that ends in ‘/’.

When a request is made against a web server, the client specifies the object to obtain in the request. You can use this URI and apply a regular expression to it so that these URIs get resolved to a default index object before CloudFront requests the object from the origin. Use the following code:

'use strict';
exports.handler = (event, context, callback) => {
    
    // Extract the request from the CloudFront event that is sent to Lambda@Edge
    var request = event.Records[0].cf.request;

    // Extract the URI from the request
    var olduri = request.uri;

    // Match any '/' that occurs at the end of a URI. Replace it with a default index
    var newuri = olduri.replace(/\/$/, '/index.html');
    
    // Log the URI as received by CloudFront and the new URI to be used to fetch from origin
    console.log("Old URI: " + olduri);
    console.log("New URI: " + newuri);
    
    // Replace the received URI with the URI that includes the index page
    request.uri = newuri;
    
    // Return to CloudFront
    return callback(null, request);

};

To get started, create an S3 bucket to be the origin for CloudFront:

Create bucket

On the other screens, you can just accept the defaults for the purposes of this walkthrough. If this were a production implementation, I would recommend enabling bucket logging and specifying an existing S3 bucket as the destination for access logs. These logs can be useful if you need to troubleshoot issues with your S3 access.
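If you prefer the command line, the same step looks roughly like the following with the AWS CLI; the bucket and logging-bucket names are placeholders, and regions other than us-east-1 also require a location constraint.

# Create the origin bucket (bucket names are globally unique)
aws s3api create-bucket --bucket <s3bucketname> --region us-east-1

# Optional: send S3 access logs to an existing bucket
aws s3api put-bucket-logging --bucket <s3bucketname> --bucket-logging-status '{
  "LoggingEnabled": {"TargetBucket": "<logs-bucket>", "TargetPrefix": "s3-access/"}
}'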

Now, put some content into your S3 bucket. For this walkthrough, create two simple webpages to demonstrate the functionality: a page that resides at the website root, and another that is in a subdirectory.

<s3bucketname>/index.html

<!doctype html>
<html>
    <head>
        <meta charset="utf-8">
        <title>Root home page</title>
    </head>
    <body>
        <p>Hello, this page resides in the root directory.</p>
    </body>
</html>

<s3bucketname>/subdirectory/index.html

<!doctype html>
<html>
    <head>
        <meta charset="utf-8">
        <title>Subdirectory home page</title>
    </head>
    <body>
        <p>Hello, this page resides in the /subdirectory/ directory.</p>
    </body>
</html>
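As an alternative to the console upload described next, you could copy both files with the AWS CLI; the bucket name is again a placeholder.

aws s3 cp index.html s3://<s3bucketname>/index.html
aws s3 cp subdirectory/index.html s3://<s3bucketname>/subdirectory/index.html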

When uploading the files into S3, you can accept the defaults. You add a bucket policy as part of the CloudFront distribution creation that allows CloudFront to access the S3 origin. You should now have an S3 bucket that looks like the following:

Root of bucket

Subdirectory in bucket

Next, create a CloudFront distribution that your users will use to access the content. Open the CloudFront console, and choose Create Distribution. For Select a delivery method for your content, under Web, choose Get Started.

On the next screen, you set up the distribution. Below are the options to configure:

  • Origin Domain Name:  Select the S3 bucket that you created earlier.
  • Restrict Bucket Access: Choose Yes.
  • Origin Access Identity: Create a new identity.
  • Grant Read Permissions on Bucket: Choose Yes, Update Bucket Policy.
  • Object Caching: Choose Customize (I am changing the behavior to avoid having CloudFront cache objects, as this could affect your ability to troubleshoot while implementing the Lambda code).
    • Minimum TTL: 0
    • Maximum TTL: 0
    • Default TTL: 0

You can accept all of the other defaults. Again, this is a proof-of-concept exercise. After you are comfortable that the CloudFront distribution is working properly with the origin and Lambda code, you can re-visit the preceding values and make changes before implementing it in production.

CloudFront distributions can take several minutes to deploy (because the changes have to propagate out to all of the edge locations). After that’s done, test the functionality of the S3-backed static website. Looking at the distribution, you can see that CloudFront assigns a domain name:

CloudFront Distribution Settings
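If you are scripting the deployment, the AWS CLI can block until the distribution finishes propagating and then print the assigned domain name; the distribution ID below is a placeholder.

# Returns when the distribution status changes to Deployed
aws cloudfront wait distribution-deployed --id <distribution-id>

# Look up the domain name that CloudFront assigned
aws cloudfront get-distribution --id <distribution-id> --query 'Distribution.DomainName'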

Try to access the website using a combination of various URLs:

http://<domainname>/:  Works

› curl -v http://d3gt20ea1hllb.cloudfront.net/
*   Trying 54.192.192.214...
* TCP_NODELAY set
* Connected to d3gt20ea1hllb.cloudfront.net (54.192.192.214) port 80 (#0)
> GET / HTTP/1.1
> Host: d3gt20ea1hllb.cloudfront.net
> User-Agent: curl/7.51.0
> Accept: */*
>
< HTTP/1.1 200 OK
< ETag: "cb7e2634fe66c1fd395cf868087dd3b9"
< Accept-Ranges: bytes
< Server: AmazonS3
< X-Cache: Miss from cloudfront
< X-Amz-Cf-Id: -D2FSRwzfcwyKZKFZr6DqYFkIf4t7HdGw2MkUF5sE6YFDxRJgi0R1g==
< Content-Length: 209
< Content-Type: text/html
< Last-Modified: Wed, 19 Jul 2017 19:21:16 GMT
< Via: 1.1 6419ba8f3bd94b651d416054d9416f1e.cloudfront.net (CloudFront), 1.1 iad6-proxy-3.amazon.com:80 (Cisco-WSA/9.1.2-010)
< Connection: keep-alive
<
<!doctype html>
<html>
    <head>
        <meta charset="utf-8">
        <title>Root home page</title>
    </head>
    <body>
        <p>Hello, this page resides in the root directory.</p>
    </body>
</html>
* Curl_http_done: called premature == 0
* Connection #0 to host d3gt20ea1hllb.cloudfront.net left intact

This is because CloudFront is configured to request a default root object (index.html) from the origin.

http://<domainname>/subdirectory/:  Doesn’t work

› curl -v http://d3gt20ea1hllb.cloudfront.net/subdirectory/
*   Trying 54.192.192.214...
* TCP_NODELAY set
* Connected to d3gt20ea1hllb.cloudfront.net (54.192.192.214) port 80 (#0)
> GET /subdirectory/ HTTP/1.1
> Host: d3gt20ea1hllb.cloudfront.net
> User-Agent: curl/7.51.0
> Accept: */*
>
< HTTP/1.1 200 OK
< ETag: "d41d8cd98f00b204e9800998ecf8427e"
< x-amz-server-side-encryption: AES256
< Accept-Ranges: bytes
< Server: AmazonS3
< X-Cache: Miss from cloudfront
< X-Amz-Cf-Id: Iqf0Gy8hJLiW-9tOAdSFPkL7vCWBrgm3-1ly5tBeY_izU82ftipodA==
< Content-Length: 0
< Content-Type: application/x-directory
< Last-Modified: Wed, 19 Jul 2017 19:21:24 GMT
< Via: 1.1 6419ba8f3bd94b651d416054d9416f1e.cloudfront.net (CloudFront), 1.1 iad6-proxy-3.amazon.com:80 (Cisco-WSA/9.1.2-010)
< Connection: keep-alive
<
* Curl_http_done: called premature == 0
* Connection #0 to host d3gt20ea1hllb.cloudfront.net left intact

If you use a tool such as cURL to test this, you notice that CloudFront and S3 are returning a blank response. The reason for this is that the subdirectory does exist, but it does not resolve to an S3 object. Keep in mind that S3 is an object store, so there are no real directories. User interfaces such as the S3 console present a hierarchical view of a bucket with folders based on the presence of forward slashes, but behind the scenes the bucket is just a collection of keys that represent stored objects.

http://<domainname>/subdirectory/index.html:  Works

› curl -v http://d3gt20ea1hllb.cloudfront.net/subdirectory/index.html
*   Trying 54.192.192.130...
* TCP_NODELAY set
* Connected to d3gt20ea1hllb.cloudfront.net (54.192.192.130) port 80 (#0)
> GET /subdirectory/index.html HTTP/1.1
> Host: d3gt20ea1hllb.cloudfront.net
> User-Agent: curl/7.51.0
> Accept: */*
>
< HTTP/1.1 200 OK
< Date: Thu, 20 Jul 2017 20:35:15 GMT
< ETag: "ddf87c487acf7cef9d50418f0f8f8dae"
< Accept-Ranges: bytes
< Server: AmazonS3
< X-Cache: RefreshHit from cloudfront
< X-Amz-Cf-Id: bkh6opXdpw8pUomqG3Qr3UcjnZL8axxOH82Lh0OOcx48uJKc_Dc3Cg==
< Content-Length: 227
< Content-Type: text/html
< Last-Modified: Wed, 19 Jul 2017 19:21:45 GMT
< Via: 1.1 3f2788d309d30f41de96da6f931d4ede.cloudfront.net (CloudFront), 1.1 iad6-proxy-3.amazon.com:80 (Cisco-WSA/9.1.2-010)
< Connection: keep-alive
<
<!doctype html>
<html>
    <head>
        <meta charset="utf-8">
        <title>Subdirectory home page</title>
    </head>
    <body>
        <p>Hello, this page resides in the /subdirectory/ directory.</p>
    </body>
</html>
* Curl_http_done: called premature == 0
* Connection #0 to host d3gt20ea1hllb.cloudfront.net left intact

This request works as expected because you are referencing the object directly. Now, you implement the Lambda@Edge function to return the default index.html page for any subdirectory. Looking at the example JavaScript code, here’s where the magic happens:

var newuri = olduri.replace(/\/$/, '/index.html');

You are going to use a JavaScript regular expression to match any ‘/’ that occurs at the end of the URI and replace it with ‘/index.html’. This is equivalent to what S3 does on its own with static website hosting. However, as I mentioned earlier, you can’t rely on this if you want to use a bucket policy to require that users access the bucket through CloudFront. That way, all requests to the S3 bucket must be authenticated using the S3 REST API. Because of this, you implement a Lambda@Edge function that takes any client request ending in ‘/’ and appends a default ‘index.html’ to the URI before CloudFront requests the object from the origin.

In the Lambda console, choose Create function. On the next screen, skip the blueprint selection and choose Author from scratch, as you’ll use the sample code provided.

Next, configure the trigger. Choosing the empty box shows a list of available triggers. Choose CloudFront and select your CloudFront distribution ID (created earlier). For this example, leave Cache Behavior as * and CloudFront Event as Origin Request. Select the Enable trigger and replicate box and choose Next.

Lambda Trigger

Next, give the function a name and a description. Then, copy and paste the following code:

'use strict';
exports.handler = (event, context, callback) => {
    
    // Extract the request from the CloudFront event that is sent to Lambda@Edge
    var request = event.Records[0].cf.request;

    // Extract the URI from the request
    var olduri = request.uri;

    // Match any '/' that occurs at the end of a URI. Replace it with a default index
    var newuri = olduri.replace(/\/$/, '/index.html');
    
    // Log the URI as received by CloudFront and the new URI to be used to fetch from origin
    console.log("Old URI: " + olduri);
    console.log("New URI: " + newuri);
    
    // Replace the received URI with the URI that includes the index page
    request.uri = newuri;
    
    // Return to CloudFront
    return callback(null, request);

};
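Before relying on the trigger, you can sanity-check the rewrite logic by invoking the function directly with a minimal CloudFront origin-request event. This is only a sketch: the function name is a placeholder, the payload carries just the one field the code reads, and the --cli-binary-format flag is needed only on newer versions of the AWS CLI.

aws lambda invoke --function-name <your-function-name> \
  --cli-binary-format raw-in-base64-out \
  --payload '{"Records":[{"cf":{"request":{"uri":"/subdirectory/"}}}]}' out.json

# The returned request should contain "uri": "/subdirectory/index.html"
cat out.json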

Next, define a role that grants permissions to the Lambda function. For this example, choose Create new role from template, Basic Edge Lambda permissions. This creates a new IAM role for the Lambda function and grants the following permissions:

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "logs:CreateLogGroup",
                "logs:CreateLogStream",
                "logs:PutLogEvents"
            ],
            "Resource": [
                "arn:aws:logs:*:*:*"
            ]
        }
    ]
}

In a nutshell, these are the permissions that the function needs to create the necessary CloudWatch log group and log stream, and to put the log events so that the function is able to write logs when it executes.

After the function has been created, you can go back to the browser (or cURL) and re-run the test for the subdirectory request that failed previously:

› curl -v http://d3gt20ea1hllb.cloudfront.net/subdirectory/
*   Trying 54.192.192.202...
* TCP_NODELAY set
* Connected to d3gt20ea1hllb.cloudfront.net (54.192.192.202) port 80 (#0)
> GET /subdirectory/ HTTP/1.1
> Host: d3gt20ea1hllb.cloudfront.net
> User-Agent: curl/7.51.0
> Accept: */*
>
< HTTP/1.1 200 OK
< Date: Thu, 20 Jul 2017 21:18:44 GMT
< ETag: "ddf87c487acf7cef9d50418f0f8f8dae"
< Accept-Ranges: bytes
< Server: AmazonS3
< X-Cache: Miss from cloudfront
< X-Amz-Cf-Id: rwFN7yHE70bT9xckBpceTsAPcmaadqWB9omPBv2P6WkIfQqdjTk_4w==
< Content-Length: 227
< Content-Type: text/html
< Last-Modified: Wed, 19 Jul 2017 19:21:45 GMT
< Via: 1.1 3572de112011f1b625bb77410b0c5cca.cloudfront.net (CloudFront), 1.1 iad6-proxy-3.amazon.com:80 (Cisco-WSA/9.1.2-010)
< Connection: keep-alive
<
<!doctype html>
<html>
    <head>
        <meta charset="utf-8">
        <title>Subdirectory home page</title>
    </head>
    <body>
        <p>Hello, this page resides in the /subdirectory/ directory.</p>
    </body>
</html>
* Curl_http_done: called premature == 0
* Connection #0 to host d3gt20ea1hllb.cloudfront.net left intact

You have now configured a way for CloudFront to return a default index page for subdirectories in S3!

Summary

In this post, you used Lambda@Edge to use CloudFront with an S3 origin access identity and still serve a default index object on subdirectory URLs. To find out more about this use case, see Lambda@Edge integration with CloudFront in our documentation.

If you have questions or suggestions, feel free to comment below. For troubleshooting or implementation help, check out the Lambda forum.

Application Load Balancers Now Support Multiple TLS Certificates With Smart Selection Using SNI

Post Syndicated from Randall Hunt original https://aws.amazon.com/blogs/aws/new-application-load-balancer-sni/

Today we’re launching support for multiple TLS/SSL certificates on Application Load Balancers (ALB) using Server Name Indication (SNI). You can now host multiple TLS secured applications, each with its own TLS certificate, behind a single load balancer. In order to use SNI, all you need to do is bind multiple certificates to the same secure listener on your load balancer. ALB will automatically choose the optimal TLS certificate for each client. These new features are provided at no additional charge.

If you’re looking for a TL;DR on how to use this new feature just click here. If you’re like me and you’re a little rusty on the specifics of Transport Layer Security (TLS) then keep reading.

TLS? SSL? SNI?

People tend to use the terms SSL and TLS interchangeably even though the two are technically different. SSL technically refers to a predecessor of the TLS protocol. To keep things simple I’ll be using the term TLS for the rest of this post.

TLS is a protocol for securely transmitting data like passwords, cookies, and credit card numbers. It enables privacy, authentication, and integrity of the data being transmitted. TLS uses certificate-based authentication where certificates are like ID cards for your websites. You trust the person that signed and issued the certificate, the certificate authority (CA), so you trust that the data in the certificate is correct. When a browser connects to your TLS-enabled ALB, ALB presents a certificate that contains your site’s public key, which has been cryptographically signed by a CA. This way the client can be sure it’s getting the ‘real you’ and that it’s safe to use your site’s public key to establish a secure connection.

With SNI support we’re making it easy to use more than one certificate with the same ALB. The most common reason you might want to use multiple certificates is to handle different domains with the same load balancer. It’s always been possible to use wildcard and subject alternative name (SAN) certificates with ALB, but these come with limitations. Wildcard certificates only work for related subdomains that match a simple pattern, and while SAN certificates can support many different domains, the same certificate authority has to authenticate each one. That means you have to reauthenticate and reprovision your certificate every time you add a new domain.

One of our most frequent requests on forums, reddit, and in my e-mail inbox has been to use the Server Name Indication (SNI) extension of TLS to choose a certificate for a client. Since TLS operates at the transport layer, below HTTP, it doesn’t see the hostname requested by a client. SNI works by having the client tell the server “This is the domain I expect to get a certificate for” when it first connects. The server can then choose the correct certificate to respond to the client. All modern web browsers and a large majority of other clients support SNI. In fact, today we see SNI supported by over 99.5% of clients connecting to CloudFront.
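You can watch SNI at work from the command line with openssl s_client, which puts the hostname given by -servername into the ClientHello; the domain below is a placeholder.

# Send an SNI hostname and print the subject of the certificate the server returns
openssl s_client -connect www.example.com:443 -servername www.example.com </dev/null 2>/dev/null |
  openssl x509 -noout -subject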

Smart Certificate Selection on ALB

ALB’s smart certificate selection goes beyond SNI. In addition to containing a list of valid domain names, certificates also describe the type of key exchange and cryptography that the server supports, as well as the signature algorithm (SHA2, SHA1, MD5) used to sign the certificate. To establish a TLS connection, a client starts a TLS handshake by sending a “ClientHello” message that outlines the capabilities of the client: the protocol versions, extensions, cipher suites, and compression methods. Based on what an individual client supports, ALB’s smart selection algorithm chooses a certificate for the connection and sends it to the client. ALB supports both the classic RSA algorithm and the newer, hipper, and faster Elliptic-curve based ECDSA algorithm. ECDSA support among clients isn’t as prevalent as SNI, but it is supported by all modern web browsers. Since it’s faster and requires less CPU, it can be particularly useful for ultra-low latency applications and for conserving the amount of battery used by mobile applications. Since ALB can see what each client supports from the TLS handshake, you can upload both RSA and ECDSA certificates for the same domains and ALB will automatically choose the best one for each client.
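If you have uploaded both an RSA and an ECDSA certificate for the same domain, a rough way to see the selection in action is to constrain the cipher suites your client offers; the domain here is again a placeholder.

# Offer only an ECDSA suite; an ECDSA-capable listener should present the ECDSA certificate
openssl s_client -connect www.example.com:443 -servername www.example.com \
  -cipher ECDHE-ECDSA-AES128-GCM-SHA256 </dev/null 2>/dev/null | openssl x509 -noout -subject

# Offer only an RSA suite; the listener should fall back to the RSA certificate
openssl s_client -connect www.example.com:443 -servername www.example.com \
  -cipher ECDHE-RSA-AES128-GCM-SHA256 </dev/null 2>/dev/null | openssl x509 -noout -subject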

Using SNI with ALB

I’ll use a few example websites like VimIsBetterThanEmacs.com and VimIsTheBest.com. I’ve purchased and hosted these domains on Amazon Route 53, and provisioned two separate certificates for them in AWS Certificate Manager (ACM). If I want to securely serve both of these sites through a single ALB, I can quickly add both certificates in the console.

First, I’ll select my load balancer in the console, go to the listeners tab, and select “view/edit certificates”.

Next, I’ll use the “+” button in the top left corner to select some certificates then I’ll click the “Add” button.

There are no more steps. If you’re not really a GUI kind of person you’ll be pleased to know that it’s also simple to add new certificates via the AWS Command Line Interface (CLI) (or SDKs).

aws elbv2 add-listener-certificates --listener-arn <listener-arn> --certificates CertificateArn=<cert-arn>

Things to know

  • ALB Access Logs now include the client’s requested hostname and the certificate ARN used. If the “hostname” field is empty (represented by a “-”) the client did not use the SNI extension in their request.
  • You can use any of your certificates in ACM or IAM.
  • You can bind multiple certificates for the same domain(s) to a secure listener. Your ALB will choose the optimal certificate based on multiple factors including the capabilities of the client.
  • If the client does not support SNI your ALB will use the default certificate (the one you specified when you created the listener).
  • There are three new ELB API calls: AddListenerCertificates, RemoveListenerCertificates, and DescribeListenerCertificates (see the CLI sketch after this list).
  • You can bind up to 25 certificates per load balancer (not counting the default certificate).
  • These new features are supported by AWS CloudFormation at launch.
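Here is a quick sketch of the other two calls from the AWS CLI, with placeholder ARNs:

# List the certificates bound to a listener
aws elbv2 describe-listener-certificates --listener-arn <listener-arn>

# Remove a certificate from a listener
aws elbv2 remove-listener-certificates --listener-arn <listener-arn> \
  --certificates CertificateArn=<cert-arn>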

You can see an example of these new features in action with a set of websites created by my colleague Jon Zobrist: https://www.exampleloadbalancer.com/.

Overall, I will personally use this feature and I’m sure a ton of AWS users will benefit from it as well. I want to thank the Elastic Load Balancing team for all their hard work in getting this into the hands of our users.

Randall

How to Configure an LDAPS Endpoint for Simple AD

Post Syndicated from Cameron Worrell original https://aws.amazon.com/blogs/security/how-to-configure-an-ldaps-endpoint-for-simple-ad/

Simple AD, which is powered by Samba 4, supports basic Active Directory (AD) authentication features such as users, groups, and the ability to join domains. Simple AD also includes an integrated Lightweight Directory Access Protocol (LDAP) server. LDAP is a standard application protocol for the access and management of directory information. You can use the BIND operation from Simple AD to authenticate LDAP client sessions. This makes LDAP a common choice for centralized authentication and authorization for services such as Secure Shell (SSH), client-based virtual private networks (VPNs), and many other applications. Authentication, the process of confirming the identity of a principal, typically involves the transmission of highly sensitive information such as user names and passwords. To protect this information in transit over untrusted networks, companies often require encryption as part of their information security strategy.

In this blog post, we show you how to configure an LDAPS (LDAP over SSL/TLS) encrypted endpoint for Simple AD so that you can extend Simple AD over untrusted networks. Our solution uses Elastic Load Balancing (ELB) to send decrypted LDAP traffic to HAProxy running on Amazon EC2, which then sends the traffic to Simple AD. ELB offers integrated certificate management, SSL/TLS termination, and the ability to use a scalable EC2 backend to process decrypted traffic. ELB also tightly integrates with Amazon Route 53, enabling you to use a custom domain for the LDAPS endpoint. The solution needs the intermediate HAProxy layer because ELB can direct traffic only to EC2 instances. To simplify testing and deployment, we have provided an AWS CloudFormation template to provision the ELB and HAProxy layers.

This post assumes that you have an understanding of concepts such as Amazon Virtual Private Cloud (VPC) and its components, including subnets, routing, Internet and network address translation (NAT) gateways, DNS, and security groups. You should also be familiar with launching EC2 instances and logging in to them with SSH. If needed, you should familiarize yourself with these concepts and review the solution overview and prerequisites in the next section before proceeding with the deployment.

Note: This solution is intended for use by clients requiring an LDAPS endpoint only. If your requirements extend beyond this, you should consider accessing the Simple AD servers directly or by using AWS Directory Service for Microsoft AD.

Solution overview

The following diagram and description illustrate and explain the Simple AD LDAPS environment. The CloudFormation template creates the items designated by the bracket (internal ELB load balancer and two HAProxy nodes configured in an Auto Scaling group).

Diagram of the Simple AD LDAPS environment

Here is how the solution works, as shown in the preceding numbered diagram:

  1. The LDAP client sends an LDAPS request to ELB on TCP port 636.
  2. ELB terminates the SSL/TLS session and decrypts the traffic using a certificate. ELB sends the decrypted LDAP traffic to the EC2 instances running HAProxy on TCP port 389.
  3. The HAProxy servers forward the LDAP request to the Simple AD servers listening on TCP port 389 in a fixed Auto Scaling group configuration.
  4. The Simple AD servers send an LDAP response through the HAProxy layer to ELB. ELB encrypts the response and sends it to the client.

Note: Amazon VPC prevents a third party from intercepting traffic within the VPC. Because of this, the VPC protects the decrypted traffic between ELB and HAProxy and between HAProxy and Simple AD. The ELB encryption provides an additional layer of security for client connections and protects traffic coming from hosts outside the VPC.

Prerequisites

  1. Our approach requires an Amazon VPC with two public and two private subnets. The previous diagram illustrates the environment’s VPC requirements. If you do not yet have these components in place, follow these guidelines for setting up a sample environment:
    1. Identify a region that supports Simple AD, ELB, and NAT gateways. The NAT gateways are used with an Internet gateway to allow the HAProxy instances to access the internet to perform their required configuration. You also need to identify the two Availability Zones in that region for use by Simple AD. You will supply these Availability Zones as parameters to the CloudFormation template later in this process.
    2. Create or choose an Amazon VPC in the region you chose. In order to use Route 53 to resolve the LDAPS endpoint, make sure you enable DNS support within your VPC. Create an Internet gateway and attach it to the VPC, which will be used by the NAT gateways to access the internet.
    3. Create a route table with a default route to the Internet gateway. Create two NAT gateways, one per Availability Zone in your public subnets to provide additional resiliency across the Availability Zones. Together, the routing table, the NAT gateways, and the Internet gateway enable the HAProxy instances to access the internet.
    4. Create two private routing tables, one per Availability Zone. Create two private subnets, one per Availability Zone. The dual routing tables and subnets allow for a higher level of redundancy. Add each subnet to the routing table in the same Availability Zone. Add a default route in each routing table to the NAT gateway in the same Availability Zone. The Simple AD servers use subnets that you create.
    5. The LDAP service requires a DNS domain that resolves within your VPC and from your LDAP clients. If you do not have an existing DNS domain, follow the steps to create a private hosted zone and associate it with your VPC. To avoid encryption protocol errors, you must ensure that the DNS domain name is consistent across your Route 53 zone and in the SSL/TLS certificate (see Step 2 in the “Solution deployment” section).
  2. Make sure you have completed the Simple AD Prerequisites.
  3. We will use a self-signed certificate for ELB to perform SSL/TLS decryption. You can use a certificate issued by your preferred certificate authority or a certificate issued by AWS Certificate Manager (ACM).
    Note: To prevent unauthorized connections directly to your Simple AD servers, you can modify the Simple AD security group on port 389 to block traffic from locations outside of the Simple AD VPC. You can find the security group in the EC2 console by creating a search filter for your Simple AD directory ID. It is also important to allow the Simple AD servers to communicate with each other as shown on Simple AD Prerequisites.

Solution deployment

This solution includes five main parts:

  1. Create a Simple AD directory.
  2. Create a certificate.
  3. Create the ELB and HAProxy layers by using the supplied CloudFormation template.
  4. Create a Route 53 record.
  5. Test LDAPS access using an Amazon Linux client.

1. Create a Simple AD directory

With the prerequisites completed, you will create a Simple AD directory in your private VPC subnets:

  1. In the Directory Service console navigation pane, choose Directories and then choose Set up directory.
  2. Choose Simple AD.
    Screenshot of choosing "Simple AD"
  3. Provide the following information:
    • Directory DNS – The fully qualified domain name (FQDN) of the directory, such as corp.example.com. You will use the FQDN as part of the testing procedure.
    • NetBIOS name – The short name for the directory, such as CORP.
    • Administrator password – The password for the directory administrator. The directory creation process creates an administrator account with the user name Administrator and this password. Do not lose this password because it is nonrecoverable. You also need this password for testing LDAPS access in a later step.
    • Description – An optional description for the directory.
    • Directory Size – The size of the directory.
      Screenshot of the directory details to provide
  4. Provide the following information in the VPC Details section, and then choose Next Step:
    • VPC – Specify the VPC in which to install the directory.
    • Subnets – Choose two private subnets for the directory servers. The two subnets must be in different Availability Zones. Make a note of the VPC and subnet IDs for use as CloudFormation input parameters. In the following example, the Availability Zones are us-east-1a and us-east-1c.
      Screenshot of the VPC details to provide
  5. Review the directory information and make any necessary changes. When the information is correct, choose Create Simple AD.

It takes several minutes to create the directory. From the AWS Directory Service console, refresh the screen periodically and wait until the directory Status value changes to Active before continuing. Choose your Simple AD directory and note the two IP addresses in the DNS address section. You will enter them when you run the CloudFormation template later.

Note: Full administration of your Simple AD implementation is out of scope for this blog post. See the documentation to add users, groups, or instances to your directory. Also see the previous blog post, How to Manage Identities in Simple AD Directories.

2. Create a certificate

In the previous step, you created the Simple AD directory. Next, you will generate a self-signed SSL/TLS certificate using OpenSSL. You will use the certificate with ELB to secure the LDAPS endpoint. OpenSSL is a standard, open source library that supports a wide range of cryptographic functions, including the creation and signing of x509 certificates. You then import the certificate into ACM, which is integrated with ELB.

  1. You must have a system with OpenSSL installed to complete this step. If you do not have OpenSSL, you can install it on Amazon Linux by running the command, sudo yum install openssl. If you do not have access to an Amazon Linux instance you can create one with SSH access enabled to proceed with this step. Run the command, openssl version, at the command line to see if you already have OpenSSL installed.
    [ec2-user@host ~]$ openssl version
    OpenSSL 1.0.1k-fips 8 Jan 2015

  2. Create a private key using the openssl genrsa command.
    [ec2-user@host tmp]$ openssl genrsa 2048 > privatekey.pem
    Generating RSA private key, 2048 bit long modulus
    ......................................................................................................................................................................+++
    ..........................+++
    e is 65537 (0x10001)

  3. Generate a certificate signing request (CSR) using the openssl req command. Provide the requested information for each field. The Common Name is the FQDN for your LDAPS endpoint (for example, ldap.corp.example.com). The Common Name must use the domain name you will later register in Route 53. You will encounter certificate errors if the names do not match.
    [ec2-user@host tmp]$ openssl req -new -key privatekey.pem -out server.csr
    You are about to be asked to enter information that will be incorporated into your certificate request.

  4. Use the openssl x509 command to sign the certificate. The following example uses the private key from the previous step (privatekey.pem) and the signing request (server.csr) to create a public certificate named server.crt that is valid for 365 days. This certificate must be updated within 365 days to avoid disruption of LDAPS functionality.
    [ec2-user@host tmp]$ openssl x509 -req -sha256 -days 365 -in server.csr -signkey privatekey.pem -out server.crt
    Signature ok
    subject=/C=XX/L=Default City/O=Default Company Ltd/CN=ldap.corp.example.com
    Getting Private key

  5. You should see three files: privatekey.pem, server.crt, and server.csr.
    [ec2-user@host tmp]$ ls
    privatekey.pem server.crt server.csr

    Restrict access to the private key.

    [ec2-user@host tmp]$ chmod 600 privatekey.pem

    Keep the private key and public certificate for later use. You can discard the signing request because you are using a self-signed certificate and not using a Certificate Authority. Always store the private key in a secure location and avoid adding it to your source code.

  6. In the ACM console, choose Import a certificate.
  7. Using your favorite Linux text editor, paste the contents of your server.crt file in the Certificate body box.
  8. Using your favorite Linux text editor, paste the contents of your privatekey.pem file in the Certificate private key box. For a self-signed certificate, you can leave the Certificate chain box blank.
  9. Choose Review and import. Confirm the information and choose Import.
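If you would rather import the certificate from the command line, the equivalent ACM call looks like the following when run from the directory containing your files. The command prints the certificate ARN, which you will need as a CloudFormation input parameter in the next section.

aws acm import-certificate \
  --certificate fileb://server.crt \
  --private-key fileb://privatekey.pem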

3. Create the ELB and HAProxy layers by using the supplied CloudFormation template

Now that you have created your Simple AD directory and SSL/TLS certificate, you are ready to use the CloudFormation template to create the ELB and HAProxy layers.

  1. Load the supplied CloudFormation template to deploy an internal ELB and two HAProxy EC2 instances into a fixed Auto Scaling group. After you load the template, provide the following input parameters. Note: You can find the parameters relating to your Simple AD from the directory details page by choosing your Simple AD in the Directory Service console.
    • HAProxyInstanceSize – The EC2 instance size for the HAProxy servers. The default size is t2.micro and can scale up for large Simple AD environments.
    • MyKeyPair – The SSH key pair for the EC2 instances. If you do not have an existing key pair, you must create one.
    • VPCId – The target VPC for this solution. This must be the VPC where you deployed Simple AD; the value is available on your Simple AD directory details page.
    • SubnetId1 – The Simple AD primary subnet. This information is available on your Simple AD directory details page.
    • SubnetId2 – The Simple AD secondary subnet. This information is available on your Simple AD directory details page.
    • MyTrustedNetwork – The trusted network Classless Inter-Domain Routing (CIDR) range allowed to connect to the LDAPS endpoint. For example, use the VPC CIDR to allow clients in the VPC to connect.
    • SimpleADPriIP – The primary Simple AD server IP address. This information is available on your Simple AD directory details page.
    • SimpleADSecIP – The secondary Simple AD server IP address. This information is available on your Simple AD directory details page.
    • LDAPSCertificateARN – The Amazon Resource Name (ARN) for the SSL/TLS certificate. This information is available in the ACM console.
  2. Enter the input parameters and choose Next.
  3. On the Options page, accept the defaults and choose Next.
  4. On the Review page, confirm the details and choose Create. The stack will be created in approximately 5 minutes.

4. Create a Route 53 record

The next step is to create a Route 53 record in your private hosted zone so that clients can resolve your LDAPS endpoint.

  1. If you do not have an existing DNS domain for use with LDAP, create a private hosted zone and associate it with your VPC. The hosted zone name should be consistent with your Simple AD (for example, corp.example.com).
  2. When the CloudFormation stack is in CREATE_COMPLETE status, locate the value of the LDAPSURL on the Outputs tab of the stack. Copy this value for use in the next step.
  3. On the Route 53 console, choose Hosted Zones and then choose the zone you used for the Common Name box for your self-signed certificate. Choose Create Record Set and enter the following information:
    1. Name – The label of the record (such as ldap).
    2. Type – Leave as A – IPv4 address.
    3. Alias – Choose Yes.
    4. Alias Target – Paste the value of the LDAPSURL on the Outputs tab of the stack.
  4. Leave the defaults for Routing Policy and Evaluate Target Health, and choose Create.
    Screenshot of finishing the creation of the Route 53 record
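The same alias record can be created from the command line. This is a sketch only: the private hosted zone ID, the load balancer’s canonical hosted zone ID, and the LDAPSURL value from the stack outputs are placeholders you must supply.

cat > ldaps-alias.json <<'EOF'
{
  "Changes": [{
    "Action": "CREATE",
    "ResourceRecordSet": {
      "Name": "ldap.corp.example.com",
      "Type": "A",
      "AliasTarget": {
        "HostedZoneId": "<elb-canonical-hosted-zone-id>",
        "DNSName": "<LDAPSURL-from-stack-outputs>",
        "EvaluateTargetHealth": false
      }
    }
  }]
}
EOF

aws route53 change-resource-record-sets \
  --hosted-zone-id <private-hosted-zone-id> \
  --change-batch file://ldaps-alias.json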

5. Test LDAPS access using an Amazon Linux client

At this point, you have configured your LDAPS endpoint and now you can test it from an Amazon Linux client.

  1. Create an Amazon Linux instance with SSH access enabled to test the solution. Launch the instance into one of the public subnets in your VPC. Make sure the IP assigned to the instance is in the trusted IP range you specified in the CloudFormation parameter MyTrustedNetwork in Step 3.b.
  2. SSH into the instance and complete the following steps to verify access.
    1. Install the openldap-clients package and any required dependencies:
      sudo yum install -y openldap-clients
    2. Add the server.crt file to the /etc/openldap/certs/ directory so that the LDAPS client will trust your SSL/TLS certificate. You can copy the file using Secure Copy (SCP) or create it using a text editor.
    3. Edit the /etc/openldap/ldap.conf file and define the environment variables BASE, URI, and TLS_CACERT.
      • The value for BASE should match the configuration of the Simple AD directory name.
      • The value for URI should match your DNS alias.
      • The value for TLS_CACERT is the path to your public certificate.

Here is an example of the contents of the file.

BASE dc=corp,dc=example,dc=com
URI ldaps://ldap.corp.example.com
TLS_CACERT /etc/openldap/certs/server.crt

To test the solution, query the directory through the LDAPS endpoint, as shown in the following command. Replace corp.example.com with your domain name and use the Administrator password that you configured with the Simple AD directory.

$ ldapsearch -x -D "administrator@corp.example.com" -W sAMAccountName=Administrator

You should see a response similar to the following response, which provides the directory information in LDAP Data Interchange Format (LDIF) for the administrator distinguished name (DN) from your Simple AD LDAP server.

# extended LDIF
#
# LDAPv3
# base <dc=corp,dc=example,dc=com> (default) with scope subtree
# filter: sAMAccountName=Administrator
# requesting: ALL
#

# Administrator, Users, corp.example.com
dn: CN=Administrator,CN=Users,DC=corp,DC=example,DC=com
objectClass: top
objectClass: person
objectClass: organizationalPerson
objectClass: user
description: Built-in account for administering the computer/domain
instanceType: 4
whenCreated: 20170721123204.0Z
uSNCreated: 3223
name: Administrator
objectGUID:: l3h0HIiKO0a/ShL4yVK/vw==
userAccountControl: 512
…

You can now use the LDAPS endpoint for directory operations and authentication within your environment. If you would like to learn more about how to interact with your LDAPS endpoint within a Linux environment, the openldap-clients tools are a good place to start (for example, the ldapsearch and ldapmodify man pages).

Troubleshooting

If you receive an error such as the following error when issuing the ldapsearch command, there are a few things you can do to help identify issues.

ldap_sasl_bind(SIMPLE): Can't contact LDAP server (-1)
  • You might be able to obtain additional error details by adding the -d1 debug flag to the ldapsearch command in the previous section.
    $ ldapsearch -x -D "administrator@corp.example.com" -W sAMAccountName=Administrator -d1

  • Verify that the parameters in ldap.conf match your configured LDAPS URI endpoint and that all parameters can be resolved by DNS. You can use the following dig command, substituting your configured endpoint DNS name.
    $ dig ldap.corp.example.com

  • Confirm that the client instance from which you are connecting is in the CIDR range of the CloudFormation parameter, MyTrustedNetwork.
  • Confirm that the path to your public SSL/TLS certificate configured in ldap.conf as TLS_CACERT is correct. You configured this in Step 5.b.3. You can check your SSL/TLS connection with the following command, substituting your configured endpoint DNS name for the string after -connect.
    $ echo -n | openssl s_client -connect ldap.corp.example.com:636

  • Verify that your HAProxy instances have the status InService in the EC2 console: Choose Load Balancers under Load Balancing in the navigation pane, highlight your LDAPS load balancer, and then choose the Instances tab.

Conclusion

You can use ELB and HAProxy to provide an LDAPS endpoint for Simple AD and transport sensitive authentication information over untrusted networks. You can explore using LDAPS to authenticate SSH users or integrate with other software solutions that support LDAP authentication. This solution’s CloudFormation template is available on GitHub.

If you have comments about this post, submit them in the “Comments” section below. If you have questions about or issues implementing this solution, start a new thread on the Directory Service forum.

– Cameron and Jeff

How to Control TLS Ciphers in Your AWS Elastic Beanstalk Application by Using AWS CloudFormation

Post Syndicated from Paco Hope original https://aws.amazon.com/blogs/security/how-to-control-tls-ciphers-in-your-aws-elastic-beanstalk-application-by-using-aws-cloudformation/

Securing data in transit is critical to the integrity of transactions on the Internet. Whether you log in to an account with your user name and password or give your credit card details to a retailer, you want your data protected as it travels across the Internet from place to place. One of the protocols in widespread use to protect data in transit is Transport Layer Security (TLS). Every time you access a URL that begins with “https” instead of just “http”, you are using a TLS-secured connection to a website.

To demonstrate that your application has a strong TLS configuration, you can use services like the one provided by SSL Labs. There are also open source, command-line-oriented TLS testing programs such as testssl.sh (which I do not cover in this post) and sslscan (which I cover later in this post). The goal of testing your TLS configuration is to provide evidence that weak cryptographic ciphers are disabled in your TLS configuration and only strong ciphers are enabled. In this blog post, I show you how to control the TLS security options for your secure load balancer in AWS CloudFormation, pass the TLS certificate and host name for your secure AWS Elastic Beanstalk application to the CloudFormation script as parameters, and then confirm that only strong TLS ciphers are enabled on the launched application by testing it with SSLLabs.

Background

In some situations, it’s not enough to simply turn on TLS with its default settings and call it done. Over the years, a number of vulnerabilities have been discovered in the TLS protocol itself with codenames such as CRIME, POODLE, and Logjam. Though some vulnerabilities were in specific implementations, such as OpenSSL, others were vulnerabilities in the Secure Sockets Layer (SSL) or TLS protocol itself.

The only way to avoid some TLS vulnerabilities is to ensure your web server uses only the latest version of TLS. Some organizations want to limit their TLS configuration to the highest possible security levels to satisfy company policies, regulatory requirements, or other information security requirements. In practice, such limitations usually mean using TLS version 1.2 (at the time of this writing, TLS 1.3 is in the works) and using only strong cryptographic ciphers. Note that forcing a high-security TLS connection in this manner limits which types of devices can connect to your web server. I address this point at the end of this post.

The default TLS configuration in most web servers is compatible with the broadest set of clients (such as web browsers, mobile devices, and point-of-sale systems). As a result, older ciphers and protocol versions are usually enabled. This is true for the Elastic Load Balancing load balancer that is created in your Elastic Beanstalk application as well as for web server software such as Apache and nginx.  For example, TLS versions 1.0 and 1.1 are enabled in addition to 1.2. The RC4 cipher is permitted, even though that cipher is too weak for the most demanding security requirements. If your application needs to prioritize the security of connections over compatibility with legacy devices, you must adjust the TLS encryption settings on your application. The solution in this post helps you make those adjustments.

Prerequisites for the solution

Before you implement this solution, you must have a few prerequisites in place:

  1. You must have a hosted zone in Amazon Route 53 where the name of the secure application will be created. I use example.com as my domain name in this post and assume that I host example.com publicly in Route 53. To learn more about creating and hosting a zone publicly in Route 53, see Working with Public Hosted Zones.
  2. You must choose a name to be associated with the secure app. In this case, I use secure.example.com as the DNS name to be associated with the secure app. This means that I’m trying to create an Elastic Beanstalk application whose URL will be https://secure.example.com/.
  3. You must have a TLS certificate hosted in AWS Certificate Manager (ACM). This certificate must be issued with the name you chose in Step 2. If you are new to ACM, see Getting Started. If you are already familiar with ACM, request a certificate and get its Amazon Resource Name (ARN). Look up the ARN for the certificate that you created by opening the ACM console. The ARN looks something like: arn:aws:acm:eu-west-1:111122223333:certificate/12345678-abcd-1234-abcd-1234abcd1234.

Implementing the solution

You can use two approaches to control the TLS ciphers used by your load balancer: one is to use a predefined protocol policy from AWS, and the other is to write your own protocol policy that lists exactly which ciphers should be enabled. There are many ciphers and options that can be set, so the appropriate AWS predefined policy is often the simplest policy to use. If you have to comply with an information security policy that requires enabling or disabling specific ciphers, you will probably find it easiest to write a custom policy listing only the ciphers that are acceptable to your requirements.

AWS released two predefined TLS policies on March 10, 2017: ELBSecurityPolicy-TLS-1-1-2017-01 and ELBSecurityPolicy-TLS-1-2-2017-01. These policies restrict TLS negotiations to TLS 1.1 and 1.2, respectively. You can find a good comparison of the ciphers that these policies enable and disable in the HTTPS listener documentation for Elastic Load Balancing. If your requirements are simply “support TLS 1.1 and later” or “support TLS 1.2 and later,” those AWS predefined cipher policies are the best place to start. If you need to control your cipher choice with a custom policy, I show you in this post which lines of the CloudFormation template to change.
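To see exactly which protocols and ciphers a predefined policy enables, you can ask the Classic Load Balancer API to describe its sample policies; here is a sketch with the AWS CLI.

aws elb describe-load-balancer-policies \
  --policy-names ELBSecurityPolicy-TLS-1-2-2017-01 \
  --query 'PolicyDescriptions[0].PolicyAttributeDescriptions'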

Download the predefined policy CloudFormation template

Many AWS customers rely on CloudFormation to launch their AWS resources, including their Elastic Beanstalk applications. To change the ciphers and protocol versions supported on your load balancer, you must put those options in a CloudFormation template. You can store your site’s TLS certificate in ACM and create the corresponding DNS alias record in the correct zone in Route 53.

To start, download the CloudFormation template that I have provided for this blog post, or deploy the template directly in your environment. This template creates a CloudFormation stack in your default VPC that contains two resources: an Elastic Beanstalk application that deploys a standard sample PHP application, and a Route 53 record in a hosted zone. This CloudFormation template selects the AWS predefined policy called ELBSecurityPolicy-TLS-1-2-2017-01 and deploys it.

Launching the sample application from the CloudFormation console

In the CloudFormation console, choose Create Stack. You can either upload the template through your browser, or load the template into an Amazon S3 bucket and type the S3 URL in the Specify an Amazon S3 template URL box.

After you click Next, you will see that there are three parameters defined: CertificateARN, ELBHostName, and HostedDomainName. Set the CertificateARN parameter to the ARN of the certificate you want to use for your application. Set the ELBHostName parameter to the hostname part of the URL. For example, if your URL were https://secure.example.com/, the HostedDomainName parameter would be example.com and the ELBHostName parameter would be secure.

For the sample application, choose Next and then choose Create, and the CloudFormation stack will be created. For your own applications, you might need to set other options such as a database, VPC options, or Amazon SNS notifications. For more details, see AWS Elastic Beanstalk Environment Configuration. To deploy an application other than our sample PHP application, create your own application source bundle.

Launching the sample application from the command line

In addition to launching the sample application from the console, you can specify the parameters from the command line. Because the template uses parameters, you can launch multiple copies of the application, specifying different parameters for each copy. To launch the application from a Linux command line with the AWS CLI, insert the correct values for your application, as shown in the following command.

aws cloudformation create-stack --stack-name "SecureSampleApplication" \
--template-url https://<URL of your CloudFormation template in S3> \
--parameters ParameterKey=CertificateARN,ParameterValue=<Your ARN> \
ParameterKey=ELBHostName,ParameterValue=<Your Host Name> \
ParameterKey=HostedDomainName,ParameterValue=<Your Domain Name>

When that command exits, it prints the StackID of the stack it created. Save that StackID for later so that you can fetch the stack’s outputs from the command line.

Using a custom cipher specification

If you want to specify your own cipher choices, you can use the same CloudFormation template and change two lines. Let’s assume your information security policies require you to disable any ciphers that use Cipher Block Chaining (CBC) mode encryption. These ciphers are enabled in the ELBSecurityPolicy-TLS-1-2-2017-01 managed policy, so to satisfy that security requirement, you have to modify the CloudFormation template to use your own protocol policy.

In the template, locate the three lines that define the TLSHighPolicy.

- Namespace:  aws:elb:policies:TLSHighPolicy
  OptionName: SSLReferencePolicy
  Value:      ELBSecurityPolicy-TLS-1-2-2017-01

Change the OptionName and Value for the TLSHighPolicy. Instead of referring to the AWS predefined policy by name, explicitly list all the ciphers you want to use. Change those three lines so they look like the following.

- Namespace:  aws:elb:policies:TLSHighPolicy
  OptionName: SSLProtocols
  Value:      Protocol-TLSv1.2,Server-Defined-Cipher-Order,ECDHE-ECDSA-AES256-GCM-SHA384,ECDHE-ECDSA-AES128-GCM-SHA256,ECDHE-RSA-AES256-GCM-SHA384,ECDHE-RSA-AES128-GCM-SHA256

This protocol policy stipulates that the load balancer should:

  • Negotiate connections using only TLS 1.2.
  • Ignore any attempts by the client (for example, the web browser or mobile device) to negotiate a weaker cipher.
  • Accept four specific, strong combinations of cipher and key exchange—and nothing else.

The protocol policy enables only TLS 1.2, strong ciphers that do not use CBC mode encryption, and strong key exchange.

Connect to the secure application

When your CloudFormation stack is in the CREATE_COMPLETE state, you will find three outputs:

  1. The public DNS name of the load balancer
  2. The secure URL that was created
  3. TestOnSSLLabs output that contains a direct link for testing your configuration

You can either enter the secure URL in a web browser (for example, https://secure.example.com/), or click the link in the Outputs to open your sample application and see the demo page. Note that you must use HTTPS—this template has disabled HTTP on port 80 and only listens with HTTPS on port 443.

If you launched your application through the command line, you can view the CloudFormation outputs using the command line as well. You need to know the StackId of the stack you launched and insert it in the following stack-name parameter.

aws cloudformation describe-stacks --stack-name "<ARN of Your Stack>" \
--query 'Stacks[0].Outputs'

Test your application over the Internet with SSLLabs

The easiest way to confirm that the load balancer is using the secure ciphers that we chose is to enter the URL of the load balancer in the form on SSL Labs’ SSL Server Test page. If you do not want the name of your load balancer to be shared publicly on SSLLabs.com, select the Do not show the results on the boards check box. After a minute or two of testing, SSLLabs gives you a detailed report of every cipher it tried and how your load balancer responded. This test simulates many devices that might connect to your website, including mobile phones, desktop web browsers, and software libraries such as Java and OpenSSL. The report tells you whether these clients would be able to connect to your application successfully.

Assuming all went well, you should receive an A grade for the sample application. The biggest contributors to the A grade are:

  • Supporting only TLS 1.2, and not TLS 1.1, TLS 1.0, or SSL 3.0
  • Supporting only strong ciphers such as AES, and not weaker ciphers such as RC4
  • Having an X.509 public key certificate issued correctly by ACM

How to test your application privately with sslscan

You might not be able to reach your Elastic Beanstalk application from the Internet because it might be in a private subnet that is only accessible internally. If you want to test the security of your load balancer’s configuration privately, you can use one of the open source command-line tools such as sslscan. You can install and run the sslscan command on any Amazon EC2 Linux instance or even from your own laptop. Be sure that the Elastic Beanstalk application you want to test will accept an HTTPS connection from your Amazon Linux EC2 instance or from your laptop.

The easiest way to get sslscan on an Amazon Linux EC2 instance is to:

  1. Enable the Extra Packages for Enterprise Linux (EPEL) repository.
  2. Run sudo yum install sslscan.
  3. After the command runs successfully, run sslscan secure.example.com to scan your application for supported ciphers. (These steps are consolidated in the sketch below.)
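
Putting those steps together, a minimal shell sketch might look like the following. It assumes an Amazon Linux instance on which the EPEL repository is already configured but disabled; the exact way to enable EPEL can differ on other distributions, and secure.example.com is the sample hostname from earlier.

# Minimal sketch: install and run sslscan on Amazon Linux (assumes EPEL is available)
sudo yum-config-manager --enable epel   # enable the preconfigured EPEL repository
sudo yum install -y sslscan             # install sslscan from EPEL
sslscan secure.example.com              # report which ciphers the load balancer accepts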

The results are similar to Qualys’ results at SSLLabs.com, but the sslscan tool does not summarize and evaluate the results to assign a grade. It just reports whether your application accepted a connection using the cipher that it tried. You must decide for yourself whether that set of accepted connections represents the right level of security for your application. If you have been asked to build a secure load balancer that meets specific security requirements, the output from sslscan helps to show how the security of your application is configured.

The following sample output shows a small subset of the total output of the sslscan tool.

Accepted TLS12 256 bits AES256-GCM-SHA384
Accepted TLS12 256 bits AES256-SHA256
Accepted TLS12 256 bits AES256-SHA
Rejected TLS12 256 bits CAMELLIA256-SHA
Failed TLS12 256 bits PSK-AES256-CBC-SHA
Rejected TLS12 128 bits ECDHE-RSA-AES128-GCM-SHA256
Rejected TLS12 128 bits ECDHE-ECDSA-AES128-GCM-SHA256
Rejected TLS12 128 bits ECDHE-RSA-AES128-SHA256

An Accepted connection is one that was successful: the load balancer and the client were both able to use the indicated cipher. Failed and Rejected connections are ones in which the load balancer would not accept the level of security that the client was requesting. As a result, the load balancer closed the connection instead of communicating insecurely. The difference between Failed and Rejected is based on whether the TLS connection was closed cleanly.

Comparing the two policies

The main difference between our custom policy and the AWS predefined policy is whether or not CBC ciphers are accepted. The test results with both policies are identical except for the results shown in the following table. The only change in the policy, and therefore the only change in the results, is that the cipher suites using CBC ciphers have been disabled.

Cipher Suite Name            Encryption Algorithm  Key Size (bits)  ELBSecurityPolicy-TLS-1-2-2017-01  Custom Policy
ECDHE-RSA-AES256-GCM-SHA384  AESGCM                256              Enabled                            Enabled
ECDHE-RSA-AES256-SHA384      AES                   256              Enabled                            Disabled
AES256-GCM-SHA384            AESGCM                256              Enabled                            Disabled
AES256-SHA256                AES                   256              Enabled                            Disabled
ECDHE-RSA-AES128-GCM-SHA256  AESGCM                128              Enabled                            Enabled
ECDHE-RSA-AES128-SHA256      AES                   128              Enabled                            Disabled
AES128-GCM-SHA256            AESGCM                128              Enabled                            Disabled
AES128-SHA256                AES                   128              Enabled                            Disabled

Strong ciphers and compatibility

The custom policy described in the previous section prevents legacy devices and older versions of software and web browsers from connecting. The output at SSLLabs provides a list of devices and applications (such as Internet Explorer 10 on Windows 7) that cannot connect to an application that uses the TLS policy. By design, the load balancer refuses connections from a device that is unable to negotiate a connection at the required levels of security. Users of legacy software and devices will see different errors, depending on which device or software they use (for example, Internet Explorer on Windows, Chrome on Android, or a legacy mobile application). The error messages will be some variation of “connection failed” because the load balancer closes the connection without responding to the user’s request. This behavior can be problematic for websites that must be accessible to older desktop operating systems or older mobile devices.

If you need to support legacy devices, adjust the TLSHighPolicy in the CloudFormation template. For example, if you need to support web browsers on Windows 7 systems (and you cannot enable TLS 1.2 support on those systems), you can change the policy to enable TLS 1.1. To do this, change the value of SSLReferencePolicy to ELBSecurityPolicy-TLS-1-1-2017-01.
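
As a sketch only: assuming the TLSHighPolicy is restored to its original form, which references a predefined policy by name through the SSLReferencePolicy option (the form this section earlier told you to replace), the reverted settings might look like the following.

# Hedged sketch: reference a predefined policy that also permits TLS 1.1
- Namespace: aws:elb:policies:TLSHighPolicy
  OptionName: SSLReferencePolicy
  Value: ELBSecurityPolicy-TLS-1-1-2017-01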

Enabling legacy protocol versions such as TLS version 1.1 will allow older devices to connect, but then the application may not be compliant with the information security policies or business requirements that require strong ciphers.

Conclusion

Using Elastic Beanstalk, Route 53, and ACM can help you launch secure applications that are designed to not only protect data but also meet regulatory compliance requirements and your information security policies. The TLS policy, either custom or predefined, allows you to control exactly which cryptographic ciphers are enabled on your Elastic Load Balancer. The TLS test results provide you with clear evidence you can use to demonstrate compliance with security policies or requirements. The parameters in this post’s CloudFormation template also make it adaptable and reusable for multiple applications. You can use the same template to launch different applications on different secure URLs by simply changing the parameters that you pass to the template.

If you have comments about this post, submit them in the “Comments” section below. If you have questions about or issues implementing this solution, start a new thread on the CloudFormation forum.

– Paco

New – AWS Resource Tagging API

Post Syndicated from Jeff Barr original https://aws.amazon.com/blogs/aws/new-aws-resource-tagging-api/

AWS customers frequently use tags to organize their Amazon EC2 instances, Amazon EBS volumes, Amazon S3 buckets, and other resources. Over the past couple of years we have been working to make tagging more useful and more powerful. For example, we have added support for tagging during Auto Scaling, the ability to use up to 50 tags per resource, console-based support for the creation of resources that share a common tag (also known as resource groups), and the option to use Config Rules to enforce the use of tags.

As customers grow to the point where they are managing thousands of resources, each with up to 50 tags, they have been looking to us for additional tooling and options to simplify their work. Today I am happy to announce that our new Resource Tagging API is now available. You can use these APIs from the AWS SDKs or via the AWS Command Line Interface (CLI). You now have programmatic access to the same resource group operations that had been accessible only from the AWS Management Console.

Recap: Console-Based Resource Group Operations
Before I get into the specifics of the new API functions, I thought you would appreciate a fresh look at the console-based grouping and tagging model. I already have the ability to find and then tag AWS resources using a search that spans one or more regions. For example, I can select a long list of regions and then search them for my EC2 instances like this:

After I locate and select all of the desired resources, I can add a new tag key by clicking Create a new tag key and entering the desired tag key:

Then I enter a value for each instance (the new ProjectCode column):

Then I can create a resource group that contains all of the resources that are tagged with P100:

After I have created the resource group, I can locate all of the resources by clicking on the Resource Groups menu:

To learn more about this feature, read Resource Groups and Tagging for AWS.

New API for Resource Tagging
The API that we are announcing today gives you the power to tag, untag, and locate resources using tags, all from your own code. With these new API functions, you are now able to operate on multiple resource types with a single set of functions.

Here are the new functions:

TagResources – Add tags to up to 20 resources at a time.

UntagResources – Remove tags from up to 20 resources at a time.

GetResources – Get a list of resources, with optional filtering by tags and/or resource types.

GetTagKeys – Get a list of all of the unique tag keys used in your account.

GetTagValues – Get all tag values for a specified tag key.
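
To make these concrete, here is a hedged sketch of calling TagResources and GetResources from the AWS CLI; the ARNs and the Project tag are placeholder values.

# Hedged sketch: tag two resources, then find everything carrying that tag
aws resourcegroupstaggingapi tag-resources \
    --resource-arn-list arn:aws:s3:::example-bucket arn:aws:ec2:us-east-1:111122223333:instance/i-0abcd1234example \
    --tags Project=P100

aws resourcegroupstaggingapi get-resources \
    --tag-filters Key=Project,Values=P100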

These functions support the following AWS services and resource types:

AWS Service: Resource Types
Amazon CloudFront: Distribution.
Amazon EC2: AMI, Customer Gateway, DHCP Option, EBS Volume, Instance, Internet Gateway, Network ACL, Network Interface, Reserved Instance, Reserved Instance Listing, Route Table, Security Group – EC2 Classic, Security Group – VPC, Snapshot, Spot Batch, Spot Instance Request, Spot Instance, Subnet, Virtual Private Gateway, VPC, VPN Connection.
Amazon ElastiCache: Cluster, Snapshot.
Amazon Elastic File System: Filesystem.
Amazon Elasticsearch Service: Domain.
Amazon EMR: Cluster.
Amazon Glacier: Vault.
Amazon Inspector: Assessment.
Amazon Kinesis: Stream.
Amazon Machine Learning: Batch Prediction, Data Source, Evaluation, ML Model.
Amazon Redshift: Cluster.
Amazon Relational Database Service: DB Instance, DB Option Group, DB Parameter Group, DB Security Group, DB Snapshot, DB Subnet Group, Event Subscription, Read Replica, Reserved DB Instance.
Amazon Route 53: Domain, Health Check, Hosted Zone.
Amazon S3: Bucket.
Amazon WorkSpaces: WorkSpace.
AWS Certificate Manager: Certificate.
AWS CloudHSM: HSM.
AWS Directory Service: Directory.
AWS Storage Gateway: Gateway, Virtual Tape, Volume.
Elastic Load Balancing: Load Balancer, Target Group.

Things to Know
Here are a couple of things to keep in mind when you build code or write scripts that use the new API functions or the CLI equivalents:

Compatibility – The older, service-specific functions remain available and you can continue to use them.

Write Permission – The new tagging API adds another layer of permission on top of existing policies that are specific to a single AWS service. For example, you will need to have access to tag:TagResources and ec2:CreateTags in order to add a tag to an EC2 instance.

Read Permission – You will need to have access to tag:GetResources, tag:GetTagKeys, and tag:GetTagValues in order to call functions that access tags and tag values.

Pricing – There is no charge for the use of these functions or for tags.

Available Now
The new functions are supported by the latest versions of the AWS SDKs. You can use them to tag and access resources in all commercial AWS regions.

Jeff;

 

How to Help Protect Dynamic Web Applications Against DDoS Attacks by Using Amazon CloudFront and Amazon Route 53

Post Syndicated from Holly Willey original https://aws.amazon.com/blogs/security/how-to-protect-dynamic-web-applications-against-ddos-attacks-by-using-amazon-cloudfront-and-amazon-route-53/

Using a content delivery network (CDN) such as Amazon CloudFront to cache and serve static text and images or downloadable objects such as media files and documents is a common strategy to improve webpage load times, reduce network bandwidth costs, lessen the load on web servers, and mitigate distributed denial of service (DDoS) attacks. AWS WAF is a web application firewall that can be deployed on CloudFront to help protect your application against DDoS attacks by giving you control over which traffic to allow or block by defining security rules. When users access your application, the Domain Name System (DNS) translates human-readable domain names (for example, www.example.com) to machine-readable IP addresses (for example, 192.0.2.44). A DNS service, such as Amazon Route 53, can effectively connect users’ requests to a CloudFront distribution that proxies requests for dynamic content to the infrastructure hosting your application’s endpoints.

In this blog post, I show you how to deploy CloudFront with AWS WAF and Route 53 to help protect dynamic web applications (with dynamic content such as a response to user input) against DDoS attacks. The steps shown in this post are key to implementing the overall approach described in AWS Best Practices for DDoS Resiliency and enable the built-in, managed DDoS protection service, AWS Shield.

Background

AWS hosts CloudFront and Route 53 services on a distributed network of proxy servers in data centers throughout the world called edge locations. Using the global Amazon network of edge locations for application delivery and DNS service plays an important part in building a comprehensive defense against DDoS attacks for your dynamic web applications. These web applications can benefit from the increased security and availability provided by CloudFront and Route 53, while also improving end users’ experience through reduced latency.

The following screenshot of an Amazon.com webpage shows how static and dynamic content can compose a dynamic web application that is delivered via HTTPS protocol for the encryption of user page requests as well as the pages that are returned by a web server.

Screenshot of an Amazon.com webpage with static and dynamic content

The following map shows the global Amazon network of edge locations available to serve static content and proxy requests for dynamic content back to the origin as of the writing of this blog post. For the latest list of edge locations, see AWS Global Infrastructure.

Map showing Amazon edge locations

How AWS Shield, CloudFront, and Route 53 work to help protect against DDoS attacks

To help keep your dynamic web applications available when they are under DDoS attack, the steps in this post enable AWS Shield Standard by configuring your applications behind CloudFront and Route 53. AWS Shield Standard protects your resources from common, frequently occurring network and transport layer DDoS attacks. Attack traffic can be geographically isolated and absorbed using the capacity in edge locations close to the source. Additionally, you can configure geographical restrictions to help block attacks originating from specific countries.

The request-routing technology in CloudFront connects each client to the nearest edge location, as determined by continuously updated latency measurements. HTTP and HTTPS requests sent to CloudFront can be monitored, and access to your application resources can be controlled at edge locations using AWS WAF. Based on conditions that you specify in AWS WAF, such as the IP addresses that requests originate from or the values of query strings, traffic can be allowed, blocked, or allowed and counted for further investigation or remediation. The following diagram shows how static and dynamic web application content can originate from endpoint resources within AWS or your corporate data center. For more details, see How CloudFront Delivers Content and How CloudFront Works with Regional Edge Caches.

Route 53 DNS requests and subsequent application traffic routed through CloudFront are inspected inline. Always-on monitoring, anomaly detection, and mitigation against common infrastructure DDoS attacks such as SYN/ACK floods, UDP floods, and reflection attacks are built into both Route 53 and CloudFront. For a review of common DDoS attack vectors, see How to Help Prepare for DDoS Attacks by Reducing Your Attack Surface. When the SYN flood attack threshold is exceeded, SYN cookies are activated to avoid dropping connections from legitimate clients. Deterministic packet filtering drops malformed TCP packets and invalid DNS requests, only allowing traffic to pass that is valid for the service. Heuristics-based anomaly detection evaluates attributes such as type, source, and composition of traffic. Traffic is scored across many dimensions, and only the most suspicious traffic is dropped. This method allows you to avoid false positives while protecting application availability.

Route 53 is also designed to withstand DNS query floods, which are real DNS requests that can continue for hours and attempt to exhaust DNS server resources. Route 53 uses shuffle sharding and anycast striping to spread DNS traffic across edge locations and help protect the availability of the service.

The next four sections provide guidance about how to deploy CloudFront, Route 53, AWS WAF, and, optionally, AWS Shield Advanced.

Deploy CloudFront

To take advantage of application delivery with DDoS mitigations at the edge, start by creating a CloudFront distribution and configuring origins:

  1. Sign in to the AWS Management Console and open the CloudFront console.
  2. Choose Create Distribution.
  3. On the first page of the Create Distribution Wizard, in the Web section, choose Get Started.
  4. Specify origin settings for the distribution. The following screenshot of the CloudFront console shows an example CloudFront distribution configured with an Elastic Load Balancing load balancer origin, as shown in the previous diagram. I have configured this example to set the Origin SSL Protocols to use TLSv1.2 and the Origin Protocol Policy to HTTP Only. For more information about creating an HTTPS listener for your ELB load balancer and requesting a certificate from AWS Certificate Manager (ACM), see Getting Started with Elastic Load Balancing, Supported Regions, and Requiring HTTPS for Communication Between CloudFront and Your Custom Origin.
  5. Specify cache behavior settings for the distribution, as shown in the following screenshot. You can configure each URL path pattern with a set of associated cache behaviors. For dynamic web applications, set the Minimum TTL to 0 so that CloudFront will make a GET request with an If-Modified-Since header back to the origin. When CloudFront proxies traffic to the origin from edge locations and back, multiple concurrent requests for the same object are collapsed into a single request. The request is sent over a persistent connection from the edge location to the region over networks monitored by AWS. The use of a large initial TCP window size in CloudFront maximizes the available bandwidth, and TCP Fast Open (TFO) reduces latency.
  6. To ensure that all traffic to CloudFront is encrypted and to enable SSL termination from clients at global edge locations, specify Redirect HTTP to HTTPS for Viewer Protocol Policy. Moving SSL termination to CloudFront offloads computationally expensive SSL negotiation, helps mitigate SSL abuse, and reduces latency with the use of OCSP stapling and session tickets. For more information about options for serving HTTPS requests, see Choosing How CloudFront Serves HTTPS Requests. For dynamic web applications, set Allowed HTTP Methods to include all methods, set Forward Headers to All, and for Query String Forwarding and Caching, choose Forward all, cache based on all.
  7. Specify distribution settings for the distribution, as shown in the following screenshot. Enter your domain names in the Alternate Domain Names box and choose Custom SSL Certificate.
  8. Choose Create Distribution. Note the x.cloudfront.net Domain Name of the distribution; you can confirm its deployment status from the CLI, as sketched below. In the next section, you will configure Route 53 to route traffic to this CloudFront distribution domain name.
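
A distribution takes a little while to deploy to all edge locations. If you want to confirm its status before testing, a hedged CLI sketch follows; the distribution ID shown is a placeholder.

# Hedged sketch: check whether the distribution has finished deploying
aws cloudfront get-distribution --id EDFDVBD6EXAMPLE \
    --query 'Distribution.{Status:Status,DomainName:DomainName}'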

Configure Route 53

When you created a web distribution in the previous section, CloudFront assigned a domain name to the distribution, such as d111111abcdef8.cloudfront.net. You can use this domain name in the URLs for your content, such as: http://d111111abcdef8.cloudfront.net/logo.jpg.

Alternatively, you might prefer to use your own domain name in URLs, such as: http://example.com/logo.jpg. You can accomplish this by creating a Route 53 alias resource record set that routes dynamic web application traffic to your CloudFront distribution by using your domain name. Alias resource record sets are virtual records specific to Route 53 that map a domain name to your CloudFront distribution. Alias resource record sets are similar to CNAME records except there is no charge for DNS queries to Route 53 alias resource record sets mapped to AWS services. Alias resource record sets are also not visible to resolvers, and they can be created for the root domain (zone apex) as well as subdomains.

A hosted zone, similar to a DNS zone file, is a collection of records that belongs to a single parent domain name. Each hosted zone has four nonoverlapping name servers in a delegation set. If a DNS query is dropped, the client automatically retries the next name server. If you have not already registered a domain name and have not configured a hosted zone for your domain, complete these two prerequisite steps before proceeding:

  1. Register your domain name (with Route 53 or another registrar).
  2. Create a public hosted zone for the domain in Route 53.

After you have registered your domain name and configured your public hosted zone, follow these steps to create an alias resource record set (a CLI sketch follows the list):

  1. Sign in to the AWS Management Console and open the Route 53 console.
  2. In the navigation pane, choose Hosted Zones.
  3. Choose the name of the hosted zone for the domain that you want to use to route traffic to your CloudFront distribution.
  4. Choose Create Record Set.
  5. Specify the following values:
    • Name – Type the domain name that you want to use to route traffic to your CloudFront distribution. The default value is the name of the hosted zone. For example, if the name of the hosted zone is example.com and you want to use acme.example.com to route traffic to your distribution, type acme.
    • Type – Choose A – IPv4 address. If IPv6 is enabled for the distribution and you are creating a second resource record set, choose AAAA – IPv6 address.
    • Alias – Choose Yes.
    • Alias Target – In the CloudFront distributions section, choose the name that CloudFront assigned to the distribution when you created it.
    • Routing Policy – Accept the default value of Simple.
    • Evaluate Target Health – Accept the default value of No.
  6. Choose Create.
  7. If IPv6 is enabled for the distribution, repeat Steps 4 through 6. Specify the same settings except for the Type field, as explained in Step 5.
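
If you manage DNS changes from scripts, the same alias record can be created with change-resource-record-sets. This is a hedged sketch: the hosted zone ID, record name, and distribution domain name are placeholders, while Z2FDTNDATAQYW2 is the fixed hosted zone ID that Route 53 uses for all CloudFront alias targets.

# Hedged sketch: create the alias record from the AWS CLI
aws route53 change-resource-record-sets \
    --hosted-zone-id ZEXAMPLE12345 \
    --change-batch '{
      "Changes": [{
        "Action": "CREATE",
        "ResourceRecordSet": {
          "Name": "acme.example.com",
          "Type": "A",
          "AliasTarget": {
            "HostedZoneId": "Z2FDTNDATAQYW2",
            "DNSName": "d111111abcdef8.cloudfront.net",
            "EvaluateTargetHealth": false
          }
        }
      }]
    }'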

The following screenshot of the Route 53 console shows a Route 53 alias resource record set that is configured to map a domain name to a CloudFront distribution.

If your dynamic web application requires geo redundancy, you can use latency-based routing in Route 53 to run origin servers in different AWS regions. Route 53 is integrated with CloudFront to collect latency measurements from each edge location. With Route 53 latency-based routing, each CloudFront edge location goes to the region with the lowest latency for the origin fetch.

Enable AWS WAF

AWS WAF is a web application firewall that helps detect and mitigate web application layer DDoS attacks by inspecting traffic inline. Application layer DDoS attacks use well-formed but malicious requests to evade mitigation and consume application resources. You can define custom security rules, contained in a web ACL, that block attacking traffic. After you define web ACLs, you can apply them to CloudFront distributions; the rules in a web ACL are evaluated in the priority order you specified when you configured them. Real-time metrics and sampled web requests are provided for each web ACL.

You can configure AWS WAF whitelisting or blacklisting in conjunction with CloudFront geo restriction to prevent users in specific geographic locations from accessing your application. The AWS WAF API supports security automation such as blacklisting IP addresses that exceed request limits, which can be useful for mitigating HTTP flood attacks. Use the AWS WAF Security Automations Implementation Guide to implement rate-based blacklisting.
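
At its lowest level, that automation simply updates an IP set through the AWS WAF API. As a hedged sketch of that one building block (the IP set ID and address are placeholders, and every mutating AWS WAF call needs a fresh change token):

# Hedged sketch: add an attacking IP address to an existing AWS WAF IP set
TOKEN=$(aws waf get-change-token --query ChangeToken --output text)
aws waf update-ip-set \
    --ip-set-id "example-ip-set-id" \
    --change-token "$TOKEN" \
    --updates 'Action=INSERT,IPSetDescriptor={Type=IPV4,Value=192.0.2.44/32}'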

The following diagram shows how (a) the flow of CloudFront access log files to an Amazon S3 bucket (b) provides the source data for the Lambda log parser function (c) to identify HTTP flood traffic and update AWS WAF web ACLs. As CloudFront receives requests on behalf of your dynamic web application, it sends access logs to an S3 bucket, triggering the Lambda log parser. The Lambda function parses CloudFront access logs to identify suspicious behavior, such as an unusual number of requests or errors, and it automatically updates your AWS WAF rules to block subsequent requests from the IP addresses in question for a predefined amount of time that you specify.

Diagram of the process

In addition to automated rate-based blacklisting to help protect against HTTP flood attacks, prebuilt AWS CloudFormation templates are available to simplify the configuration of AWS WAF for a proactive application-layer security defense. The following diagram provides an overview of CloudFormation template input into the creation of the CommonAttackProtection stack that includes AWS WAF web ACLs used to block, allow, or count requests that meet the criteria defined in each rule.

Diagram of CloudFormation template input into the creation of the CommonAttackProtection stack

To implement these application layer protections, follow the steps in Tutorial: Quickly Setting Up AWS WAF Protection Against Common Attacks. After you have created your AWS WAF web ACLs, you can assign them to your CloudFront distribution by updating the settings.

  1. Sign in to the AWS Management Console and open the CloudFront console.
  2. Choose the link under the ID column for your CloudFront distribution.
  3. Choose Edit on the General tab.
  4. Choose your AWS WAF Web ACL from the drop-down list.
  5. Choose Yes, Edit.

Activate AWS Shield Advanced (optional)

Deploying CloudFront, Route 53, and AWS WAF as described in this post enables the built-in DDoS protections for your dynamic web applications that are included with AWS Shield Standard. (There is no upfront cost or charge for AWS Shield Standard beyond the normal pricing for CloudFront, Route 53, and AWS WAF.) AWS Shield Standard is designed to meet the needs of many dynamic web applications.

For dynamic web applications that have a high risk or history of frequent, complex, or high volume DDoS attacks, AWS Shield Advanced provides additional DDoS mitigation capacity, attack visibility, cost protection, and access to the AWS DDoS Response Team (DRT). For more information about AWS Shield Advanced pricing, see AWS Shield Advanced pricing. To activate advanced protection services, follow these steps:

  1. Sign in to the AWS Management Console and open the AWS WAF console.
  2. If this is your first time signing in to the AWS WAF console, choose Get started with AWS Shield Advanced. Otherwise, choose Protected resources.
  3. Choose Activate AWS Shield Advanced.
  4. Choose the resource type and resource to protect.
  5. For Name, enter a friendly name that will help you identify the AWS resources that are protected. For example, My CloudFront AWS Shield Advanced distributions.
  6. (Optional) For Web DDoS attack, select Enable. You will be prompted to associate an existing web ACL with these resources, or create a new ACL if you don’t have any yet.
  7. Choose Add DDoS protection.

Summary

In this blog post, I outline the steps to deploy CloudFront and configure Route 53 in front of your dynamic web application to leverage the global Amazon network of edge locations for DDoS resiliency. The post also provides guidance about enabling AWS WAF for application layer traffic monitoring and automated rules creation to block malicious traffic. I also cover the optional steps to activate AWS Shield Advanced, which helps build a more comprehensive defense against DDoS attacks for your dynamic web applications.

If you have comments about this post, submit them in the “Comments” section below. If you have questions about or issues implementing this solution, please open a new thread on the AWS WAF forum.

– Holly

AWS Week in Review – March 6, 2017

Post Syndicated from Jeff Barr original https://aws.amazon.com/blogs/aws/aws-week-in-review-march-6-2017/

This edition includes all of our announcements, content from all of our blogs, and as much community-generated AWS content as I had time for!

Monday, March 6

Tuesday, March 7

Wednesday, March 8

Thursday, March 9

Friday, March 10

Saturday, March 11

Sunday, March 12

Jeff;

 

The Top 20 Most Viewed AWS IAM Documentation Pages in 2016

Post Syndicated from Dave Bishop original https://aws.amazon.com/blogs/security/the-top-20-most-viewed-aws-iam-documentation-pages-in-2016/

The following 20 pages were the most viewed AWS Identity and Access Management (IAM) documentation pages in 2016. I have included a brief description with each link to give you a clearer idea of what each page covers. Use this list to see what other people have been viewing and perhaps to pique your own interest about a topic you’ve been meaning to research.

  1. What Is IAM?
    IAM is a web service that helps you securely control access to AWS resources for your users. You use IAM to control who can use your AWS resources (authentication) and what resources they can use and in what ways (authorization).
  2. Creating an IAM User in Your AWS Account
    You can create one or more IAM users in your AWS account. You might create an IAM user when someone joins your organization, or when you have a new application that needs to make API calls to AWS.
  3. The IAM Console and the Sign-in Page
    This page provides information about the IAM-enabled AWS Management Console sign-in page and explains how to create a unique sign-in URL for your account.
  4. How Users Sign In to Your Account
    After you create IAM users and passwords for each, users can sign in to the AWS Management Console for your AWS account with a special URL.
  5. IAM Best Practices
    To help secure your AWS resources, follow these recommendations for IAM.
  6. IAM Policy Elements Reference
    This page describes the elements that you can use in an IAM policy. The elements are listed here in the general order you use them in a policy.
  7. Managing Access Keys for IAM Users
    Users need their own access keys to make programmatic calls to AWS from the AWS Command Line Interface (AWS CLI), Tools for Windows PowerShell, the AWS SDKs, or direct HTTP calls using the APIs for individual AWS services. To fill this need, you can create, modify, view, or rotate access keys (access key IDs and secret access keys) for IAM users.
  8. Working with Server Certificates
    Some AWS services can use server certificates that you manage with IAM or AWS Certificate Manager (ACM). In many cases, we recommend that you use ACM to provision, manage, and deploy your SSL/TLS certificates.
  9. Your AWS Account ID and Its Alias
    Learn how to find your AWS account ID number and its alias.
  10. Overview of IAM Policies
    This page provides an overview of IAM policies. A policy is a document that formally states one or more permissions.
  11. Using Multi-Factor Authentication (MFA) in AWS
    For increased security, we recommend that you configure MFA to help protect your AWS resources. MFA adds extra security because it requires users to enter a unique authentication code from an approved authentication device or SMS text message when they access AWS websites or services.
  12. Example Policies for Administering AWS Resources
    This page shows some examples of policies that control access to resources in AWS services.
  13. Using an IAM Role to Grant Permissions to Applications Running on Amazon EC2 Instances
    Use an IAM role to manage temporary credentials for applications that run on an EC2 instance. When you use a role, you do not have to distribute long-term credentials to an EC2 instance. Instead, the role supplies temporary permissions that applications can use when they make calls to other AWS resources.
  14. IAM Roles
    An IAM role is similar to a user, in that it is an AWS identity with permission policies that determine what the identity can and cannot do in AWS. However, instead of being uniquely associated with one person, a role is intended to be assumable by anyone who needs it.
  15. Enabling a Virtual MFA Device
    A virtual MFA device uses a software application to generate a six-digit authentication code that is compatible with the time-based one-time password (TOTP) standard, as described in RFC 6238. The app can run on mobile hardware devices, including smartphones.
  16. Creating Your First IAM Admin User and Group
    This procedure describes how to create an IAM group named Administrators, grant the group full permissions for all AWS services, and then create an administrative IAM user for yourself by adding the user to the Administrators group.
  17. Using Instance Profiles
    An instance profile is a container for an IAM role that you can use to pass role information to an EC2 instance when the instance starts.
  18. Working with Server Certificates
    After you obtain or create a server certificate, you upload it to IAM so that other AWS services can use it. You might also need to get certificate information, rename or delete a certificate, or perform other management tasks.
  19. Temporary Security Credentials
    You can use the AWS Security Token Service (AWS STS) to create and provide trusted users with temporary security credentials that can control access to your AWS resources. Temporary security credentials work almost identically to the long-term access key credentials that your IAM users can use.
  20. Setting an Account Password Policy for IAM Users
    You can set a password policy on your AWS account to specify complexity requirements and mandatory rotation periods for your IAM users’ passwords. (A brief CLI sketch follows this list.)
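
As a quick illustration of item 20, a hedged sketch of setting a password policy from the AWS CLI might look like this; the specific requirements shown are arbitrary examples rather than recommendations from the documentation.

# Hedged sketch: set an account password policy from the AWS CLI
aws iam update-account-password-policy \
    --minimum-password-length 14 \
    --require-symbols \
    --require-numbers \
    --max-password-age 90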

In the “Comments” section below, let us know if you would like to see anything on these or other IAM documentation pages expanded or updated to make the documentation more useful for you.

– Dave

Now Open – AWS London Region

Post Syndicated from Jeff Barr original https://aws.amazon.com/blogs/aws/now-open-aws-london-region/

Last week we launched our 15th AWS Region and today we are launching our 16th. We have expanded the AWS footprint into the United Kingdom with a new Region in London, our third in Europe. AWS customers can use the new London Region to better serve end-users in the United Kingdom and can also use it to store data in the UK.

The Details
The new London Region provides a broad suite of AWS services including Amazon CloudWatch, Amazon DynamoDB, Amazon ECS, Amazon ElastiCache, Amazon Elastic Block Store (EBS), Amazon Elastic Compute Cloud (EC2), EC2 Container Registry, Amazon EMR, Amazon Glacier, Amazon Kinesis Streams, Amazon Redshift, Amazon Relational Database Service (RDS), Amazon Simple Notification Service (SNS), Amazon Simple Queue Service (SQS), Amazon Simple Storage Service (S3), Amazon Simple Workflow Service (SWF), Amazon Virtual Private Cloud, Auto Scaling, AWS Certificate Manager (ACM), AWS CloudFormation, AWS CloudTrail, AWS CodeDeploy, AWS Config, AWS Database Migration Service, AWS Elastic Beanstalk, AWS Snowball, AWS Snowmobile, AWS Key Management Service (KMS), AWS Marketplace, AWS OpsWorks, AWS Personal Health Dashboard, AWS Shield Standard, AWS Storage Gateway, AWS Support API, Elastic Load Balancing, VM Import/Export, Amazon CloudFront, Amazon Route 53, AWS WAF, AWS Trusted Advisor, and AWS Direct Connect (follow the links for pricing and other information).

The London Region supports all sizes of C4, D2, M4, T2, and X1 instances.

Check out the AWS Global Infrastructure page to learn more about current and future AWS Regions.

From Our Customers
Many AWS customers are getting ready to use this new Region. Here’s a very small sample:

Trainline is Europe’s number one independent rail ticket retailer. Every day more than 100,000 people travel using tickets bought from Trainline. Here’s what Mark Holt (CTO of Trainline) shared with us:

We recently completed the migration of 100 percent of our eCommerce infrastructure to AWS and have seen awesome results: improved security, 60 percent less downtime, significant cost savings and incredible improvements in agility. From extensive testing, we know that 0.3s of latency is worth more than 8 million pounds and so, while AWS connectivity is already blazingly fast, we expect that serving our UK customers from UK datacenters should lead to significant top-line benefits.

Kainos Evolve Electronic Medical Records (EMR) automates the creation, capture and handling of medical case notes and operational documents and records, allowing healthcare providers to deliver better patient safety and quality of care for several leading NHS Foundation Trusts and market leading healthcare technology companies.

Travis Perkins, the largest supplier of building materials in the UK, is implementing the biggest systems and business change in its history including the migration of its datacenters to AWS.

Just Eat is the world’s leading marketplace for online food delivery. Using AWS, JustEat has been able to experiment faster and reduce the time to roll out new feature updates.

OakNorth, a new bank focused on lending between £1m-£20m to entrepreneurs and growth businesses, became the UK’s first cloud-based bank in May after several months of working with AWS to drive the development forward with the regulator.

Partners
I’m happy to report that we are already working with a wide variety of consulting, technology, managed service, and Direct Connect partners in the United Kingdom. Here’s a partial list:

  • AWS Premier Consulting Partners – Accenture, Claranet, Cloudreach, CSC, Datapipe, KCOM, Rackspace, and Slalom.
  • AWS Consulting Partners – Attenda, Contino, Deloitte, KPMG, LayerV, Lemongrass, Perfect Image, and Version 1.
  • AWS Technology Partners – Splunk, Sage, Sophos, Trend Micro, and Zerolight.
  • AWS Managed Service Partners – Claranet, Cloudreach, KCOM, and Rackspace.
  • AWS Direct Connect Partners – AT&T, BT, Hutchison Global Communications, Level 3, Redcentric, and Vodafone.

Here are a few examples of what our partners are working on:

KCOM is a professional services provider offering consultancy, architecture, project delivery and managed service capabilities to large UK-based enterprise businesses. The scalability and flexibility of AWS gives them a significant competitive advantage with their enterprise and public sector customers. The new Region will allow KCOM to build innovative solutions for their public sector clients while meeting local regulatory requirements.

Splunk is a member of the AWS Partner Network and a market leader in analyzing machine data to deliver operational intelligence for security, IT, and the business. They use cloud computing and big data analytics to help their customers to embrace digital transformation and continuous innovation. The new Region will provide even more companies with real-time visibility into the operation of their systems and infrastructure.

Redcentric is a NHS Digital-approved N3 Commercial Aggregator. Their work allows health and care providers such as NHS acute, emergency and mental trusts, clinical commissioning groups (CCGs), and the ISV community to connect securely to AWS. The London Region will allow health and care providers to deliver new digital services and to improve outcomes for citizens and patients.

Visit the AWS Partner Network page to read some case studies and to learn how to join.

Compliance & Connectivity
Every AWS Region is designed and built to meet rigorous compliance standards including ISO 27001, ISO 9001, ISO 27017, ISO 27018, SOC 1, SOC 2, SOC 3, PCI DSS Level 1, and many more. Our Cloud Compliance page includes information about these standards, along with those that are specific to the UK, including Cyber Essentials Plus.

The UK Government recognizes that local datacenters from hyperscale public cloud providers can deliver secure solutions for OFFICIAL workloads. In order to meet the special security needs of public sector organizations in the UK with respect to OFFICIAL workloads, we have worked with our Direct Connect Partners to make sure that obligations for connectivity to the Public Services Network (PSN) and N3 can be met.

Use it Today
The London Region is open for business now and you can start using it today! If you need additional information about this Region, please feel free to contact our UK team at [email protected].

Jeff;

Now Open AWS Canada (Central) Region

Post Syndicated from Jeff Barr original https://aws.amazon.com/blogs/aws/now-open-aws-canada-central-region/

We are growing the AWS footprint once again. Our new Canada (Central) Region is now available and you can start using it today. AWS customers in Canada and the northern parts of the United States have fast, low-latency access to the suite of AWS infrastructure services.

The Details
The new Canada (Central) Region supports Amazon Elastic Compute Cloud (EC2) and related services including Amazon Elastic Block Store (EBS), Amazon Virtual Private Cloud, Auto Scaling, Elastic Load Balancing, NAT Gateway, Spot Instances, and Dedicated Hosts.

It also supports Amazon Aurora, AWS Certificate Manager (ACM), AWS CloudFormation, Amazon CloudFront, AWS CloudHSM, AWS CloudTrail, Amazon CloudWatch, AWS CodeDeploy, AWS Config, AWS Database Migration Service, AWS Direct Connect, Amazon DynamoDB, Amazon ECS, EC2 Container Registry, AWS Elastic Beanstalk, Amazon EMR, Amazon ElastiCache, Amazon Glacier, AWS Identity and Access Management (IAM), AWS Snowball, AWS Key Management Service (KMS), Amazon Kinesis, AWS Marketplace, Amazon Redshift, Amazon Relational Database Service (RDS), Amazon Route 53, AWS Shield Standard, Amazon Simple Storage Service (S3), Amazon Simple Notification Service (SNS), Amazon Simple Queue Service (SQS), Amazon Simple Workflow Service (SWF), AWS Storage Gateway, AWS Trusted Advisor, VM Import/Export, and AWS WAF.

The Region supports all sizes of C4, D2, M4, T2, and X1 instances.

As part of our on-going focus on making cloud computing available to you in an environmentally friendly fashion, AWS data centers in Canada draw power from a grid that generates 99% of its electricity using hydropower (read about AWS Sustainability to learn more).

Well Connected
After receiving a lot of positive feedback on the network latency metrics that I shared when we launched the AWS Region in Ohio, I am happy to have a new set to share as part of today’s launch (these times represent a lower bound on latency and may change over time).

The first set of metrics are to other Canadian cities:

  • 9 ms to Toronto.
  • 14 ms to Ottawa.
  • 47 ms to Calgary.
  • 49 ms to Edmonton.
  • 60 ms to Vancouver.

The second set are to locations in the US:

  • 9 ms to New York.
  • 19 ms to Chicago.
  • 16 ms to US East (Northern Virginia).
  • 27 ms to US East (Ohio).
  • 75 ms to US West (Oregon).

Canada is also home to CloudFront edge locations in Toronto, Ontario, and Montreal, Quebec.

And Canada Makes 15
Today’s launch brings our global footprint to 15 Regions and 40 Availability Zones, with seven more Availability Zones and three more Regions coming online through the next year. As a reminder, each Region is a physical location where we have two or more Availability Zones or AZs. Each Availability Zone, in turn, consists of one or more data centers, each with redundant power, networking, and connectivity, all housed in separate facilities. Having two or more AZs in each Region gives you the ability to run applications that are more highly available, fault tolerant, and durable than would be the case if you were limited to a single AZ.

For more information about current and future AWS Regions, take a look at the AWS Global Infrastructure page.

Jeff;

In Case You Missed These: AWS Security Blog Posts from September and October

Post Syndicated from Craig Liebendorfer original https://aws.amazon.com/blogs/security/in-case-you-missed-these-aws-security-blog-posts-from-september-and-october/

In case you missed any AWS Security Blog posts from September and October, they are summarized and linked to below. The posts are shown in reverse chronological order (most recent first), and the subject matter ranges from enabling multi-factor authentication on your AWS API calls to using Amazon CloudWatch Events to monitor application health.

October

October 30: Register for and Attend This November 10 Webinar—Introduction to Three AWS Security Services
As part of the AWS Webinar Series, AWS will present Introduction to Three AWS Security Services on Thursday, November 10. This webinar will start at 10:30 A.M. and end at 11:30 A.M. Pacific Time. AWS Solutions Architect Pierre Liddle shows how AWS Identity and Access Management (IAM), AWS Config Rules, and AWS CloudTrail can help you maintain control of your environment. In a live demo, Pierre shows you how to track changes, monitor compliance, and keep an audit record of API requests.

October 26: How to Enable MFA Protection on Your AWS API Calls
Multi-factor authentication (MFA) provides an additional layer of security for sensitive API calls, such as terminating Amazon EC2 instances or deleting important objects stored in an Amazon S3 bucket. In some cases, you may want to require users to authenticate with an MFA code before performing specific API requests, and by using AWS Identity and Access Management (IAM) policies, you can specify which API actions a user is allowed to access. In this blog post, I show how to enable an MFA device for an IAM user and author IAM policies that require MFA to perform certain API actions such as EC2’s TerminateInstances.

October 19: Reserved Seating Now Open for AWS re:Invent 2016 Sessions
Reserved seating is new to re:Invent this year and is now open! Some important things you should know about reserved seating:

  1. All sessions have a predetermined number of seats available and must be reserved ahead of time.
  2. If a session is full, you can join a waitlist.
  3. Waitlisted attendees will receive a seat in the order in which they were added to the waitlist and will be notified via email if and when a seat is reserved.
  4. Only one session can be reserved for any given time slot (in other words, you cannot double-book a time slot on your re:Invent calendar).
  5. Don’t be late! The minute the session begins, if you have not badged in, attendees waiting in line at the door might receive your seat.
  6. Waitlisting will not be supported onsite and will be turned off 7-14 days before the beginning of the conference.

October 17: How to Help Achieve Mobile App Transport Security (ATS) Compliance by Using Amazon CloudFront and AWS Certificate Manager
Web and application users and organizations have expressed a growing desire to conduct most of their HTTP communication securely by using HTTPS. At its 2016 Worldwide Developers Conference, Apple announced that starting in January 2017, apps submitted to its App Store will be required to support App Transport Security (ATS). ATS requires all connections to web services to use HTTPS and TLS version 1.2. In addition, Google has announced that starting in January 2017, new versions of its Chrome web browser will mark HTTP websites as being “not secure.” In this post, I show how you can generate Secure Sockets Layer (SSL) or Transport Layer Security (TLS) certificates by using AWS Certificate Manager (ACM), apply the certificates to your Amazon CloudFront distributions, and deliver your websites and APIs over HTTPS.

October 5: Meet AWS Security Team Members at Grace Hopper 2016
For those of you joining this year’s Grace Hopper Celebration of Women in Computing in Houston, you may already know the conference will have a number of security-specific sessions. A group of women from AWS Security will be at the conference, and we would love to meet you to talk about your cloud security and compliance questions. Are you a student, an IT security veteran, or an experienced techie looking to move into security? Make sure to find us to talk about career opportunities.

September

September 29: How to Create a Custom AMI with Encrypted Amazon EBS Snapshots and Share It with Other Accounts and Regions
An Amazon Machine Image (AMI) provides the information required to launch an instance (a virtual server) in your AWS environment. You can launch an instance from a public AMI, customize the instance to meet your security and business needs, and save configurations as a custom AMI. With the recent release of the ability to copy encrypted Amazon Elastic Block Store (Amazon EBS) snapshots between accounts, you now can create AMIs with encrypted snapshots by using AWS Key Management Service (KMS) and make your AMIs available to users across accounts and regions. This allows you to create your AMIs with required hardening and configurations, launch consistent instances globally based on the custom AMI, and increase performance and availability by distributing your workload while meeting your security and compliance requirements to protect your data.

September 19: 32 Security and Compliance Sessions Now Live in the re:Invent 2016 Session Catalog
AWS re:Invent 2016 begins November 28, and the live session catalog now includes 32 security and compliance sessions. 19 of these sessions are in the Security & Compliance track and 13 are in the re:Source Mini Con for Security Services. All 32 session titles and abstracts are included below.

September 8: Automated Reasoning and Amazon s2n
In June 2015, AWS Chief Information Security Officer Stephen Schmidt introduced AWS’s new Open Source implementation of the SSL/TLS network encryption protocols, Amazon s2n. s2n is a library that has been designed to be small and fast, with the goal of providing you with network encryption that is more easily understood and fully auditable. In the 14 months since that announcement, development on s2n has continued, and we have merged more than 100 pull requests from 15 contributors on GitHub. Those active contributors include members of the Amazon S3, Amazon CloudFront, Elastic Load Balancing, AWS Cryptography Engineering, Kernel and OS, and Automated Reasoning teams, as well as 8 external, non-Amazon Open Source contributors.

September 6: IAM Service Last Accessed Data Now Available for the Asia Pacific (Mumbai) Region
In December, AWS Identity and Access Management (IAM) released service last accessed data, which helps you identify overly permissive policies attached to an IAM entity (a user, group, or role). Today, we have extended service last accessed data to support the recently launched Asia Pacific (Mumbai) Region. With this release, you can now view the date when an IAM entity last accessed an AWS service in this region. You can use this information to identify unnecessary permissions and update policies to remove access to unused services.

If you have questions about or issues with implementing the solutions in any of these posts, please start a new thread on the AWS IAM forum.

– Craig

AWS Week in Review – October 17, 2016

Post Syndicated from Jeff Barr original https://aws.amazon.com/blogs/aws/aws-week-in-review-october-17-2016/

Wow, a lot is happening in AWS-land these days! Today’s post included submissions from several dozen internal and external contributors, along with material from my RSS feeds, my inbox, and other things that come my way. To join in the fun, create (or find) some awesome AWS-related content and submit a pull request!

Monday, October 17

Tuesday, October 18

Wednesday, October 19

Thursday, October 20

Friday, October 21

Saturday, October 22

Sunday, October 23

New & Notable Open Source

New SlideShare Presentations

Upcoming Events

New AWS Marketplace Listings

  • Application Development
    • Joomia 3.6.0 + Apache + MySQL + AMAZONLINUX AMI by MIRI Infotech Inc, sold by Miri Infotech.
    • LAMP 5 MariaDB and LAMP 7 MariaDB, sold by Jetware.
    • Secured Acquia Drupal on Windows 2008 R2, sold by Cognosys Inc.
    • Secured BugNet on Windows 2008 R2, sold by Cognosys Inc.
    • Secured CMS Gallery on Windows 2008 R2, sold by Cognosys Inc.
    • Secured Kooboo CMS on Windows 2008 R2, sold by Cognosys Inc.
    • Secured Lemoon on Windows 2008 R2, sold by Cognosys Inc.
    • Secured Magento on Windows 2008 R2, sold by Cognosys Inc.
    • Secured MyCV on Windows 2012 R2, sold by Cognosys Inc.
    • Secured nService on Windows 2012 R2, sold by Cognosys Inc.
    • Secured Orchard CMS on Windows 2008 R2, sold by Cognosys Inc.
  • Application Servers
    • Microsoft Dynamics NAV 2016 for Business, sold by Data Resolution.
    • Microsoft Dynamics GP 2015 for Business, sold by Data Resolution.
    • Microsoft Dynamics AX 2012 for Business, sold by Data Resolution.
    • Microsoft Dynamics SL 2015 for Business, sold by Data Resolution.
    • Redis 3.0, sold by Jetware.
  • Application Stacks
    • LAMP 5 Percona and LAMP 7 Percona, sold by Jetware.
    • MySQL 5.1, MySQL 5.6, and MySQL 5.7, sold by Jetware.
    • Percona Server for MySQL 5.7, sold by Jetware.
    • Perfect7 LAMP v1.1 Multi-PHP w/Security (HVM), sold by Archisoft.
    • Perfect7 LAMP v1.1 Multi-PHP Base (HVM), sold by Archisoft.
  • Content Management
    • DNN Platform 9 Sandbox – SQL 2016, IIS 8.5, .Net 4.6, W2K12R2, sold by Benjamin Hermann.
    • iBase 7, sold by iBase.
    • MediaWiki powered by Symetricore (Plus Edition), sold by Symetricore.
    • Secured CompositeC1 on Windows 2008 R2, sold by Cognosys Inc.
    • Secured Dot Net CMS on Windows 2008 R2, sold by Cognosys Inc.
    • Secured Gallery Server on Windows 2008 R2, sold by Cognosys Inc.
    • Secured Joomia on Windows 2008 R2, sold by Cognosys Inc.
    • Secured Mayando on Windows 2008 R2, sold by Cognosys Inc.
    • Secured phpBB on Windows 2008 R2, sold by Cognosys Inc.
    • Secured Wiki Asp.net on Windows 2008 R2, sold by Cognosys Inc.
    • SharePoint 2016 Enterprise bYOL with paid support, sold by Data Resolution.
    • WordPress Powered by AMIMOTO (Auto-Scaling ready), sold by DigitalCube Co. Ltd.
  • Databases
    • MariaDB 5.5, 10.0, and 10.1, sold by Jetware.
    • Redis 3.2, sold by Jetware.
  • eCommerce
    • Secured AspxCommerce on Windows 2008 R2, sold by Cognosys Inc.
    • Secured BeYourMarket on Windows 2008 R2, sold by Cognosys Inc.
    • Secured DashComerce on Windows 2008 R2, sold by Cognosys Inc.
    • Vikrio, sold by Vikrio.
  • Issue & Bug Tracking
    • Redmine 2.6 and Redmine 3.3, sold by Jetware.
  • Monitoring
    • Memcached 1.4, sold by Jetware.
  • Network Infrastructure
    • 500 Mbps Load Balancer with Commercial WAF Subscription, sold by KEMP Technologies.
  • Operating System
    • Ubuntu Desktop 16.04 LTS (HVM), sold by Netspectrum Inc.
  • Security
    • AlienVault USM (Unified Security Management) Anywhere, sold by AlienVault.
    • Armor Anywhere CORE, sold by Armor Defense.
    • Hillstone CloudEdge Virtual-Firewall Advanced Edition, sold by Hillstone Networks.
    • Negative SEO Monitoring, sold by SEO Defend.

Help Wanted

Stay tuned for next week! In the meantime, follow me on Twitter and subscribe to the RSS feed.

Now Open – AWS US East (Ohio) Region

Post Syndicated from Jeff Barr original https://aws.amazon.com/blogs/aws/now-open-aws-us-east-ohio-region/

As part of our ongoing plan to expand the AWS footprint, I am happy to announce that our new US East (Ohio) Region is now available. In conjunction with the existing US East (Northern Virginia) Region, AWS customers in the Eastern part of the United States have fast, low-latency access to the suite of AWS infrastructure services.

The Details
The new Ohio Region supports Amazon Elastic Compute Cloud (EC2) and related services including Amazon Elastic Block Store (EBS), Amazon Virtual Private Cloud, Auto Scaling, Elastic Load Balancing, NAT Gateway, Spot Instances, and Dedicated Hosts.

It also supports (deep breath) Amazon API Gateway, Amazon Aurora, AWS Certificate Manager (ACM), AWS CloudFormation, Amazon CloudFront, AWS CloudHSM, Amazon CloudWatch (including CloudWatch Events and CloudWatch Logs), AWS CloudTrail, AWS CodeCommit, AWS CodeDeploy, AWS CodePipeline, AWS Config, AWS Database Migration Service, AWS Direct Connect, Amazon DynamoDB, EC2 Container Registry, Amazon ECS, Amazon Elastic File System, Amazon ElastiCache, AWS Elastic Beanstalk, Amazon EMR, Amazon Elasticsearch Service, Amazon Glacier, AWS Identity and Access Management (IAM), AWS Import/Export Snowball, AWS Key Management Service (KMS), Amazon Kinesis, AWS Lambda, AWS Marketplace, Mobile Hub, AWS OpsWorks, Amazon Relational Database Service (RDS), Amazon Redshift, Amazon Route 53, Amazon Simple Storage Service (S3), AWS Service Catalog, Amazon Simple Notification Service (SNS), Amazon Simple Queue Service (SQS), AWS Storage Gateway, Amazon Simple Workflow Service (SWF), AWS Trusted Advisor, VM Import/Export, and AWS WAF.

The Region supports all sizes of C4, D2, I2, M4, R3, T2, and X1 instances. As is the case with all of our newer Regions, instances must be launched within a Virtual Private Cloud (read Virtual Private Clouds for Everyone to learn more).

Well Connected
Here are some round-trip network metrics that you may find interesting (all names are airport codes, as is apparently customary in the networking world; all times are +/- 2 ms):

  • 12 ms to IAD (home of the US East (Northern Virginia) Region).
  • 20 ms to JFK (home to an Internet exchange point).
  • 29 ms to ORD (home to a pair of Direct Connect locations hosted by QTS and Equinix and another exchange point).
  • 91 ms to SFO (home of the US West (Northern California) Region).

With just 12 ms of round-trip latency between US East (Ohio) and US East (Northern Virginia), you can make good use of unique AWS features such as S3 Cross-Region Replication, Cross-Region Read Replicas for Amazon Aurora, Cross-Region Read Replicas for MySQL, and Cross-Region Read Replicas for PostgreSQL. Data transfer between the two Regions is priced at the Inter-AZ price ($0.01 per GB), making your cross-region use cases even more economical.
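
If you want to try this out, Cross-Region Replication can be configured with a couple of S3 API calls. Here is a minimal boto3 sketch; the bucket names and the IAM replication role are placeholders I chose for illustration, and both buckets must have versioning enabled:

```python
import boto3

s3 = boto3.client('s3', region_name='us-east-2')

# Replication requires versioning on both the source and the
# destination bucket.
s3.put_bucket_versioning(
    Bucket='my-ohio-bucket',
    VersioningConfiguration={'Status': 'Enabled'},
)

# Replicate every new object to a bucket in US East (Northern Virginia).
s3.put_bucket_replication(
    Bucket='my-ohio-bucket',
    ReplicationConfiguration={
        'Role': 'arn:aws:iam::123456789012:role/my-replication-role',
        'Rules': [{
            'Prefix': '',
            'Status': 'Enabled',
            'Destination': {'Bucket': 'arn:aws:s3:::my-virginia-bucket'},
        }],
    },
)
```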

Also on the networking front, we have agreed to work together with Ohio State University to provide AWS Direct Connect access to OARnet. This 100-gigabit network connects colleges, schools, medical research hospitals, and state government across Ohio. This connection provides local teachers, students, and researchers with a dedicated, high-speed network connection to AWS.

14 Regions, 38 Availability Zones, and Counting
Today’s launch of this 3-AZ Region expands our global footprint to a grand total of 14 Regions and 38 Availability Zones. We are also getting ready to open up a second AWS Region in China, along with other new AWS Regions in Canada, France, and the UK.

Since there’s been some industry-wide confusion of late about the difference between Regions and Availability Zones, I think it is important to understand these two terms. Each Region is a physical location where we have one or more Availability Zones (AZs). Each Availability Zone, in turn, consists of one or more data centers, each with redundant power, networking, and connectivity, all housed in separate facilities. Having two or more AZs in each Region gives you the ability to run applications that are more highly available, fault tolerant, and durable than would be the case if you were limited to a single AZ.

Around the office, we sometimes play with analogies that can serve to explain the difference between the two terms. My favorites are “Hotels vs. hotel rooms” and “Apple trees vs. apples.” So, pick your analogy, but be sure that you know what it means!


Jeff;

 

How to Help Achieve Mobile App Transport Security (ATS) Compliance by Using Amazon CloudFront and AWS Certificate Manager

Post Syndicated from Lee Atkinson original https://aws.amazon.com/blogs/security/how-to-help-achieve-mobile-app-transport-security-compliance-by-using-amazon-cloudfront-and-aws-certificate-manager/

Web and application users and organizations have expressed a growing desire to conduct most of their HTTP communication securely by using HTTPS. At its 2016 Worldwide Developers Conference, Apple announced that starting in January 2017, apps submitted to its App Store will be required to support App Transport Security (ATS). ATS requires all connections to web services to use HTTPS and TLS version 1.2. In addition, Google has announced that starting in January 2017, new versions of its Chrome web browser will mark HTTP websites as being “not secure.”

In this post, I show how you can generate Secure Sockets Layer (SSL) or Transport Layer Security (TLS) certificates by using AWS Certificate Manager (ACM), apply the certificates to your Amazon CloudFront distributions, and deliver your websites and APIs over HTTPS.

Background

Hypertext Transfer Protocol (HTTP) was proposed originally without the need for security measures such as server authentication and transport encryption. As HTTP evolved from covering simple document retrieval to sophisticated web applications and APIs, security concerns emerged. For example, if someone were able to spoof a website’s DNS name (perhaps by altering the DNS resolver’s configuration), they could direct users to another web server. Users would be unaware of this because the URL displayed by the browser would appear just as the user expected. If someone were able to gain access to network traffic between a client and server, that individual could eavesdrop on HTTP communication and either read or modify the content, without the client or server being aware of such malicious activities.

Hypertext Transfer Protocol Secure (HTTPS) was introduced as a secure version of HTTP. It uses either SSL or TLS protocols to create a secure channel through which HTTP communication can be transported. Using SSL/TLS, servers can be authenticated by using digital certificates. These certificates can be digitally signed by one of the certificate authorities (CAs) trusted by the web client. Certificates can mitigate website spoofing, and a CA can later revoke a certificate if it is compromised, providing additional security. Revoked certificates are published by the authority on a certificate revocation list, or their status is made available via an Online Certificate Status Protocol (OCSP) responder. The SSL/TLS “handshake” that initiates the secure channel exchanges encryption keys in order to encrypt the data sent over it.

To avoid warnings from client applications regarding untrusted certificates, a CA that is trusted by the application must sign the certificates. The process of obtaining a certificate from a CA begins with generating a key pair and a certificate signing request. The certificate authority uses various methods to verify that the certificate requester owns the domain for which the certificate is requested. Many authorities charge for verification and generation of the certificate.

Use ACM and CloudFront to deliver HTTPS websites and APIs

The process of requesting and paying for certificates, storing and transporting them securely, and repeating the process at renewal time can be a burden for website owners. ACM enables you to easily provision, manage, and deploy SSL/TLS certificates for use with AWS services, including CloudFront. ACM removes the time-consuming manual process of purchasing, uploading, and renewing certificates. With ACM, you can quickly request a certificate, deploy it on your CloudFront distributions, and let ACM handle certificate renewals. In addition to requesting SSL/TLS certificates provided by ACM, you can import certificates that you obtained outside of AWS.

CloudFront is a global content delivery network (CDN) service that accelerates the delivery of your websites, APIs, video content, and other web assets. CloudFront’s proportion of traffic delivered via HTTPS continues to increase as more customers use the secure protocol to deliver their websites and APIs.

CloudFront supports Apple’s ATS requirements for TLS 1.2, Perfect Forward Secrecy, server certificates with 2048-bit Rivest-Shamir-Adleman (RSA) keys, and a choice of ciphers. See more details in Supported Protocols and Ciphers.

The following diagram illustrates an architecture with ACM, a CloudFront distribution and its origins, and how they integrate to provide HTTPS access to end users and applications.

Solution architecture diagram

  1. ACM automates the creation and renewal of SSL/TLS certificates and deploys them to AWS resources such as CloudFront distributions and Elastic Load Balancing load balancers at your instruction.
  2. Users communicate with CloudFront over HTTPS. CloudFront terminates the SSL/TLS connection at the edge location.
  3. You can configure CloudFront to communicate to the origin over HTTP or HTTPS.

CloudFront enables easy HTTPS adoption. It provides a default *.cloudfront.net wildcard certificate and supports custom certificates, which can be created by a third-party CA or created and managed by ACM. ACM automates the process of generating and associating certificates with your CloudFront distribution for the first time and on each renewal. CloudFront supports the Server Name Indication (SNI) TLS extension (enabling efficient use of IP addresses when hosting multiple HTTPS websites) and dedicated-IP SSL/TLS (for older browsers and legacy clients that do not support SNI).

Keeping that background information in mind, I will now show you how you can generate a certificate with ACM and associate it with your CloudFront distribution.

Generate a certificate with ACM and associate it with your CloudFront distribution

In order to help deliver websites and APIs that are compliant with Apple’s ATS requirements, you can generate a certificate in ACM and associate it with your CloudFront distribution.

To generate a certificate with ACM and associate it with your CloudFront distribution:

  1. Go to the ACM console and click Get started.
    ACM "Get started" page
  2. On the next page, type the website’s domain name for your certificate. If applicable, you can enter multiple domains here so that the same certificate can be used for multiple websites. In my case, I type *.leeatk.com to create what is known as a wildcard certificate that can be used for any domain ending in .leeatk.com (that is a domain I own). Click Review and request.
    Request a certificate page
  3. Click Confirm and request. You must now validate that you own the domain. ACM sends an email with a verification link to the domain registrant, technical contact, and administrative contact registered in the Whois record for the domain. ACM also sends the verification link to email addresses commonly associated with an administrator of a domain: administrator, hostmaster, postmaster, and webmaster. ACM sends the same verification email to all these addresses in the expectation that at least one address is monitored by the domain owner. The link in any of the emails can be used to verify the domain.
    List of email addresses to which the email with verification link will be sent
  4. Until the certificate has been validated, the status of the certificate remains Pending validation. When I went through this approval process for *.leeatk.com, I received the verification email shown in the following screenshot. When you receive the verification email, click the link in the email to approve the request.
    Example verification email
  5. After you click I Approve on the landing page, you will then see a page that confirms that you have approved an SSL/TLS certificate for your domain name.
    SSL/TLS certificate confirmation page
  6. Return to the ACM console, and the certificate’s status should become Issued. You may need to refresh the webpage.
    ACM console showing the certificate has been issued
  7. Now that you have created your certificate, go to the CloudFront console and select the distribution with which you want to associate the certificate.
    Screenshot of associating the CloudFront distribution with which to associate the certificate
  8. Click Edit. Scroll down to SSL Certificate and select Custom SSL certificate. From the drop-down list, select the certificate provided by ACM. Select Only Clients that Support Server Name Indication (SNI). You could select All Clients if you want to support older clients that do not support SNI.
    Screenshot of choosing a custom SSL certificate
  9. Save the configuration by clicking Yes, Edit at the bottom of the page.
  10. Now, when you view the website in a browser (Firefox is shown in the following screenshot), you see a green padlock in the address bar, confirming that this page is secured with a certificate trusted by the browser.
    Screenshot showing green padlock in address bar
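
If you prefer to script the request instead of using the console, you can make the same request through the ACM API. The following boto3 sketch assumes you own example.com and that you run it in the US East (N. Virginia) Region, which is where CloudFront looks for ACM certificates:

```python
import boto3

# CloudFront uses ACM certificates from the US East (N. Virginia) Region.
acm = boto3.client('acm', region_name='us-east-1')

# Request a wildcard certificate; ACM emails the validation link to the
# domain contacts, exactly as in the console flow above.
response = acm.request_certificate(
    DomainName='*.example.com',
    SubjectAlternativeNames=['example.com'],
)
print(response['CertificateArn'])

# After someone clicks I Approve in the email, the status becomes ISSUED.
cert = acm.describe_certificate(CertificateArn=response['CertificateArn'])
print(cert['Certificate']['Status'])
```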

Configure CloudFront to redirect HTTP requests to HTTPS

We encourage you to use HTTPS to help make your websites and APIs more secure. Therefore, we recommend that you configure CloudFront to redirect HTTP requests to HTTPS.

To configure CloudFront to redirect HTTP requests to HTTPS:

  1. Go to the CloudFront console, select the distribution again, and then click Cache Behavior.
    Screenshot showing Cache Behavior button
  2. In my case, I have only one behavior in my distribution, so I select it and click Edit. (If I had more behaviors, I would repeat this process for each behavior that should redirect HTTP to HTTPS.)
  3. Next to Viewer Protocol Policy, choose Redirect HTTP to HTTPS, and click Yes, Edit at the bottom of the page.
    Screenshot of choosing Redirect HTTP to HTTPS

I could also consider employing an HTTP Strict Transport Security (HSTS) policy on my website. In this case, I would add a Strict-Transport-Security response header at my origin to instruct browsers and other applications to make only HTTPS requests to my website for a period of time specified in the header’s value. This ensures that if a user submits a URL to my website specifying only HTTP, the browser will make an HTTPS request anyway. This is also useful for websites that link to my website using HTTP URLs.
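
As an illustrative sketch of that origin-side change (my own example, not part of the CloudFront setup above), here is a minimal Python origin server that adds the HSTS header to every response; the one-year max-age and the includeSubDomains directive are choices I made for illustration:

```python
from http.server import HTTPServer, SimpleHTTPRequestHandler

class HSTSHandler(SimpleHTTPRequestHandler):
    def end_headers(self):
        # Tell clients to use HTTPS only, for one year, across subdomains.
        self.send_header('Strict-Transport-Security',
                         'max-age=31536000; includeSubDomains')
        super().end_headers()

if __name__ == '__main__':
    HTTPServer(('', 8000), HSTSHandler).serve_forever()
```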

Summary

CloudFront and ACM enable more secure communication between your users and your websites. CloudFront allows you to adopt HTTPS for your websites and APIs. ACM provides a simple way to request, manage, and renew your SSL/TLS certificates, and to deploy them to AWS services such as CloudFront. Mobile application developers and API providers can now use CloudFront to more easily meet Apple’s ATS requirements, in time for the January 2017 deadline.

If you have comments about this post, submit them in the “Comments” section below. If you have implementation questions, please start a new thread on the CloudFront forum.

– Lee

Powerful AWS Platform Features, Now for Containers

Post Syndicated from Jeff Barr original https://aws.amazon.com/blogs/aws/powerful-aws-platform-features-now-for-containers/

Containers are great but they come with their own management challenges. Our customers have been using containers on AWS for quite some time to run workloads ranging from microservices to batch jobs. They told us that managing a cluster, including the state of the EC2 instances and containers, can be tricky, especially as the environment grows. They also told us that integrating the capabilities you get with the AWS platform, such as load balancing, scaling, security, monitoring, and more, with containers is a key requirement. Amazon ECS was designed to meet all of these needs and more.

We created Amazon ECS to make it easy for customers to run containerized applications in production. There is no container management software to install and operate because it is all provided to you as a service. You just add the EC2 capacity you need to your cluster and upload your container images. Amazon ECS takes care of the rest, deploying your containers across a cluster of EC2 instances and monitoring their health. Customers such as Expedia and Remind have built Amazon ECS into their development workflow, creating PaaS platforms on top of it. Others, such as Prezi and Shippable, are leveraging ECS to eliminate the operational complexity of running containers, allowing them to spend more time delivering features for their apps.

AWS has highly reliable, scalable, fully managed services for load balancing, auto scaling, identity and access management, logging, and monitoring. Over the past year, we have continued to natively integrate the capabilities of the AWS platform with your containers through ECS, giving you the same capabilities you are used to on EC2 instances.

Amazon ECS recently delivered container support for application load balancing (Today), IAM roles (July), and Auto Scaling (May). We look forward to bringing more of the AWS platform to containers over time.

Let’s take a look at the new capabilities!

Application Load Balancing
Load balancing and service discovery are essential parts of any microservices architecture. Because Amazon ECS uses Elastic Load Balancing, you don’t need to manage and scale your own load balancing layer. You also get direct access to other AWS services that support ELB such as AWS Certificate Manager (ACM) to automatically manage your service’s certificates and Amazon API Gateway to authenticate callers, among other features.

Today, I am happy to announce that ECS supports the new application load balancer, a high-performance load balancing option that operates at the application layer and allows you to define content-based routing rules. The application load balancer includes two features that simplify running microservices on ECS: dynamic ports and the ability for multiple services to share a single load balancer.

The dynamic ports feature makes it easier to start tasks in your cluster without having to worry about port conflicts. Previously, to use Elastic Load Balancing to route traffic to your applications, you had to define a fixed host port in the ECS task. This added operational complexity, as you had to track the ports each application used, and it reduced cluster efficiency, as only one task could be placed per instance. Now, you can specify a dynamic port in the ECS task definition, which gives the container an unused port when it is scheduled on the EC2 instance. The ECS scheduler automatically adds the task to the application load balancer’s target group using this port. To get started, you can create an application load balancer from the EC2 Console or by using the AWS Command Line Interface (CLI). Then create a task definition in the ECS console with a container that sets the host port to 0. This container automatically receives a port in the ephemeral port range when it is scheduled.
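
Here is a minimal boto3 sketch of such a task definition; the family name, image, and memory size are placeholder values for illustration:

```python
import boto3

ecs = boto3.client('ecs')

# hostPort 0 tells ECS to pick an unused port from the instance's
# ephemeral range when the task is placed; the scheduler registers
# that port with the load balancer's target group.
ecs.register_task_definition(
    family='web',
    containerDefinitions=[{
        'name': 'web',
        'image': 'nginx:latest',
        'memory': 256,
        'portMappings': [{'containerPort': 80, 'hostPort': 0}],
    }],
)
```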

Previously, there was a one-to-one mapping between ECS services and load balancers. Now, a load balancer can be shared with multiple services, using path-based routing. Each service can define its own URI, which can be used to route traffic to that service. In addition, you can create an environment variable with the service’s DNS name, supporting basic service discovery. For example, a stock service could be http://example.com/stock and a weather service could be http://example.com/weather, both served from the same load balancer. A news portal could then use the load balancer to access both the stock and weather services.
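
Each service sits behind its own target group on the shared load balancer. As a sketch of wiring one such service to its target group (the ARNs, names, and counts below are placeholders for your own values):

```python
import boto3

ecs = boto3.client('ecs')

# The 'stock' service registers its tasks into its own target group;
# a path-based rule on the shared load balancer forwards /stock
# requests to that group.
ecs.create_service(
    cluster='default',
    serviceName='stock',
    taskDefinition='stock-task',
    desiredCount=2,
    role='arn:aws:iam::123456789012:role/ecsServiceRole',
    loadBalancers=[{
        'targetGroupArn': 'arn:aws:elasticloadbalancing:us-east-1:'
                          '123456789012:targetgroup/stock/0123456789abcdef',
        'containerName': 'stock',
        'containerPort': 8080,
    }],
)
```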

IAM Roles for ECS Tasks
In Amazon ECS, you have always been able to use IAM roles for your Amazon EC2 container instances to simplify the process of making API requests from your containers. This also allows you to follow AWS best practices by not storing your AWS credentials in your code or configuration files, as well as providing benefits such as automatic key rotation.

With the introduction of the recently launched IAM roles for ECS tasks, you can secure your infrastructure by assigning an IAM role directly to the ECS task rather than to the EC2 container instance. This way, you can have one task that uses a specific IAM role for access to, let’s say, S3 and another task that uses an IAM role to access a DynamoDB table, both running on the same EC2 instance.
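
As a sketch, the role is attached when you register the task definition; the role ARN, family, and image below are placeholders:

```python
import boto3

ecs = boto3.client('ecs')

# This task carries its own role with (say) read-only S3 access; another
# task on the same instance could carry a different role entirely.
ecs.register_task_definition(
    family='s3-reader',
    taskRoleArn='arn:aws:iam::123456789012:role/s3ReadOnlyTaskRole',
    containerDefinitions=[{
        'name': 's3-reader',
        'image': 'my-app:latest',
        'memory': 128,
    }],
)
```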

Service Auto Scaling
The third feature I want to highlight is Service Auto Scaling. With Service Auto Scaling and Amazon CloudWatch alarms, you can define scaling policies to scale your ECS services in the same way that you scale your EC2 instances up and down. With Service Auto Scaling, you can achieve high availability by scaling up when demand is high, and optimize costs by scaling down your service and the cluster when demand is lower, all automatically and in real time.

You simply choose the desired, minimum, and maximum number of tasks, create one or more scaling policies, and Service Auto Scaling handles the rest. The service scheduler is also Availability Zone–aware, so you don’t have to worry about distributing your ECS tasks across multiple zones.
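
Behind the scenes, this uses the Application Auto Scaling API. Here is a minimal sketch of registering a service and attaching a step scaling policy; the names, limits, and service role are placeholders:

```python
import boto3

autoscaling = boto3.client('application-autoscaling')

# Let the 'web' service in the 'default' cluster run 2 to 10 tasks.
autoscaling.register_scalable_target(
    ServiceNamespace='ecs',
    ResourceId='service/default/web',
    ScalableDimension='ecs:service:DesiredCount',
    MinCapacity=2,
    MaxCapacity=10,
    RoleARN='arn:aws:iam::123456789012:role/ecsAutoscaleRole',
)

# Add one task each time the associated CloudWatch alarm fires.
autoscaling.put_scaling_policy(
    PolicyName='scale-up',
    ServiceNamespace='ecs',
    ResourceId='service/default/web',
    ScalableDimension='ecs:service:DesiredCount',
    PolicyType='StepScaling',
    StepScalingPolicyConfiguration={
        'AdjustmentType': 'ChangeInCapacity',
        'Cooldown': 60,
        'StepAdjustments': [{'MetricIntervalLowerBound': 0.0,
                             'ScalingAdjustment': 1}],
    },
)
```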

Available Now
These features are available now and you can start using them today!


Jeff;

New – AWS Application Load Balancer

Post Syndicated from Jeff Barr original https://aws.amazon.com/blogs/aws/new-aws-application-load-balancer/

We launched Elastic Load Balancing (ELB) for AWS in the spring of 2009 (see New Features for Amazon EC2: Elastic Load Balancing, Auto Scaling, and Amazon CloudWatch to see just how far AWS has come since then). Elastic Load Balancing has become a key architectural component for many AWS-powered applications. In conjunction with Auto Scaling, Elastic Load Balancing greatly simplifies the task of building applications that scale up and down while maintaining high availability.

On the Level
Per the well-known OSI model, load balancers generally run at Layer 4 (network) or Layer 7 (application).

A Layer 4 load balancer works at the network protocol level and does not look inside of the actual network packets, remaining unaware of the specifics of HTTP and HTTPS. In other words, it balances the load without necessarily knowing a whole lot about it.

A Layer 7 load balancer is more sophisticated and more powerful. It inspects packets, has access to HTTP and HTTPS headers, and (armed with more information) can do a more intelligent job of spreading the load out to the target.

Application Load Balancing for AWS
Today we are launching a new Application Load Balancer option for ELB. This option runs at Layer 7 and supports a number of advanced features. The original option (now called a Classic Load Balancer) is still available to you and continues to offer Layer 4 and Layer 7 functionality.

Application Load Balancers support content-based routing and applications that run in containers. They support a pair of industry-standard protocols (WebSocket and HTTP/2) and also provide additional visibility into the health of the target instances and containers. Web sites and mobile apps, running in containers or on EC2 instances, will benefit from the use of Application Load Balancers.

Let’s take a closer look at each of these features and then create a new Application Load Balancer of our very own!

Content-Based Routing
An Application Load Balancer has access to HTTP headers and allows you to route requests to different backend services accordingly. For example, you might want to send requests that include /api in the URL path to one group of servers (we call these target groups) and requests that include /mobile to another. Routing requests in this fashion allows you to build applications that are composed of multiple microservices that can run and be scaled independently.

As you will see in a moment, each Application Load Balancer allows you to define up to 10 URL-based rules to route requests to target groups. Over time, we plan to give you access to other routing methods.

Support for Container-Based Applications
Many AWS customers are packaging up their microservices into containers and hosting them on Amazon EC2 Container Service. This allows a single EC2 instance to run one or more services, but can present some interesting challenges for traditional load balancing with respect to port mapping and health checks.

The Application Load Balancer understands and supports container-based applications. It allows one instance to host several containers that listen on multiple ports behind the same target group, and it performs fine-grained, port-level health checks.

Better Metrics
Application Load Balancers can perform and report on health checks on a per-port basis. The health checks can specify a range of acceptable HTTP responses, and are accompanied by detailed error codes.
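
As a sketch of these options through the API, here is a target group whose health check accepts any 2xx response; the VPC ID, group name, and health check path are placeholders:

```python
import boto3

elbv2 = boto3.client('elbv2')

# Health checks hit /health on each registered target's own port;
# any 2xx response code counts as healthy.
elbv2.create_target_group(
    Name='api-targets',
    Protocol='HTTP',
    Port=80,
    VpcId='vpc-0123456789abcdef0',
    HealthCheckPath='/health',
    HealthCheckIntervalSeconds=15,
    Matcher={'HttpCode': '200-299'},
)
```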

As a byproduct of the content-based routing, you also have the opportunity to collect metrics on each of your microservices. A really nice side effect is that each microservice can run in its own target group, on a specific set of EC2 instances. This increased visibility will allow you to do a better job of scaling individual services up and down in response to their load.

The Application Load Balancer provides several new CloudWatch metrics including overall traffic (in GB), number of active connections, and the connection rate per hour.

Support for Additional Protocols & Workloads
The Application Load Balancer supports two additional protocols: WebSocket and HTTP/2 (which grew out of Google’s SPDY protocol).

WebSocket allows you to set up long-standing TCP connections between your client and your server. This is a more efficient alternative to the old-school method which involved HTTP connections that were held open with a “heartbeat” for very long periods of time. WebSocket is great for mobile devices and can be used to deliver stock quotes, sports scores, and other dynamic data while minimizing power consumption. ALB provides native support for WebSocket via the ws:// and wss:// protocols.

HTTP/2 is a significant enhancement of the original HTTP 1.1 protocol. The newer protocol supports multiplexed requests across a single connection. This reduces network traffic, as does the binary nature of the protocol.

The Application Load Balancer is designed to handle streaming, real-time, and WebSocket workloads in an optimized fashion. Instead of buffering requests and responses, it handles them in streaming fashion. This reduces latency and increases the perceived performance of your application.

Creating an ALB
Let’s create an Application Load Balancer and get it all set up to process some traffic!

The Elastic Load Balancing Console lets me create either type of load balancer:

I click on Application load balancer, enter a name (MyALB), and choose internet-facing. Then I add an HTTPS listener:

On the same screen, I choose my VPC (this is a VPC-only feature) and one subnet in each desired Availability Zone, tag my Application Load Balancer, and proceed to Configure Security Settings:

Because I created an HTTPS listener, my Application Load Balancer needs a certificate. I can choose an existing certificate that’s already in IAM or AWS Certificate Manager (ACM), upload a local certificate, or request a new one:

Moving right along, I set up my security group. In this case I decided to create a new one. I could have used one of my existing VPC or EC2 security groups just as easily:

The next step is to create my first target group (main) and to set up its health checks (I’ll take the defaults):

Now I am ready to choose the targets—the set of EC2 instances that will receive traffic through my Application Load Balancer. Here, I chose the targets that are listening on port 80:

The final step is to review my choices and to Create my ALB:

After I click on Create, the Application Load Balancer is provisioned and becomes active within a minute or so:

I can create additional target groups:

And then I can add a new rule that routes /api requests to that target:
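
The same rule can also be created through the Elastic Load Balancing API. Here is a hedged boto3 sketch, with placeholder listener and target group ARNs:

```python
import boto3

elbv2 = boto3.client('elbv2')

# Forward requests whose path begins with /api to the api target group.
elbv2.create_rule(
    ListenerArn='arn:aws:elasticloadbalancing:us-east-1:123456789012:'
                'listener/app/MyALB/0123456789abcdef/fedcba9876543210',
    Priority=10,
    Conditions=[{'Field': 'path-pattern', 'Values': ['/api*']}],
    Actions=[{
        'Type': 'forward',
        'TargetGroupArn': 'arn:aws:elasticloadbalancing:us-east-1:'
                          '123456789012:targetgroup/api/0123456789abcdef',
    }],
)
```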

Application Load Balancers work with multiple AWS services including Auto Scaling, Amazon ECS, AWS CloudFormation, AWS CodeDeploy, and AWS Certificate Manager (ACM).

Moving on Up
If you are currently using a Classic Load Balancer and would like to migrate to an Application Load Balancer, take a look at our new Load Balancer Copy Utility. This Python tool will help you to create an Application Load Balancer with the same configuration as an existing Classic Load Balancer. It can also register your existing EC2 instances with the new load balancer.

Availability & Pricing
The Application Load Balancer is available now in all commercial AWS regions and you can start using it today!

The hourly rate for the use of an Application Load Balancer is 10% lower than the cost of a Classic Load Balancer.

When you use an Application Load Balancer, you will be billed by the hour and for the use of Load Balancer Capacity Units, also known as LCUs. An LCU measures the number of new connections per second, the number of active connections, and data transfer. We measure on all three dimensions, but bill based on the highest one. One LCU is enough to support either:

  • 25 connections/second with a 2 KB certificate, 3,000 active connections, and 2.22 Mbps of data transfer or
  • 5 connections/second with a 4 KB certificate, 3,000 active connections, and 2.22 Mbps of data transfer.

Billing for LCU usage is fractional, and is charged at $0.008 per LCU per hour. Based on our calculations, we believe that virtually all of our customers can obtain a net reduction in their load balancer costs by switching from a Classic Load Balancer to an Application Load Balancer.
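
To make the math concrete, here is a small worked example using hypothetical traffic numbers and the dimension rates listed above:

```python
# Hypothetical hour of traffic for one Application Load Balancer.
new_conns_per_sec = 100   # 25 connections/second per LCU
active_conns = 4000       # 3,000 active connections per LCU
data_mbps = 10            # 2.22 Mbps per LCU

# You are billed on whichever dimension is highest.
lcus = max(new_conns_per_sec / 25,
           active_conns / 3000,
           data_mbps / 2.22)

print(f"{lcus:.2f} LCUs -> ${lcus * 0.008:.4f} per hour")
# Prints: 4.50 LCUs -> $0.0360 per hour
```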


Jeff;


The Top 20 AWS IAM Documentation Pages so Far This Year

Post Syndicated from Dave Bishop original https://blogs.aws.amazon.com/security/post/Tx2AN1TGT21WOFX/The-Top-20-AWS-IAM-Documentation-Pages-so-Far-This-Year

The following 20 pages have been the most viewed AWS Identity and Access Management (IAM) documentation pages so far this year. I have included a brief description with each link to give you a clearer idea of what each page covers. Use this list to see what other people have been viewing and perhaps to pique your own interest about a topic you’ve been meaning to research. 

  1. Creating an IAM User in Your AWS Account
    You can create one or more IAM users in your AWS account. You might create an IAM user when someone joins your organization, or when you have a new application that needs to make API calls to AWS.
  2. What Is IAM?
    IAM is a web service that helps you securely control access to AWS resources for your users. You use IAM to control who can use your AWS resources (authentication) and what resources they can use and in what ways (authorization).
  3. The IAM Console and the Sign-in Page
    This page provides information about the IAM-enabled AWS Management Console sign-in page and explains how to create a unique sign-in URL for your account.
  4. How Users Sign In to Your Account
    After you create IAM users and passwords for each, users can sign in to the AWS Management Console for your AWS account with a special URL.
  5. IAM Best Practices
    To help secure your AWS resources, follow these recommendations for IAM.
  6. IAM Policy Elements Reference
    This page describes the elements that you can use in an IAM policy. The elements are listed here in the general order you use them in a policy.
  7. Managing Access Keys for IAM Users
    Users need their own access keys to make programmatic calls to AWS from the AWS Command Line Interface (AWS CLI), Tools for Windows PowerShell, the AWS SDKs, or direct HTTP calls using the APIs for individual AWS services. To fill this need, you can create, modify, view, or rotate access keys (access key IDs and secret access keys) for IAM users.
  8. Working with Server Certificates
    Some AWS services can use server certificates that you manage with IAM or AWS Certificate Manager (ACM). In many cases, we recommend that you use ACM to provision, manage, and deploy your SSL/TLS certificates.
  9. Your AWS Account ID and Its Alias
    Learn how to find your AWS account ID number and its alias.
  10. Overview of IAM Policies
    This page provides an overview of IAM policies. A policy is a document that formally states one or more permissions.
  11. Example Policies for Administering AWS Resources
    This page shows some examples of policies that control access to resources in AWS services.
  12. Using Multi-Factor Authentication (MFA) in AWS
    For increased security, we recommend that you configure MFA to help protect your AWS resources. MFA adds extra security because it requires users to enter a unique authentication code from an approved authentication device or SMS text message when they access AWS websites or services.
  13. Using an IAM Role to Grant Permissions to Applications Running on Amazon EC2 Instances
    Use an IAM role to manage temporary credentials for applications that run on an EC2 instance. When you use a role, you do not have to distribute long-term credentials to an EC2 instance. Instead, the role supplies temporary permissions that applications can use when they make calls to other AWS resources.
  14. Managing Your Server Certificates
    After you obtain or create a server certificate, you upload it to IAM so that other AWS services can use it. You might also need to get certificate information, rename or delete a certificate, or perform other management tasks.
  15. IAM Roles
    An IAM role is similar to a user, in that it is an AWS identity with permission policies that determine what the identity can and cannot do in AWS. However, instead of being uniquely associated with one person, a role is intended to be assumable by anyone who needs it.
  16. Creating Your First IAM User and Administrators Group
    This procedure describes how to create an IAM group named Administrators, grant the group full permissions for all AWS services, and then create an administrative IAM user for yourself by adding the user to the Administrators group.
  17. Enabling a Virtual MFA Device
    A virtual MFA device uses a software application to generate a six-digit authentication code that is compatible with the time-based one-time password (TOTP) standard, as described in RFC 6238. The app can run on mobile hardware devices, including smartphones.
  18. Using Instance Profiles
    An instance profile is a container for an IAM role that you can use to pass role information to an EC2 instance when the instance starts.
  19. Temporary Security Credentials
    You can use the AWS Security Token Service (AWS STS) to create and provide trusted users with temporary security credentials that can control access to your AWS resources. Temporary security credentials work almost identically to the long-term access key credentials that your IAM users can use.
  20. Setting an Account Password Policy for IAM Users
    You can set a password policy on your AWS account to specify complexity requirements and mandatory rotation periods for your IAM users’ passwords (a scripted example follows this list).
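
As an example of how little work item 20 involves, here is a minimal boto3 sketch that sets an account password policy; the specific requirements shown are illustrative choices:

```python
import boto3

iam = boto3.client('iam')

# Require long, complex passwords and rotate them every 90 days.
iam.update_account_password_policy(
    MinimumPasswordLength=14,
    RequireUppercaseCharacters=True,
    RequireLowercaseCharacters=True,
    RequireNumbers=True,
    RequireSymbols=True,
    MaxPasswordAge=90,
    PasswordReusePrevention=5,
)
```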

In the "Comments" section below, let us know if you would like to see anything on these or other IAM documentation pages expanded or updated to make it more useful to you. 

– Dave

Now Open – AWS Asia Pacific (Mumbai) Region

Post Syndicated from Jeff Barr original https://aws.amazon.com/blogs/aws/now-open-aws-asia-pacific-mumbai-region/

We are expanding the AWS footprint again, this time with a new region in Mumbai, India. AWS customers in the area can use the new Asia Pacific (Mumbai) Region to better serve end users in India.

New Region
The new Mumbai region has two Availability Zones, raising the global total to 35. It supports Amazon Elastic Compute Cloud (EC2) (C4, M4, T2, D2, I2, and R3 instances are available) and related services including Amazon Elastic Block Store (EBS), Amazon Virtual Private Cloud, Auto Scaling, and Elastic Load Balancing.

It also supports the following services:

There are now three edge locations (Mumbai, Chennai, and New Delhi) in India. The locations support Amazon Route 53, Amazon CloudFront, and S3 Transfer Acceleration. AWS Direct Connect support is available via our Direct Connect Partners (listed below).

This is our thirteenth region (see the AWS Global Infrastructure map for more information). As usual, you can see the list of regions in the region menu of the Console:

Customers
There are over 75,000 active AWS customers in India, representing a diverse base of industries. In the time leading up to today’s launch, we have provided some of these customers with access to the new region in preview form. Two of them (Ola Cabs and NDTV) were kind enough to share some of their experience and observations with us:

Ola Cabs’ mobile app leverages AWS to redefine point-to-point transportation in more than 100 cities across India. AWS allows Ola to innovate faster with new features and services for its customers, without compromising the availability or customer experience of its service. Ankit Bhati (CTO and Co-Founder) told us:

We are using technology to create mobility for a billion Indians, by giving them convenience and access to transportation of their choice. Technology is a key enabler, where we use AWS to drive supreme customer experience, and innovate faster on new features & services for our customers. This has helped us reach 100+ cities & 550K driver partners across India. We do petabyte scale analytics using various AWS big data services and deep learning techniques, allowing us to bring our driver-partners close to our customers when they need them. AWS allows us to make 30+ changes a day to our highly scalable micro-services based platform consisting of 100s of low latency APIs, serving millions of requests a day. We have tried the AWS India region. It is great and should help us further enhance the experience for our customers.


NDTV, India’s leading media house, is watched by millions of people across the world. NDTV has been using AWS since 2009 to run its video platform and all of its web properties. During the Indian general elections in May 2014, NDTV fielded an unprecedented amount of web traffic that scaled 26X, from 500 million hits per day to 13 billion hits on Election Day (regularly peaking at 400K hits per second), all running on AWS. According to Kawaljit Singh Bedi (CTO of NDTV Convergence):

NDTV is pleased to report very promising results in terms of reliability and stability of AWS’ infrastructure in India in our preview tests. Based on tests that our technical teams have run in India, we have determined that the network latency from the AWS India infrastructure Region are far superior compared to other alternatives. Our web and mobile traffic has jumped by over 30% in the last year and as we expand to new territories like eCommerce and platform-integration we are very excited on the new AWS India region launch. With the portfolio of services AWS will offer at launch, low latency, great reliability, and the ability to meet regulatory requirements within India, NDTV has decided to move these critical applications and IT infrastructure all-in to the AWS India region from our current set-up.

Here are some of our other customers in the region:

Tata Motors Limited, a leading Indian multinational automotive manufacturing company, runs its telematics systems on AWS. Fleet owners use this solution to monitor all vehicles in their fleet on a real-time basis. AWS has helped Tata Motors become more agile and has increased its speed of experimentation and innovation.

redBus is India’s leading bus ticketing platform that sells their tickets via web, mobile, and bus agents. They now cover over 67K routes in India with over 1,800 bus operators. redBus has scaled to sell more than 40 million bus tickets annually, up from just 2 million in 2010. At peak season, there are over 100 bus ticketing transactions every minute. The company also recently developed a new SaaS app on AWS that gives bus operators the option of handling their own ticketing and managing seat inventories. redBus has gone global expanding to new geographic locations such as Singapore and Peru using AWS.

Hotstar is India’s largest premium streaming platform with more than 85K hours of drama and movies and coverage of every major global sporting event. Launched in February 2015, Hotstar quickly became one of the fastest adopted new apps anywhere in the world. It has now been downloaded by more than 68M users and has attracted followers on the back of a highly evolved video streaming technology and high attention to quality of experience across devices and platforms.

Macmillan India has provided publishing services to the education market in India for more than 120 years. Macmillan India has moved its core enterprise applications — Business Intelligence (BI), Sales and Distribution, Materials Management, Financial Accounting and Controlling, Human Resources, and a customer relationship management (CRM) system — from an existing data center in Chennai to AWS. By moving to AWS, Macmillan India has boosted SAP system availability to almost 100 percent and reduced the time it takes to provision infrastructure from 6 weeks to 30 minutes.

Partners
We are pleased to be working with a broad selection of partners in India. Here’s a sampling:

  • AWS Premier Consulting Partners – BlazeClan Technologies Pvt. Limited, Minjar Cloud Solutions Pvt Ltd, and Wipro.
  • AWS Consulting Partners – Accenture, BluePi, Cloudcover, Frontier, HCL, Powerupcloud, TCS, and Wipro.
  • AWS Technology Partners – Freshdesk, Druva, Indusface, Leadsquared, Manthan, Mithi, Nucleus Software, Newgen, Ramco Systems, Sanovi, and Vinculum.
  • AWS Managed Service Providers – Progressive Infotech and Spruha Technologies.
  • AWS Direct Connect Partners – AirTel, Colt Technology Services, Global Cloud Xchange, GPX, Hutchison Global Communications, Sify, and Tata Communications.

Amazon Offices in India
We have opened six offices in India since 2011 – Delhi, Mumbai, Hyderabad, Bengaluru, Pune, and Chennai. These offices support our diverse customer base in India including enterprises, government agencies, academic institutions, small-to-mid-size companies, startups, and developers.

Support
The full range of AWS Support options (Basic, Developer, Business, and Enterprise) is also available for the Mumbai Region. All AWS support plans include an unlimited number of account and billing support cases, with no long-term contracts.

Compliance
Every AWS region is designed and built to meet rigorous compliance standards including ISO 27001, ISO 9001, ISO 27017, ISO 27018, SOC 1, SOC 2, and PCI DSS Level 1 (to name a few). AWS implements an Information Security Management System (ISMS) that is independently assessed by qualified third parties. These assessments address a wide variety of requirements, which are communicated to customers by making certifications and audit reports available, either on our public-facing website or upon request.

To learn more, take a look at the AWS Cloud Compliance page and our Data Privacy FAQ.

Use it Now
This new region is now open for business and you can start using it today! You can find additional information about the new region, documentation on how to migrate, customer use cases, information on training and other events, and a list of AWS Partners in India on the AWS site.

We have set up a seller of record in India (known as AISPL); please see the AISPL customer agreement for details.


Jeff;