Tag Archives: C5

Now Available – Compute-Intensive C5 Instances for Amazon EC2

Post Syndicated from Jeff Barr original https://aws.amazon.com/blogs/aws/now-available-compute-intensive-c5-instances-for-amazon-ec2/

I’m thrilled to announce that the new compute-intensive C5 instances are available today in six sizes for launch in three AWS regions!

These instances are designed for compute-heavy applications like batch processing, distributed analytics, high-performance computing (HPC), ad serving, highly scalable multiplayer gaming, and video encoding. The new instances offer a 25% price/performance improvement over the C4 instances, with over 50% for some workloads. They also have additional memory per vCPU and, for code that can make use of the new AVX-512 instructions, twice the performance for vector and floating-point workloads.

Over the years we have been working non-stop to provide our customers with the best possible networking, storage, and compute performance, with a long-term focus on offloading many types of work to dedicated hardware designed and built by AWS. The C5 instance type incorporates the latest generation of our hardware offloads, and also takes another big step forward with the addition of a new hypervisor that runs hand-in-glove with our hardware. The new hypervisor allows us to give you access to all of the processing power provided by the host hardware, while also making performance even more consistent and further raising the bar on security. We’ll be sharing many technical details about it at AWS re:Invent.

The New Instances
The C5 instances are available in six sizes:

Instance Name vCPUs RAM EBS Bandwidth Network Bandwidth
c5.large 2 4 GiB Up to 2.25 Gbps Up to 10 Gbps
c5.xlarge 4 8 GiB Up to 2.25 Gbps Up to 10 Gbps
c5.2xlarge 8 16 GiB Up to 2.25 Gbps Up to 10 Gbps
c5.4xlarge 16 32 GiB 2.25 Gbps Up to 10 Gbps
c5.9xlarge 36 72 GiB 4.5 Gbps 10 Gbps
c5.18xlarge 72 144 GiB 9 Gbps 25 Gbps

Each vCPU is a hardware hyperthread on a 3.0 GHz Intel Xeon Platinum 8000-series processor. This custom processor, optimized for EC2, gives you full control over the C-states on the two largest sizes, allowing you to run a single core at up to 3.5 GHz using Intel Turbo Boost Technology.

As you can see from the table, the four smallest instance sizes offer substantially more EBS and network bandwidth than the previous generation of compute-intensive instances.

Because all networking and storage functionality is implemented in hardware, C5 instances require HVM AMIs that include drivers for the Elastic Network Adapter (ENA) and NVMe. The latest Amazon Linux, Microsoft Windows, Ubuntu, RHEL, CentOS, SLES, Debian, and FreeBSD AMIs all support C5 instances. If you are doing machine learning inference or other compute-intensive work, be sure to check out the most recent version of the Intel Math Kernel Library. It has been optimized for the Intel® Xeon® Platinum processor and has the potential to greatly accelerate your work.

In order to remain compatible with instances that use the Xen hypervisor, the device names for EBS volumes will continue to use the existing /dev/sd and /dev/xvd prefixes. The device name that you provide when you attach a volume to an instance is not used because the NVMe driver assigns its own device name (read Amazon EBS and NVMe to learn more):

The nvme command displays additional information about each volume (install it using sudo yum -y install nvme-cli if necessary):

The SN field in the output can be mapped to an EBS volume ID by inserting a “-” after the “vol” prefix (sadly, the NVMe SN field is not long enough to store the entire ID). Here’s a simple script that uses this information to create an EBS snapshot of each attached volume:

$ sudo nvme list | \
  awk '/dev/ {print(gensub("vol", "vol-", 1, $2))}' | \
  xargs -n 1 aws ec2 create-snapshot --volume-id

With a little more work (and a lot of testing), you could create a script that expands EBS volumes that are getting full.
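
For example, here is a minimal sketch of that idea in Python 3 with boto3. It is not production code; the mount point, volume ID, and thresholds are placeholders you would derive yourself (for instance from the nvme list mapping shown above):

import shutil
import boto3

MOUNT_POINT = "/data"                    # hypothetical mount point
VOLUME_ID = "vol-0123456789abcdef0"      # hypothetical EBS volume ID
THRESHOLD = 0.80                         # grow when more than 80% full
GROWTH_FACTOR = 1.2                      # grow by roughly 20%

def maybe_expand_volume():
    usage = shutil.disk_usage(MOUNT_POINT)
    if usage.used / usage.total < THRESHOLD:
        return
    ec2 = boto3.client("ec2")
    current = ec2.describe_volumes(VolumeIds=[VOLUME_ID])["Volumes"][0]["Size"]
    ec2.modify_volume(VolumeId=VOLUME_ID, Size=int(current * GROWTH_FACTOR) + 1)
    # After the modification completes, the filesystem still needs to be
    # extended (for example with growpart and resize2fs or xfs_growfs).

if __name__ == "__main__":
    maybe_expand_volume()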

Getting to C5
As I mentioned earlier, our effort to offload work to hardware accelerators has been underway for quite some time. Here’s a recap:

CC1 – Launched in 2010, the CC1 was designed to support scale-out HPC applications. It was the first EC2 instance to support 10 Gbps networking and one of the first to support HVM virtualization. The network fabric that we designed for the CC1 (based on our own switch hardware) has become the standard for all AWS data centers.

C3 – Launched in 2013, the C3 introduced Enhanced Networking and uses dedicated hardware accelerators to support the software defined network inside of each Virtual Private Cloud (VPC). Hardware virtualization removes the I/O stack from the hypervisor in favor of direct access by the guest OS, resulting in higher performance and reduced variability.

C4 – Launched in 2015, the C4 instances are EBS Optimized by default via a dedicated network connection, and also offload EBS processing (including CPU-intensive crypto operations for encrypted EBS volumes) to a hardware accelerator.

C5 – Launched today, the hypervisor that powers the C5 instances allows practically all of the resources of the host CPU to be devoted to customer instances. The ENA networking and the NVMe interface to EBS are both powered by hardware accelerators. The instances do not require (or support) the Xen paravirtual networking or block device drivers, both of which have been removed in order to increase efficiency.

Going forward, we’ll use this hypervisor to power other instance types and plan to share additional technical details in a set of AWS re:Invent sessions.

Launch a C5 Today
You can launch C5 instances today in the US East (Northern Virginia), US West (Oregon), and EU (Ireland) Regions in On-Demand and Spot form (Reserved Instances are also available), with additional Regions in the works.

One quick note before I go: The current NVMe driver is not optimized for high-performance sequential workloads and we don’t recommend the use of C5 instances in conjunction with sc1 or st1 volumes. We are aware of this issue and have been working to optimize the driver for this important use case.

Jeff;

How to Prepare for AWS’s Move to Its Own Certificate Authority

Post Syndicated from Jonathan Kozolchyk original https://aws.amazon.com/blogs/security/how-to-prepare-for-aws-move-to-its-own-certificate-authority/

AWS Certificate Manager image

Transport Layer Security (TLS, formerly called Secure Sockets Layer [SSL]) is essential for encrypting information that is exchanged on the internet. For example, Amazon.com uses TLS for all traffic on its website, and AWS uses it to secure calls to AWS services.

An electronic document called a certificate verifies the identity of the server when creating such an encrypted connection. The certificate helps establish proof that your web browser is communicating securely with the website that you typed in your browser’s address field. Certificate Authorities, also known as CAs, issue certificates to specific domains. When a domain presents a certificate that is issued by a trusted CA, your browser or application knows it’s safe to make the connection.

In January 2016, AWS launched AWS Certificate Manager (ACM), a service that lets you easily provision, manage, and deploy SSL/TLS certificates for use with AWS services. These certificates are available for no additional charge through Amazon’s own CA: Amazon Trust Services. For browsers and other applications to trust a certificate, the certificate’s issuer must be included in the browser’s trust store, which is a list of trusted CAs. If the issuing CA is not in the trust store, the browser will display an error message (see an example) and applications will show an application-specific error. To ensure the ubiquity of the Amazon Trust Services CA, AWS purchased the Starfield Services CA, a root that is found in most browsers and has been valid since 2005. This means you shouldn’t have to take any action to use the certificates issued by Amazon Trust Services.

AWS has been offering free certificates to AWS customers from the Amazon Trust Services CA. Now, AWS is in the process of moving certificates for services such as Amazon EC2 and Amazon DynamoDB to use certificates from Amazon Trust Services as well. Most software doesn’t need to be changed to handle this transition, but there are exceptions. In this blog post, I show you how to verify that you are prepared to use the Amazon Trust Services CA.

How to tell if the Amazon Trust Services CAs are in your trust store

The following table lists the Amazon Trust Services certificates. To verify that these certificates are in your browser’s trust store, click each Test URL in the table and confirm that it works for you. When a Test URL does not work, it displays an error similar to this example. A scripted check is sketched after the table.

Distinguished name SHA-256 hash of subject public key information Test URL
CN=Amazon Root CA 1,O=Amazon,C=US fbe3018031f9586bcbf41727e417b7d1c45c2f47f93be372a17b96b50757d5a2 Test URL
CN=Amazon Root CA 2,O=Amazon,C=US 7f4296fc5b6a4e3b35d3c369623e364ab1af381d8fa7121533c9d6c633ea2461 Test URL
CN=Amazon Root CA 3,O=Amazon,C=US 36abc32656acfc645c61b71613c4bf21c787f5cabbee48348d58597803d7abc9 Test URL
CN=Amazon Root CA 4,O=Amazon,C=US f7ecded5c66047d28ed6466b543c40e0743abe81d109254dcf845d4c2c7853c5 Test URL
CN=Starfield Services Root Certificate Authority – G2,O=Starfield Technologies\, Inc.,L=Scottsdale,ST=Arizona,C=US 2b071c59a0a0ae76b0eadb2bad23bad4580b69c3601b630c2eaf0613afa83f92 Test URL
Starfield Class 2 Certification Authority 2ce1cb0bf9d2f9e102993fbe215152c3b2dd0cabde1c68e5319b839154dbb7f5 Test URL
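
If you prefer a scripted check, the following Python 3 sketch opens a TLS connection using your system’s default trust store and reports whether the chain validates. The hostname is only a placeholder for one of the Test URL hosts in the table:

import socket
import ssl

TEST_HOST = "replace-with-a-test-url-host"   # placeholder for a Test URL host

context = ssl.create_default_context()        # typically the OS trust store
try:
    with socket.create_connection((TEST_HOST, 443), timeout=10) as sock:
        with context.wrap_socket(sock, server_hostname=TEST_HOST) as tls:
            print("Trusted, negotiated", tls.version())
except ssl.SSLError as err:
    print("Not trusted:", err)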

What to do if the Amazon Trust Services CAs are not in your trust store

If your tests of any of the Test URLs failed, you must update your trust store. The easiest way to update your trust store is to upgrade the operating system or browser that you are using.

You will find the Amazon Trust Services CAs in the following operating systems (release dates are in parentheses):

  • Microsoft Windows versions that have January 2005 or later updates installed, Windows Vista, Windows 7, Windows Server 2008, and newer versions
  • Mac OS X 10.4 with Java for Mac OS X 10.4 Release 5, Mac OS X 10.5 and newer versions
  • Red Hat Enterprise Linux 5 (March 2007), 6, and 7, and CentOS 5, 6, and 7
  • Ubuntu 8.10
  • Debian 5.0
  • Amazon Linux (all versions)
  • Java 1.4.2_12, Java 5 update 2, and all newer versions, including Java 6, Java 7, and Java 8

All modern browsers trust Amazon’s CAs. You can update the certificate bundle in your browser simply by updating your browser. You can find update instructions for each major browser on its vendor’s website.

If your application is using a custom trust store, you must add the Amazon root CAs to your application’s trust store. The instructions for doing this vary based on the application or platform. Please refer to the documentation for the application or platform you are using.

AWS SDKs and CLIs

Most AWS SDKs and CLIs are not impacted by the transition to the Amazon Trust Services CA. If you are using a version of the Python AWS SDK or CLI released before February 5, 2015, you must upgrade. The .NET, Java, PHP, Go, JavaScript, and C++ SDKs and CLIs do not bundle any certificates, so their certificates come from the underlying operating system. The Ruby SDK has included at least one of the required CAs since June 10, 2015. Before that date, the Ruby V2 SDK did not bundle certificates.

Certificate pinning

If you are using a technique called certificate pinning to lock down the CAs you trust on a domain-by-domain basis, you must adjust your pinning to include the Amazon Trust Services CAs. Certificate pinning helps defend you from an attacker using misissued certificates to fool an application into creating a connection to a spoofed host (an illegitimate host masquerading as a legitimate host). The restriction to a specific, pinned certificate is made by checking that the certificate issued is the expected certificate. This is done by checking that the hash of the certificate public key received from the server matches the expected hash stored in the application. If the hashes do not match, the code stops the connection.
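
As an illustration only, here is a minimal Python 3 sketch of the mechanism described above. It pins the SHA-256 hash of the server’s leaf certificate (DER form) because that can be done with the standard library alone; production pinning more commonly hashes the SubjectPublicKeyInfo, as in the table earlier in this post, and pins a CA rather than a leaf. The host and expected hash are placeholders:

import hashlib
import socket
import ssl

HOST = "example.com"                             # placeholder host
EXPECTED_SHA256 = "replace-with-pinned-hash"     # placeholder pinned value

def connection_is_pinned(host, expected_hash):
    context = ssl.create_default_context()
    with socket.create_connection((host, 443), timeout=10) as sock:
        with context.wrap_socket(sock, server_hostname=host) as tls:
            der_cert = tls.getpeercert(binary_form=True)
    # Reject the connection if the presented certificate is not the one we expect.
    return hashlib.sha256(der_cert).hexdigest() == expected_hash

if __name__ == "__main__":
    print(connection_is_pinned(HOST, EXPECTED_SHA256))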

AWS recommends against using certificate pinning because it introduces a potential availability risk. If the certificate to which you pin is replaced, your application will fail to connect. If your use case requires pinning, we recommend that you pin to a CA rather than to an individual certificate. If you are pinning to an Amazon Trust Services CA, you should pin to all CAs shown in the table earlier in this post.

If you have comments about this post, submit them in the “Comments” section below. If you have questions about this post, start a new thread on the ACM forum.

– Jonathan

Automating Security Group Updates with AWS Lambda

Post Syndicated from Ian Scofield original https://aws.amazon.com/blogs/compute/automating-security-group-updates-with-aws-lambda/

Customers often use public endpoints to perform cross-region replication or other application layer communication to remote regions. A common problem is how to protect these endpoints. It can be tempting to open up the security groups to the world due to the complexity of keeping security groups in sync across regions with a dynamically changing infrastructure.

Consider a situation where you are running large clusters of instances in different regions that all require internode connectivity. One approach would be to use a VPN tunnel between regions to provide a secure tunnel over which to send your traffic. A good example of this is the Transit VPC Solution, which is a published AWS solution to help customers quickly get up and running. However, this adds cost and complexity to your solution because it requires additional infrastructure.

Another approach, which I’ll explore in this post, is to restrict access to the nodes by whitelisting the public IP addresses of your hosts in the opposite region. Today, I’ll outline a solution that allows for cross-region security group updates, can handle remote region failures, and supports external actions such as manually terminating instances or adding instances to an existing Auto Scaling group.

Solution overview

The overview of this solution is diagrammed below. Although this post covers limiting access to your instances, you should still implement encryption to protect your data in transit.

If your entire infrastructure is running in a single region, you can reference a security group as the source, allowing your IP addresses to change without any updates required. However, if you’re going across the public internet between regions to perform things like application-level traffic or cross-region replication, this is no longer an option. Security groups are regional. When you go across regions it can be tempting to drop security to enable this communication.

Although using an Elastic IP address can provide you with a static IP address that you can define as a source for your security groups, this may not always be feasible, especially when automatic scaling is desired.

In this example scenario, you have a distributed database that requires full internode communication for replication. If you place a cluster in us-east-1 and us-west-2, you must provide a secure method of communication between the two. Because the database uses cloud best practices, you can add or remove nodes as the load varies.

To start the process of updating your security groups, you must know when an instance has come online to trigger your workflow. Auto Scaling groups have the concept of lifecycle hooks that enable you to perform custom actions as the group launches or terminates instances.

When Auto Scaling begins to launch or terminate an instance, it puts the instance into a wait state (Pending:Wait or Terminating:Wait). The instance remains in this state while you perform your various actions until you tell Auto Scaling to Continue or Abandon, or until the timeout period ends. A lifecycle hook can trigger a CloudWatch event, publish to an Amazon SNS topic, or send to an Amazon SQS queue. For this example, you use CloudWatch Events to trigger an AWS Lambda function that updates an Amazon DynamoDB table.
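
For reference, a lifecycle hook like the one used here can be registered with a few lines of boto3; the hook, group, and timeout values below are placeholders, and the transition events are delivered to CloudWatch Events automatically:

import boto3

autoscaling = boto3.client("autoscaling")

autoscaling.put_lifecycle_hook(
    LifecycleHookName="hook-launching",            # placeholder hook name
    AutoScalingGroupName="my-cluster-asg",         # placeholder group name
    LifecycleTransition="autoscaling:EC2_INSTANCE_LAUNCHING",
    HeartbeatTimeout=300,       # seconds the instance stays in Pending:Wait
    DefaultResult="ABANDON",    # outcome if nothing completes the hook in time
)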

Component breakdown

Here’s a quick breakdown of the components involved in this solution:

• Lambda function
• CloudWatch event
• DynamoDB table

Lambda function

The Lambda function automatically updates your security groups in the following way (a condensed sketch follows the list):

1. Determines whether a change was triggered by your Auto Scaling group lifecycle hook or manually invoked for a “true up” functionality, which I discuss later in this post.
2. Describes the instances in the Auto Scaling group and obtains the public IP addresses for each instance.
3. Updates both local and remote DynamoDB tables.
4. Compares the list of public IP addresses for both local and remote clusters with what’s already in the local region security group, and updates that security group.
5. Compares the list of public IP addresses for both local and remote clusters with what’s already in the remote region security group, and updates that security group.
6. Signals CONTINUE back to the lifecycle hook.
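
The following condensed sketch shows that flow in Python with boto3. It is not the full published function: the security group ID and port are placeholders, rule revocation is left out, and the DynamoDB updates are only noted in a comment (they are sketched later in the post):

import boto3
from botocore.exceptions import ClientError

LOCAL_SG_ID = "sg-0123456789abcdef0"   # placeholder security group ID
PORT = 7000                            # placeholder cluster port

ec2 = boto3.client("ec2")
autoscaling = boto3.client("autoscaling")

def public_ips_for_group(group_name):
    """Return the public IP addresses of the instances in an Auto Scaling group."""
    group = autoscaling.describe_auto_scaling_groups(
        AutoScalingGroupNames=[group_name])["AutoScalingGroups"][0]
    instance_ids = [i["InstanceId"] for i in group["Instances"]]
    addresses = []
    if instance_ids:
        reservations = ec2.describe_instances(InstanceIds=instance_ids)["Reservations"]
        for reservation in reservations:
            for instance in reservation["Instances"]:
                if "PublicIpAddress" in instance:
                    addresses.append(instance["PublicIpAddress"])
    return addresses

def lambda_handler(event, context):
    group_name = event["detail"]["AutoScalingGroupName"]
    addresses = public_ips_for_group(group_name)

    # Steps 3-5: update the local and remote DynamoDB tables and reconcile
    # both security groups; the table updates are sketched later in the post.
    for address in addresses:
        try:
            ec2.authorize_security_group_ingress(
                GroupId=LOCAL_SG_ID, IpProtocol="tcp",
                FromPort=PORT, ToPort=PORT, CidrIp=address + "/32")
        except ClientError:
            pass  # the rule is already present

    # Step 6: signal CONTINUE so Auto Scaling finishes the launch or termination.
    if "LifecycleHookName" in event["detail"]:
        autoscaling.complete_lifecycle_action(
            LifecycleHookName=event["detail"]["LifecycleHookName"],
            AutoScalingGroupName=group_name,
            LifecycleActionToken=event["detail"]["LifecycleActionToken"],
            LifecycleActionResult="CONTINUE")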

CloudWatch event

The CloudWatch event triggers when an instance passes through either the launching or terminating states. When the Lambda function gets invoked, it receives an event that looks like the following:

{
	"account": "123456789012",
	"region": "us-east-1",
	"detail": {
		"LifecycleHookName": "hook-launching",
		"AutoScalingGroupName": "",
		"LifecycleActionToken": "33965228-086a-4aeb-8c26-f82ed3bef495",
		"LifecycleTransition": "autoscaling:EC2_INSTANCE_LAUNCHING",
		"EC2InstanceId": "i-017425ec54f22f994"
	},
	"detail-type": "EC2 Instance-launch Lifecycle Action",
	"source": "aws.autoscaling",
	"version": "0",
	"time": "2017-05-03T02:20:59Z",
	"id": "cb930cf8-ce8b-4b6c-8011-af17966eb7e2",
	"resources": [
		"arn:aws:autoscaling:us-east-1:123456789012:autoScalingGroup:d3fe9d96-34d0-4c62-b9bb-293a41ba3765:autoScalingGroupName/"
	]
}

DynamoDB table

You use DynamoDB to store lists of remote IP addresses in a local table that is updated by the opposite region as a failsafe source of truth. Although you can describe your Auto Scaling group for the local region, you must maintain a list of IP addresses for the remote region.

To minimize the number of describe calls and prevent an issue in the remote region from blocking your local scaling actions, we keep a list of the remote IP addresses in a local DynamoDB table. Each Lambda function in each region is responsible for updating the public IP addresses of its Auto Scaling group for both the local and remote tables.

As with all the infrastructure in this solution, there is a DynamoDB table in each region, and the two tables mirror each other. For example, the following screenshot shows a sample DynamoDB table. The Lambda function in us-east-1 would update the DynamoDB entry for us-east-1 in both tables in both regions.

By updating a DynamoDB table in both regions, it allows the local region to gracefully handle issues with the remote region, which would otherwise prevent your ability to scale locally. If the remote region becomes inaccessible, you have a copy of the latest configuration from the table that you can use to continue to sync with your security groups. When the remote region comes back online, it pushes its updated public IP addresses to the DynamoDB table. The security group is updated to reflect the current status by the remote Lambda function.
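
A minimal sketch of that dual write with boto3 could look like the following. The region, table, and key names are placeholders, and in the real function the remote write is wrapped in error handling so that a remote outage never blocks local scaling:

import boto3

LOCAL_REGION, REMOTE_REGION = "us-east-1", "us-west-2"
LOCAL_TABLE, REMOTE_TABLE = "sg-sync-us-east-1", "sg-sync-us-west-2"   # placeholders

def publish_addresses(addresses):
    """Write this region's current public IP addresses to both regional tables."""
    item = {"region": LOCAL_REGION, "addresses": addresses}
    for region, table_name in ((LOCAL_REGION, LOCAL_TABLE),
                               (REMOTE_REGION, REMOTE_TABLE)):
        table = boto3.resource("dynamodb", region_name=region).Table(table_name)
        table.put_item(Item=item)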

 

Walkthrough

Note: All of the following steps are performed in both regions. The Launch Stack buttons will default to the us-east-1 region.

Here’s a quick overview of the steps involved in this process:

1. An instance is launched or terminated, which triggers an Auto Scaling group lifecycle hook, triggering the Lambda function via CloudWatch Events.
2. The Lambda function retrieves the list of public IP addresses for all instances in the local region Auto Scaling group.
3. The Lambda function updates the local and remote region DynamoDB tables with the public IP addresses just received for the local Auto Scaling group.
4. The Lambda function updates the local region security group with the public IP addresses, removing and adding to ensure that it mirrors what is present for the local and remote Auto Scaling groups.
5. The Lambda function updates the remote region security group with the public IP addresses, removing and adding to ensure that it mirrors what is present for the local and remote Auto Scaling groups.

Prerequisites

To deploy this solution, you need to have Auto Scaling groups, launch configurations, and a base security group in both regions. To expedite this process, this CloudFormation template can be launched in both regions.

Step 1: Launch the AWS SAM template in the first region

To make the deployment process easy, I’ve created an AWS Serverless Application Model (AWS SAM) template, which is a new specification that makes it easier to manage and deploy serverless applications on AWS. This template creates the following resources:

• A Lambda function, to perform the various security group actions
• A DynamoDB table, to track the state of the local and remote Auto Scaling groups
• Auto Scaling group lifecycle hooks for instance launching and terminating
• A CloudWatch event, to track the EC2 Instance-launch Lifecycle Action and EC2 Instance-terminate Lifecycle Action events
• A pointer from the CloudWatch event to the Lambda function, and the necessary permissions

Download the template from here or click to launch.

Upon launching the template, you’ll be presented with a list of parameters, which includes the remote and local names for your Auto Scaling groups, the AWS Region, security group IDs, and DynamoDB table names, as well as where the code for the Lambda function is located. Because this is the first region you’re launching the stack in, fill out all the parameters except for the RemoteTable parameter, as it hasn’t been created yet (you fill this in later).

Step 2: Test the local region

After the stack has finished launching, you can test the local region. Open the EC2 console and find the Auto Scaling group that was created when launching the prerequisite stack. Change the desired number of instances from 0 to 1.

For both regions, check your security group to verify that the public IP address of the instance created is now in the security group.

Local region:

Remote region:

Now, change the desired number of instances for your group back to 0 and verify that the rules are properly removed.

Local region:

Remote region:

Step 3: Launch in the remote region

When you deploy a Lambda function using CloudFormation, the Lambda zip file needs to reside in the same region you are launching the template. Once you choose your remote region, create an Amazon S3 bucket and upload the Lambda zip file there. Next, go to the remote region and launch the same SAM template as before, but make sure you update the CodeBucket and CodeKey parameters. Also, because this is the second launch, you now have all the values and can fill out all the parameters, specifically the RemoteTable value.

 

Step 4: Update the local region Lambda environment variable

When you originally launched the template in the local region, you didn’t have the name of the DynamoDB table for the remote region, because you hadn’t created it yet. Now that you have launched the remote template, you can perform a CloudFormation stack update on the initial SAM template. This populates the remote DynamoDB table name into the initial Lambda function’s environment variables.

In the CloudFormation console in the initial region, select the stack. Under Actions, choose Update Stack, and select the SAM template used for both regions. Under Parameters, populate the remote DynamoDB table name, as shown below. Choose Next and let the stack update complete. This updates your Lambda function and completes the setup process.

 

Step 5: Final testing

You now have everything fully configured and in place to trigger security group changes based on instances being added or removed to your Auto Scaling groups in both regions. Test this by changing the desired capacity of your group in both regions.

True up functionality
If an instance is manually added or removed from the Auto Scaling group, the lifecycle hooks don’t get triggered. To account for this, the Lambda function supports a “true up” functionality in which the function can be manually invoked. If you paste in the following JSON text for your test event, it kicks off the entire workflow. For added peace of mind, you can also have this function fire via a CloudWatch event with a CRON expression for nearly continuous checking.

{
	"detail": {
		"AutoScalingGroupName": "<your ASG name>"
	},
	"trueup":true
}
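
If you would rather trigger the true up from a script than from a console test event, a small boto3 sketch like the following works; the function name is a placeholder for whatever name your SAM stack assigned:

import json
import boto3

lambda_client = boto3.client("lambda")
payload = {"detail": {"AutoScalingGroupName": "<your ASG name>"}, "trueup": True}

response = lambda_client.invoke(
    FunctionName="sg-sync-function",        # placeholder function name
    InvocationType="RequestResponse",
    Payload=json.dumps(payload),
)
print(response["StatusCode"])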

Extra credit

Now that all the resources are created in both regions, go back and scope the policy down to incorporate resource-level permissions for specific security groups, Auto Scaling groups, and the DynamoDB tables.

Although this post is centered around using public IP addresses for your instances, you could instead use a VPN between regions. In this case, you would still be able to use this solution to scope down the security groups to the cluster instances. However, the code would need to be modified to support private IP addresses.

 

Conclusion

At this point, you now have a mechanism in place that captures when a new instance is added to or removed from your cluster and updates the security groups in both regions. This ensures that you are locking down your infrastructure securely by allowing access only to other cluster members.

Keep in mind that this architecture (lifecycle hooks, CloudWatch event, Lambda function, and DynamoDB table) requires the infrastructure to be deployed in both regions so that synchronization goes both ways.

Because this Lambda function is modifying security group rules, it’s important to have an audit log of what has been modified and who is modifying them. The out-of-the-box function provides logs in CloudWatch for what IP addresses are being added and removed for which ports. As these are all API calls being made, they are logged in CloudTrail and can be traced back to the IAM role that you created for your lifecycle hooks. This can provide historical data that can be used for troubleshooting or auditing purposes.

Security is paramount at AWS. We want to ensure that customers are protecting access to their resources. This solution helps you keep your security groups in both regions automatically in sync with your Auto Scaling group resources. Let us know if you have any questions or other solutions you’ve come up with!

Implementing Default Directory Indexes in Amazon S3-backed Amazon CloudFront Origins Using Lambda@Edge

Post Syndicated from Ronnie Eichler original https://aws.amazon.com/blogs/compute/implementing-default-directory-indexes-in-amazon-s3-backed-amazon-cloudfront-origins-using-lambdaedge/

With the recent launch of Lambda@Edge, it’s now possible for you to provide even more robust functionality to your static websites. Amazon CloudFront is a content distribution network service. In this post, I show how you can use Lambda@Edge along with the CloudFront origin access identity (OAI) for Amazon S3 and still provide simple URLs (such as www.example.com/about/ instead of www.example.com/about/index.html).

Background

Amazon S3 is a great platform for hosting a static website. You don’t need to worry about managing servers or underlying infrastructure—you just publish your static content to an S3 bucket. S3 provides a DNS name such as <bucket-name>.s3-website-<AWS-region>.amazonaws.com. Use this name for your website by creating a CNAME record in your domain’s DNS environment (or Amazon Route 53) as follows:

www.example.com -> <bucket-name>.s3-website-<AWS-region>.amazonaws.com

You can also put CloudFront in front of S3 to further scale the performance of your site and cache the content closer to your users. CloudFront can enable HTTPS-hosted sites, by either using a custom Secure Sockets Layer (SSL) certificate or a managed certificate from AWS Certificate Manager. In addition, CloudFront also offers integration with AWS WAF, a web application firewall. As you can see, it’s possible to achieve some robust functionality by using S3, CloudFront, and other managed services and not have to worry about maintaining underlying infrastructure.

One of the key concerns that you might have when implementing any type of WAF or CDN is that you want to force your users to go through the CDN. If you implement CloudFront in front of S3, you can achieve this by using an OAI. However, in order to do this, you cannot use the HTTP endpoint that is exposed by S3’s static website hosting feature. Instead, CloudFront must use the S3 REST endpoint to fetch content from your origin so that the request can be authenticated using the OAI. This presents some challenges in that the REST endpoint does not support redirection to a default index page.

CloudFront does allow you to specify a default root object (index.html), but it only works on the root of the website (such as http://www.example.com > http://www.example.com/index.html). It does not work on any subdirectory (such as http://www.example.com/about/). If you were to attempt to request this URL through CloudFront, CloudFront would do an S3 GetObject API call against a key that does not exist.

Of course, it is a bad user experience to expect users to always type index.html at the end of every URL (or even know that it should be there). Until now, there has not been an easy way to provide these simpler URLs (equivalent to the DirectoryIndex directive in an Apache web server configuration) to users through CloudFront, at least not if you still want to be able to restrict access to the S3 origin using an OAI. However, with the release of Lambda@Edge, you can use a JavaScript function running on the CloudFront edge nodes to look for these patterns and request the appropriate object key from the S3 origin.

Solution

In this example, you use the compute power at the CloudFront edge to inspect the request as it’s coming in from the client, and then rewrite the request so that CloudFront requests a default index object (index.html in this case) for any request URI that ends in ‘/’.

When a request is made against a web server, the client specifies the object to obtain in the request. You can use this URI and apply a regular expression to it so that these URIs get resolved to a default index object before CloudFront requests the object from the origin. Use the following code:

'use strict';
exports.handler = (event, context, callback) => {
    
    // Extract the request from the CloudFront event that is sent to Lambda@Edge
    var request = event.Records[0].cf.request;

    // Extract the URI from the request
    var olduri = request.uri;

    // Match any '/' that occurs at the end of a URI. Replace it with a default index
    var newuri = olduri.replace(/\/$/, '\/index.html');
    
    // Log the URI as received by CloudFront and the new URI to be used to fetch from origin
    console.log("Old URI: " + olduri);
    console.log("New URI: " + newuri);
    
    // Replace the received URI with the URI that includes the index page
    request.uri = newuri;
    
    // Return to CloudFront
    return callback(null, request);

};

To get started, create an S3 bucket to be the origin for CloudFront:

Create bucket

On the other screens, you can just accept the defaults for the purposes of this walkthrough. If this were a production implementation, I would recommend enabling bucket logging and specifying an existing S3 bucket as the destination for access logs. These logs can be useful if you need to troubleshoot issues with your S3 access.

Now, put some content into your S3 bucket. For this walkthrough, create two simple webpages to demonstrate the functionality: a page that resides at the website root, and another that is in a subdirectory.

<s3bucketname>/index.html

<!doctype html>
<html>
    <head>
        <meta charset="utf-8">
        <title>Root home page</title>
    </head>
    <body>
        <p>Hello, this page resides in the root directory.</p>
    </body>
</html>

<s3bucketname>/subdirectory/index.html

<!doctype html>
<html>
    <head>
        <meta charset="utf-8">
        <title>Subdirectory home page</title>
    </head>
    <body>
        <p>Hello, this page resides in the /subdirectory/ directory.</p>
    </body>
</html>

When uploading the files into S3, you can accept the defaults. You add a bucket policy as part of the CloudFront distribution creation that allows CloudFront to access the S3 origin. You should now have an S3 bucket that looks like the following:

Root of bucket

Subdirectory in bucket

Next, create a CloudFront distribution that your users will use to access the content. Open the CloudFront console, and choose Create Distribution. For Select a delivery method for your content, under Web, choose Get Started.

On the next screen, you set up the distribution. Below are the options to configure:

  • Origin Domain Name:  Select the S3 bucket that you created earlier.
  • Restrict Bucket Access: Choose Yes.
  • Origin Access Identity: Create a new identity.
  • Grant Read Permissions on Bucket: Choose Yes, Update Bucket Policy.
  • Object Caching: Choose Customize (I am changing the behavior to avoid having CloudFront cache objects, as this could affect your ability to troubleshoot while implementing the Lambda code).
    • Minimum TTL: 0
    • Maximum TTL: 0
    • Default TTL: 0

You can accept all of the other defaults. Again, this is a proof-of-concept exercise. After you are comfortable that the CloudFront distribution is working properly with the origin and Lambda code, you can re-visit the preceding values and make changes before implementing it in production.

CloudFront distributions can take several minutes to deploy (because the changes have to propagate out to all of the edge locations). After that’s done, test the functionality of the S3-backed static website. Looking at the distribution, you can see that CloudFront assigns a domain name:

CloudFront Distribution Settings

Try to access the website using a combination of various URLs:

http://<domainname>/:  Works

› curl -v http://d3gt20ea1hllb.cloudfront.net/
*   Trying 54.192.192.214...
* TCP_NODELAY set
* Connected to d3gt20ea1hllb.cloudfront.net (54.192.192.214) port 80 (#0)
> GET / HTTP/1.1
> Host: d3gt20ea1hllb.cloudfront.net
> User-Agent: curl/7.51.0
> Accept: */*
>
< HTTP/1.1 200 OK
< ETag: "cb7e2634fe66c1fd395cf868087dd3b9"
< Accept-Ranges: bytes
< Server: AmazonS3
< X-Cache: Miss from cloudfront
< X-Amz-Cf-Id: -D2FSRwzfcwyKZKFZr6DqYFkIf4t7HdGw2MkUF5sE6YFDxRJgi0R1g==
< Content-Length: 209
< Content-Type: text/html
< Last-Modified: Wed, 19 Jul 2017 19:21:16 GMT
< Via: 1.1 6419ba8f3bd94b651d416054d9416f1e.cloudfront.net (CloudFront), 1.1 iad6-proxy-3.amazon.com:80 (Cisco-WSA/9.1.2-010)
< Connection: keep-alive
<
<!doctype html>
<html>
    <head>
        <meta charset="utf-8">
        <title>Root home page</title>
    </head>
    <body>
        <p>Hello, this page resides in the root directory.</p>
    </body>
</html>
* Curl_http_done: called premature == 0
* Connection #0 to host d3gt20ea1hllb.cloudfront.net left intact

This is because CloudFront is configured to request a default root object (index.html) from the origin.

http://<domainname>/subdirectory/:  Doesn’t work

› curl -v http://d3gt20ea1hllb.cloudfront.net/subdirectory/
*   Trying 54.192.192.214...
* TCP_NODELAY set
* Connected to d3gt20ea1hllb.cloudfront.net (54.192.192.214) port 80 (#0)
> GET /subdirectory/ HTTP/1.1
> Host: d3gt20ea1hllb.cloudfront.net
> User-Agent: curl/7.51.0
> Accept: */*
>
< HTTP/1.1 200 OK
< ETag: "d41d8cd98f00b204e9800998ecf8427e"
< x-amz-server-side-encryption: AES256
< Accept-Ranges: bytes
< Server: AmazonS3
< X-Cache: Miss from cloudfront
< X-Amz-Cf-Id: Iqf0Gy8hJLiW-9tOAdSFPkL7vCWBrgm3-1ly5tBeY_izU82ftipodA==
< Content-Length: 0
< Content-Type: application/x-directory
< Last-Modified: Wed, 19 Jul 2017 19:21:24 GMT
< Via: 1.1 6419ba8f3bd94b651d416054d9416f1e.cloudfront.net (CloudFront), 1.1 iad6-proxy-3.amazon.com:80 (Cisco-WSA/9.1.2-010)
< Connection: keep-alive
<
* Curl_http_done: called premature == 0
* Connection #0 to host d3gt20ea1hllb.cloudfront.net left intact

If you use a tool such as cURL to test this, you notice that CloudFront and S3 are returning a blank response. The reason for this is that the subdirectory does exist, but it does not resolve to an S3 object. Keep in mind that S3 is an object store, so there are no real directories. User interfaces such as the S3 console present a hierarchical view of a bucket with folders based on the presence of forward slashes, but behind the scenes the bucket is just a collection of keys that represent stored objects.

http://<domainname>/subdirectory/index.html:  Works

› curl -v http://d3gt20ea1hllb.cloudfront.net/subdirectory/index.html
*   Trying 54.192.192.130...
* TCP_NODELAY set
* Connected to d3gt20ea1hllb.cloudfront.net (54.192.192.130) port 80 (#0)
> GET /subdirectory/index.html HTTP/1.1
> Host: d3gt20ea1hllb.cloudfront.net
> User-Agent: curl/7.51.0
> Accept: */*
>
< HTTP/1.1 200 OK
< Date: Thu, 20 Jul 2017 20:35:15 GMT
< ETag: "ddf87c487acf7cef9d50418f0f8f8dae"
< Accept-Ranges: bytes
< Server: AmazonS3
< X-Cache: RefreshHit from cloudfront
< X-Amz-Cf-Id: bkh6opXdpw8pUomqG3Qr3UcjnZL8axxOH82Lh0OOcx48uJKc_Dc3Cg==
< Content-Length: 227
< Content-Type: text/html
< Last-Modified: Wed, 19 Jul 2017 19:21:45 GMT
< Via: 1.1 3f2788d309d30f41de96da6f931d4ede.cloudfront.net (CloudFront), 1.1 iad6-proxy-3.amazon.com:80 (Cisco-WSA/9.1.2-010)
< Connection: keep-alive
<
<!doctype html>
<html>
    <head>
        <meta charset="utf-8">
        <title>Subdirectory home page</title>
    </head>
    <body>
        <p>Hello, this page resides in the /subdirectory/ directory.</p>
    </body>
</html>
* Curl_http_done: called premature == 0
* Connection #0 to host d3gt20ea1hllb.cloudfront.net left intact

This request works as expected because you are referencing the object directly. Now, you implement the Lambda@Edge function to return the default index.html page for any subdirectory. Looking at the example JavaScript code, here’s where the magic happens:

var newuri = olduri.replace(/\/$/, '\/index.html');

You are going to use a JavaScript regular expression to match any ‘/’ that occurs at the end of the URI and replace it with ‘/index.html’. This is the equivalent of what S3 does on its own with static website hosting. However, as I mentioned earlier, you can’t rely on this if you want to use a policy on the bucket to restrict it so that users must access the bucket through CloudFront. That way, all requests to the S3 bucket must be authenticated using the S3 REST API. Because of this, you implement a Lambda@Edge function that takes any client request ending in ‘/’ and appends a default ‘index.html’ to the request before requesting the object from the origin.

In the Lambda console, choose Create function. On the next screen, skip the blueprint selection and choose Author from scratch, as you’ll use the sample code provided.

Next, configure the trigger. Choosing the empty box shows a list of available triggers. Choose CloudFront and select your CloudFront distribution ID (created earlier). For this example, leave Cache Behavior as * and CloudFront Event as Origin Request. Select the Enable trigger and replicate box and choose Next.

Lambda Trigger

Next, give the function a name and a description. Then, copy and paste the following code:

'use strict';
exports.handler = (event, context, callback) => {
    
    // Extract the request from the CloudFront event that is sent to Lambda@Edge
    var request = event.Records[0].cf.request;

    // Extract the URI from the request
    var olduri = request.uri;

    // Match any '/' that occurs at the end of a URI. Replace it with a default index
    var newuri = olduri.replace(/\/$/, '\/index.html');
    
    // Log the URI as received by CloudFront and the new URI to be used to fetch from origin
    console.log("Old URI: " + olduri);
    console.log("New URI: " + newuri);
    
    // Replace the received URI with the URI that includes the index page
    request.uri = newuri;
    
    // Return to CloudFront
    return callback(null, request);

};

Next, define a role that grants permissions to the Lambda function. For this example, choose Create new role from template, Basic Edge Lambda permissions. This creates a new IAM role for the Lambda function and grants the following permissions:

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "logs:CreateLogGroup",
                "logs:CreateLogStream",
                "logs:PutLogEvents"
            ],
            "Resource": [
                "arn:aws:logs:*:*:*"
            ]
        }
    ]
}

In a nutshell, these are the permissions that the function needs to create the necessary CloudWatch log group and log stream, and to put the log events so that the function is able to write logs when it executes.

After the function has been created, you can go back to the browser (or cURL) and re-run the test for the subdirectory request that failed previously:

› curl -v http://d3gt20ea1hllb.cloudfront.net/subdirectory/
*   Trying 54.192.192.202...
* TCP_NODELAY set
* Connected to d3gt20ea1hllb.cloudfront.net (54.192.192.202) port 80 (#0)
> GET /subdirectory/ HTTP/1.1
> Host: d3gt20ea1hllb.cloudfront.net
> User-Agent: curl/7.51.0
> Accept: */*
>
< HTTP/1.1 200 OK
< Date: Thu, 20 Jul 2017 21:18:44 GMT
< ETag: "ddf87c487acf7cef9d50418f0f8f8dae"
< Accept-Ranges: bytes
< Server: AmazonS3
< X-Cache: Miss from cloudfront
< X-Amz-Cf-Id: rwFN7yHE70bT9xckBpceTsAPcmaadqWB9omPBv2P6WkIfQqdjTk_4w==
< Content-Length: 227
< Content-Type: text/html
< Last-Modified: Wed, 19 Jul 2017 19:21:45 GMT
< Via: 1.1 3572de112011f1b625bb77410b0c5cca.cloudfront.net (CloudFront), 1.1 iad6-proxy-3.amazon.com:80 (Cisco-WSA/9.1.2-010)
< Connection: keep-alive
<
<!doctype html>
<html>
    <head>
        <meta charset="utf-8">
        <title>Subdirectory home page</title>
    </head>
    <body>
        <p>Hello, this page resides in the /subdirectory/ directory.</p>
    </body>
</html>
* Curl_http_done: called premature == 0
* Connection #0 to host d3gt20ea1hllb.cloudfront.net left intact

You have now configured a way for CloudFront to return a default index page for subdirectories in S3!

Summary

In this post, you used Lambda@Edge to be able to use CloudFront with an S3 origin access identity and serve a default root object on subdirectory URLs. To find out more about this use case, see Lambda@Edge integration with CloudFront in our documentation.

If you have questions or suggestions, feel free to comment below. For troubleshooting or implementation help, check out the Lambda forum.

Parallel Processing in Python with AWS Lambda

Post Syndicated from Oz Akan original https://aws.amazon.com/blogs/compute/parallel-processing-in-python-with-aws-lambda/

If you develop an AWS Lambda function with Node.js, you can call multiple web services without waiting for a response due to its asynchronous nature.  All requests are initiated almost in parallel, so you can get results much faster than a series of sequential calls to each web service. Considering the maximum execution duration for Lambda, it is beneficial for I/O bound tasks to run in parallel.

If you develop a Lambda function with Python, parallelism doesn’t come by default. Lambda supports Python 2.7 and Python 3.6, both of which have multiprocessing and threading modules. The multiprocessing module supports multiple cores, so it is a better choice, especially for CPU-intensive workloads. With the threading module, all threads are going to run on a single core, though the performance difference is negligible for network-bound tasks.

In this post, I demonstrate how the Python multiprocessing module can be used within a Lambda function to run multiple I/O bound tasks in parallel.

Example use case

In this example, you call Amazon EC2 and Amazon EBS API operations to find the total EBS volume size for all your EC2 instances in a region.

This is a two-step process:

  • The Lambda function calls EC2 to list all EC2 instances
  • The function calls EBS for each instance to find attached EBS volumes

Sequential Execution

If you make these calls sequentially, during the second step, your code has to loop over all the instances and wait for each response before moving to the next request.

The class named VolumesSequential has the following methods:

  • __init__ creates an EC2 resource.
  • total_size returns all EC2 instances and passes these to the instance_volumes method.
  • instance_volumes finds the total size of EBS volumes for the instance.
  • total_size adds all sizes from all instances to find total size for the EBS volumes.

Source Code for Sequential Execution

import time
import boto3

class VolumesSequential(object):
    """Finds total volume size for all EC2 instances"""
    def __init__(self):
        self.ec2 = boto3.resource('ec2')

    def instance_volumes(self, instance):
        """
        Finds total size of the EBS volumes attached
        to an EC2 instance
        """
        instance_total = 0
        for volume in instance.volumes.all():
            instance_total += volume.size
        return instance_total

    def total_size(self):
        """
        Lists all EC2 instances in the default region
        and sums result of instance_volumes
        """
        print "Running sequentially"
        instances = self.ec2.instances.all()
        instances_total = 0
        for instance in instances:
            instances_total += self.instance_volumes(instance)
        return instances_total

def lambda_handler(event, context):
    volumes = VolumesSequential()
    _start = time.time()
    total = volumes.total_size()
    print "Total volume size: %s GB" % total
    print "Sequential execution time: %s seconds" % (time.time() - _start)

Parallel Execution

The multiprocessing module that comes with Python 2.7 lets you run multiple processes in parallel. Due to the Lambda execution environment not having /dev/shm (shared memory for processes) support, you can’t use multiprocessing.Queue or multiprocessing.Pool.

If you try to use multiprocessing.Queue, you get an error similar to the following:

[Errno 38] Function not implemented: OSError
…
    sl = self._semlock = _multiprocessing.SemLock(kind, value, maxvalue)
OSError: [Errno 38] Function not implemented

On the other hand, you can use multiprocessing.Pipe instead of multiprocessing.Queue to accomplish what you need without getting any errors during the execution of the Lambda function.

The class named VolumeParallel has the following methods:

  • __init__ creates an EC2 resource
  • instance_volumes finds the total size of EBS volumes attached to an instance
  • total_size finds all instances and runs instance_volumes for each to find the total size of all EBS volumes attached to all EC2 instances.

Source Code for Parallel Execution

import time
from multiprocessing import Process, Pipe
import boto3

class VolumesParallel(object):
    """Finds total volume size for all EC2 instances"""
    def __init__(self):
        self.ec2 = boto3.resource('ec2')

    def instance_volumes(self, instance, conn):
        """
        Finds total size of the EBS volumes attached
        to an EC2 instance
        """
        instance_total = 0
        for volume in instance.volumes.all():
            instance_total += volume.size
        conn.send([instance_total])
        conn.close()

    def total_size(self):
        """
        Lists all EC2 instances in the default region
        and sums result of instance_volumes
        """
        print "Running in parallel"

        # get all EC2 instances
        instances = self.ec2.instances.all()
        
        # create a list to keep all processes
        processes = []

        # create a list to keep connections
        parent_connections = []
        
        # create a process per instance
        for instance in instances:            
            # create a pipe for communication
            parent_conn, child_conn = Pipe()
            parent_connections.append(parent_conn)

            # create the process, pass instance and connection
            process = Process(target=self.instance_volumes, args=(instance, child_conn,))
            processes.append(process)

        # start all processes
        for process in processes:
            process.start()

        # make sure that all processes have finished
        for process in processes:
            process.join()

        instances_total = 0
        for parent_connection in parent_connections:
            instances_total += parent_connection.recv()[0]

        return instances_total


def lambda_handler(event, context):
    volumes = VolumesParallel()
    _start = time.time()
    total = volumes.total_size()
    print "Total volume size: %s GB" % total
    print "Sequential execution time: %s seconds" % (time.time() - _start)

Performance

There are a few differences between two Lambda functions when it comes to the execution environment. The parallel function requires more memory than the sequential one. You may run the parallel Lambda function with a relatively large memory setting to see how much memory it uses. The amount of memory required by the Lambda function depends on what the function does and how many processes it runs in parallel. To restrict maximum memory usage, you may want to limit the number of parallel executions.
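
One simple way to cap concurrency is to launch the processes in fixed-size batches rather than all at once. Here is a minimal sketch of that idea, reusing the Pipe-based worker pattern from the code above; MAX_PARALLEL is an arbitrary example value:

from multiprocessing import Process, Pipe

MAX_PARALLEL = 8   # example cap on concurrent processes

def run_in_batches(work_items, worker):
    """Run worker(item, conn) for every item, at most MAX_PARALLEL at a time."""
    results = []
    for start in range(0, len(work_items), MAX_PARALLEL):
        processes, parents = [], []
        for item in work_items[start:start + MAX_PARALLEL]:
            parent_conn, child_conn = Pipe()
            parents.append(parent_conn)
            process = Process(target=worker, args=(item, child_conn))
            processes.append(process)
            process.start()
        for process in processes:
            process.join()
        # Each worker sends a single-element list, as in the example above.
        for parent in parents:
            results.append(parent.recv()[0])
    return results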

In this case, when you give 1024 MB for both Lambda functions, the parallel function runs about two times faster than the sequential function. I have a handful of EC2 instances and EBS volumes in my account so the test ran way under the maximum execution limit for Lambda. Remember that parallel execution doesn’t guarantee that the runtime for the Lambda function will be under the maximum allowed duration but does speed up the overall execution time.

Sequential Run Time Output

START RequestId: 4c370b12-f9d3-11e6-b46b-b5d41afd648e Version: $LATEST
Running sequentially
Total volume size: 589 GB
Sequential execution time: 3.80066084862 seconds
END RequestId: 4c370b12-f9d3-11e6-b46b-b5d41afd648e
REPORT RequestId: 4c370b12-f9d3-11e6-b46b-b5d41afd648e Duration: 4091.59 ms Billed Duration: 4100 ms  Memory Size: 1024 MB Max Memory Used: 46 MB

Parallel Run Time Output

START RequestId: 4f1328ed-f9d3-11e6-8cd1-c7381c5c078d Version: $LATEST
Running in parallel
Total volume size: 589 GB
Sequential execution time: 1.89170885086 seconds
END RequestId: 4f1328ed-f9d3-11e6-8cd1-c7381c5c078d
REPORT RequestId: 4f1328ed-f9d3-11e6-8cd1-c7381c5c078d Duration: 2069.33 ms Billed Duration: 2100 ms  Memory Size: 1024 MB Max Memory Used: 181 MB 

Summary

In this post, I demonstrated how to run multiple I/O bound tasks in parallel by developing a Lambda function with the Python multiprocessing module. With the help of this module, you freed the CPU from waiting for I/O and fired up several tasks to fit more I/O bound operations into a given time frame. This might be the trick to reduce the overall runtime of a Lambda function, especially when you have to run many such tasks and don’t want to split the work into smaller chunks.

Kernel prepatch 4.12-rc6

Post Syndicated from corbet original https://lwn.net/Articles/725787/rss

The 4.12-rc6 kernel prepatch is out for
testing. “The good news is that rc6 is smaller than rc5 was, and I think we’re
back on track and rc5 really was big just due to random timing. We’ll
see. Next weekend when I’m back home and do rc7, I’ll see how I feel
about things. I’m still hopeful that this would be a normal release
cycle where rc7 is the last rc.”

4.12-rc5 kernel prepatch has been released

Post Syndicated from jake original https://lwn.net/Articles/725099/rss

The 4.12-rc5 prepatch is out; it is rather
larger than others in this cycle, Linus Torvalds said. “It’s not like rc5 is *huge*, but it definitely isn’t the nice and
small one I was hoping for. There’s nothing in [particular] that looks
very worrisome, and it may well just be random timing – the rc sizes
do fluctuate a lot depending on just which subsystem gets synced up
that particular rc, and we may just have hit that “everybody happened
to sync up this week" case.”

How to Deploy Local Administrator Password Solution with AWS Microsoft AD

Post Syndicated from Dragos Madarasan original https://aws.amazon.com/blogs/security/how-to-deploy-local-administrator-password-solution-with-aws-microsoft-ad/

Local Administrator Password Solution (LAPS) from Microsoft simplifies password management by allowing organizations to use Active Directory (AD) to store unique passwords for computers. Typically, an organization might reuse the same local administrator password across the computers in an AD domain. However, this approach represents a security risk because it can be exploited during lateral escalation attacks. LAPS solves this problem by creating unique, randomized passwords for the Administrator account on each computer and storing them encrypted in AD.

Deploying LAPS with AWS Microsoft AD requires the following steps:

  1. Install the LAPS binaries on instances joined to your AWS Microsoft AD domain. The binaries add client-side extension (CSE) functionality to the Group Policy client.
  2. Extend the AWS Microsoft AD schema. LAPS requires new AD attributes to store an encrypted password and its expiration time.
  3. Configure AD permissions and delegate the ability to retrieve the local administrator password for IT staff in your organization.
  4. Configure Group Policy on instances joined to your AWS Microsoft AD domain to enable LAPS. This configures the Group Policy client to process LAPS settings and uses the binaries installed in Step 1.

The following diagram illustrates the setup that I will be using throughout this post and the associated tasks to set up LAPS. Note that the AWS Directory Service directory is deployed across multiple Availability Zones, and monitoring automatically detects and replaces domain controllers that fail.

Diagram illustrating this blog post's solution

In this blog post, I explain the prerequisites to set up Local Administrator Password Solution, demonstrate the steps involved to update the AD schema on your AWS Microsoft AD domain, show how to delegate permissions to IT staff and configure LAPS via Group Policy, and demonstrate how to retrieve the password using the graphical user interface or with Windows PowerShell.

This post assumes you are familiar with Lightweight Directory Access Protocol Data Interchange Format (LDIF) files and AWS Microsoft AD. If you need more of an introduction to Directory Service and AWS Microsoft AD, see How to Move More Custom Applications to the AWS Cloud with AWS Directory Service, which introduces working with schema changes in AWS Microsoft AD.

Prerequisites

In order to implement LAPS, you must use AWS Directory Service for Microsoft Active Directory (Enterprise Edition), also known as AWS Microsoft AD. Any instance on which you want to configure LAPS must be joined to your AWS Microsoft AD domain. You also need a Management instance on which you install the LAPS management tools.

In this post, I use an AWS Microsoft AD domain called example.com that I have launched in the EU (London) region. To see the regions in which Directory Service is available, see AWS Regions and Endpoints.

Screenshot showing the AWS Microsoft AD domain example.com used in this blog post

In addition, you must have at least two instances launched in the same region as the AWS Microsoft AD domain. To join the instances to your AWS Microsoft AD domain, you have two options:

  1. Use the Amazon EC2 Systems Manager (SSM) domain join feature. To learn more about how to set up domain join for EC2 instances, see joining a Windows Instance to an AWS Directory Service Domain.
  2. Manually configure the DNS server addresses in the Internet Protocol version 4 (TCP/IPv4) settings of the network card to use the AWS Microsoft AD DNS addresses (172.31.9.64 and 172.31.16.191, for this blog post) and perform a manual domain join.

For the purpose of this post, my two instances are:

  1. A Management instance, tagged as Management, on which I will install the LAPS management tools.
  2. A Web Server instance, tagged as Web Server, on which I will deploy the LAPS binaries.

Screenshot showing the two EC2 instances used in this post

Implementing the solution

 

1. Install the LAPS binaries on instances joined to your AWS Microsoft AD domain by using EC2 Run Command

LAPS binaries come in the form of an MSI installer and can be downloaded from the Microsoft Download Center. You can install the LAPS binaries manually, with an automation service such as EC2 Run Command, or with your existing software deployment solution.

For this post, I will deploy the LAPS binaries on my Web Server instance (i-0b7563d0f89d3453a) by using EC2 Run Command:

  1. While signed in to the AWS Management Console, choose EC2. In the Systems Manager Services section of the navigation pane, choose Run Command.
  2. Choose Run a command, and from the Command document list, choose AWS-InstallApplication.
  3. From Target instances, choose the instance on which you want to deploy the LAPS binaries. In my case, I will be selecting the instance tagged as Web Server. If you do not see any instances listed, make sure you have met the prerequisites for Amazon EC2 Systems Manager (SSM) by reviewing the Systems Manager Prerequisites.
  4. For Action, choose Install, and then specify the following values:
    • Parameters: /quiet
    • Source: https://download.microsoft.com/download/C/7/A/C7AAD914-A8A6-4904-88A1-29E657445D03/LAPS.x64.msi
    • Source Hash: f63ebbc45e2d080630bd62a195cd225de734131a56bb7b453c84336e37abd766
    • Comment: LAPS deployment

Leave the other options with the default values and choose Run. The AWS Management Console will return a Command ID, which will initially have a status of In Progress. It should take less than 5 minutes to download and install the binaries, after which the Command ID will update its status to Success.

Status showing the binaries have been installed successfully

If the Command ID runs for more than 5 minutes or returns an error, it might indicate a problem with the installer. To troubleshoot, review the steps in Troubleshooting Systems Manager Run Command.

To verify the binaries have been installed successfully, open Control Panel and review the recently installed applications in Programs and Features.

Screenshot of Control Panel that confirms LAPS has been installed successfully

You should see an entry for Local Administrator Password Solution with a version of 6.2.0.0 or newer.
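
If you prefer to script this deployment instead of clicking through the console, a rough equivalent using the AWS Tools for Windows PowerShell might look like the following sketch. The cmdlet, its parameters, and the hash table keys are assumptions that mirror the console fields above, so verify them against the AWS-InstallApplication document in your account.

# Sketch: send the same AWS-InstallApplication command to the Web Server instance.
# The hash table keys are assumptions based on the console fields shown earlier.
Send-SSMCommand -DocumentName "AWS-InstallApplication" -InstanceId "i-0b7563d0f89d3453a" -Comment "LAPS deployment" -Parameter @{
    action     = "Install"
    parameters = "/quiet"
    source     = "https://download.microsoft.com/download/C/7/A/C7AAD914-A8A6-4904-88A1-29E657445D03/LAPS.x64.msi"
    sourceHash = "f63ebbc45e2d080630bd62a195cd225de734131a56bb7b453c84336e37abd766"
}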

2. Extend the AWS Microsoft AD schema

In the previous section, I used EC2 Run Command to install the LAPS binaries on an EC2 instance. Now, I am ready to extend the schema in an AWS Microsoft AD domain. Extending the schema is a requirement because LAPS relies on new AD attributes to store the encrypted password and its expiration time.

In an on-premises AD environment, you would update the schema by running the Update-AdmPwdADSchema Windows PowerShell cmdlet with schema administrator credentials. Because AWS Microsoft AD is a managed service, I do not have permissions to update the schema directly. Instead, I will update the AD schema from the Directory Service console by importing an LDIF file. If you are unfamiliar with schema updates or LDIF files, see How to Move More Custom Applications to the AWS Cloud with AWS Directory Service.

To make things easier for you, I am providing you with a sample LDIF file that contains the required AD schema changes. Using Notepad or a similar text editor, open the SchemaChanges-0517.ldif file and update the values of dc=example,dc=com with your own AWS Microsoft AD domain and suffix.

After I update the LDIF file with my AWS Microsoft AD details, I import it by using the AWS Management Console:

  1. On the Directory Service console, select your Microsoft AD directory from the list of directories by choosing its identifier (it will look something like d-534373570ea).
  2. On the Directory details page, choose the Schema extensions tab and choose Upload and update schema.
    Screenshot showing the "Upload and update schema" option
  3. When prompted for the LDIF file that contains the changes, choose the sample LDIF file.
  4. In the background, the LDIF file is validated for errors and a backup of the directory is created for recovery purposes. Updating the schema might take a few minutes, and while the update is in progress the status will show Updating Schema, as shown in the following screenshot.

Screenshot showing the schema updates in progress
When the process has completed, the status of Completed will be displayed, as shown in the following screenshot.

Screenshot showing the process has completed

If the LDIF file contains errors or the schema extension fails, the Directory Service console will generate an error code and additional debug information. To help troubleshoot error messages, see Schema Extension Errors.

The sample LDIF file triggers AWS Microsoft AD to perform the following actions:

  1. Create the ms-Mcs-AdmPwd attribute, which stores the encrypted password.
  2. Create the ms-Mcs-AdmPwdExpirationTime attribute, which stores the time of the password’s expiration.
  3. Add both attributes to the Computer class.

3. Configure AD permissions

In the previous section, I updated the AWS Microsoft AD schema with the required attributes for LAPS. I am now ready to configure the permissions for administrators to retrieve the password and for computer accounts to update their password attribute.

As part of configuring AD permissions, I grant computers the ability to update their own password attribute and specify which security groups have permissions to retrieve the password from AD. As part of this process, I run Windows PowerShell cmdlets that are not installed by default on Windows Server.

Note: To learn more about Windows PowerShell and the concept of a cmdlet (pronounced “command-let”), go to Getting Started with Windows PowerShell.

Before getting started, I need to set up the required tools for LAPS on my Management instance, which must be joined to the AWS Microsoft AD domain. I will be using the same LAPS installer that I downloaded from the Microsoft LAPS website. In my Management instance, I have manually run the installer by clicking the LAPS.x64.msi file. On the Custom Setup page of the installer, under Management Tools, for each option I have selected Install on local hard drive.

Screenshot showing the required management tools

In the preceding screenshot, the features are:

  • The fat client UI – A simple user interface for retrieving the password (I will use it at the end of this post).
  • The Windows PowerShell module – Needed to run the commands in the next sections.
  • The GPO Editor templates – Used to configure Group Policy objects.
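
If you need the management tools on more than one admin workstation, the same MSI can also be installed unattended. The following is a sketch only; the ADDLOCAL feature names are assumptions taken from common LAPS deployment guides rather than from this post, so verify them against your copy of the installer before using it.

# Sketch: silent install of the LAPS management tools (UI, PowerShell module, GPO templates).
# The feature names after ADDLOCAL are assumptions; confirm them in the LAPS documentation.
msiexec /i LAPS.x64.msi ADDLOCAL=Management.UI,Management.PS,Management.ADMX /quiet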

The next step is to grant computers in the Computers OU the permission to update their own attributes. While connected to my Management instance, I go to the Start menu and type PowerShell. In the list of results, right-click Windows PowerShell and choose Run as administrator and then Yes when prompted by User Account Control.

In the Windows PowerShell prompt, I type the following commands.

Import-Module AdmPwd.PS

Set-AdmPwdComputerSelfPermission -OrgUnit "OU=Computers,OU=MyMicrosoftAD,DC=example,DC=com"

To grant the administrator group called Admins the permission to retrieve the computer password, I run the following command in the Windows PowerShell prompt I previously started.

Import-Module AdmPwd.PS

Set-AdmPwdReadPasswordPermission -OrgUnit "OU=Computers,OU=MyMicrosoftAD,DC=example,DC=com" -AllowedPrincipals "Admins"
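
To double-check the delegation before moving on, the AdmPwd.PS module also ships a query cmdlet. The following sketch reuses the same OU path and lists which principals can now read the password attribute; treat the exact output properties as assumptions to verify in your environment.

# Sketch: list the principals that hold the extended right to read ms-Mcs-AdmPwd on the OU.
Import-Module AdmPwd.PS
Find-AdmPwdExtendedRights -Identity "OU=Computers,OU=MyMicrosoftAD,DC=example,DC=com" |
    Select-Object ObjectDN, ExtendedRightHolders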

4. Configure Group Policy to enable LAPS

In the previous section, I deployed the LAPS management tools on my management instance, granted the computer accounts the permission to self-update their local administrator password attribute, and granted my Admins group permissions to retrieve the password.

Note: The following section addresses the Group Policy Management Console and Group Policy objects. If you are unfamiliar with or wish to learn more about these concepts, go to Get Started Using the GPMC and Group Policy for Beginners.

I am now ready to enable LAPS via Group Policy:

  1. On my Management instance (i-03b2c5d5b1113c7ac), I have installed the Group Policy Management Console (GPMC) by running the following command in Windows PowerShell.
Install-WindowsFeature -Name GPMC
  2. Next, I have opened the GPMC and created a new Group Policy object (GPO) called LAPS GPO.
  3. In the Group Policy Management Editor, I navigate to Computer Configuration > Policies > Administrative Templates > LAPS. I have configured the settings using the values in the following table.

Setting State Options
Password Settings Enabled Complexity: large letters, small letters, numbers, specials
Do not allow password expiration time longer than required by policy Enabled N/A
Enable local admin password management Enabled N/A

  4. Next, I need to link the GPO to an organizational unit (OU) in which my machine accounts sit. In your environment, I recommend testing the new settings on a test OU and then deploying the GPO to production OUs.

Note: If you choose to create a new test organizational unit, you must create it in the OU that AWS Microsoft AD delegates to you to manage. For example, if your AWS Microsoft AD directory name were example.com, the test OU path would be example.com/example/Computers/Test.

  5. To test that LAPS works, I need to make sure the computer has received the new policy by forcing a Group Policy update. While connected to the Web Server instance (i-0b7563d0f89d3453a) using Remote Desktop, I open an elevated administrative command prompt and run the following command: gpupdate /force. I can check whether the policy has been applied by running the command: gpresult /r | findstr /c:"LAPS GPO", where LAPS GPO is the name of the GPO created in the second step.
  6. Back on my Management instance, I can then launch the LAPS interface from the Start menu and use it to retrieve the password (as shown in the following screenshot). Alternatively, I can run the Get-ADComputer Windows PowerShell cmdlet to retrieve the password.
Get-ADComputer [YourComputerName] -Properties ms-Mcs-AdmPwd | select name, ms-Mcs-AdmPwd

Screenshot of the LAPS UI, which you can use to retrieve the password
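
As an alternative to Get-ADComputer, the AdmPwd.PS module installed with the management tools provides its own cmdlets to read the stored password and to force a rotation. The following is a small sketch; WEBSERVER01 is a placeholder for the computer account name of your instance.

# Sketch: read the current password and expiration, then mark it for reset at the next Group Policy refresh.
# WEBSERVER01 is a placeholder; substitute the computer account name of your instance.
Import-Module AdmPwd.PS
Get-AdmPwdPassword -ComputerName "WEBSERVER01" | Select-Object ComputerName, Password, ExpirationTimestamp
Reset-AdmPwdPassword -ComputerName "WEBSERVER01"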

Summary

In this blog post, I demonstrated how you can deploy LAPS with an AWS Microsoft AD directory. I then showed how to install the LAPS binaries by using EC2 Run Command. Using the sample LDIF file I provided, I showed you how to extend the schema, which is a requirement because LAPS relies on new AD attributes to store the encrypted password and its expiration time. Finally, I showed how to complete the LAPS setup by configuring the necessary AD permissions and creating the GPO that starts the LAPS password change.

If you have comments about this post, submit them in the “Comments” section below. If you have questions about or issues implementing this solution, please start a new thread on the Directory Service forum.

– Dragos

AWS and the General Data Protection Regulation (GDPR)

Post Syndicated from Stephen Schmidt original https://aws.amazon.com/blogs/security/aws-and-the-general-data-protection-regulation/

European Union image

Just over a year ago, the European Commission approved and adopted the new General Data Protection Regulation (GDPR). The GDPR is the biggest change in data protection laws in Europe since the 1995 introduction of the European Union (EU) Data Protection Directive, also known as Directive 95/46/EC. The GDPR aims to strengthen the security and protection of personal data in the EU and will replace the Directive and all local laws relating to it.

AWS welcomes the arrival of the GDPR. The new, robust requirements raise the bar for data protection, security, and compliance, and will push the industry to follow the most stringent controls, helping to make everyone more secure. I am happy to announce today that all AWS services will comply with the GDPR when it becomes enforceable on May 25, 2018.

In this blog post, I explain the work AWS is doing to help customers with the GDPR as part of our continued commitment to help ensure they can comply with EU Data Protection requirements.

What has AWS been doing?

AWS continually maintains a high bar for security and compliance across all of our regions around the world. This has always been our highest priority—truly “job zero.” The AWS Cloud infrastructure has been architected to offer customers the most powerful, flexible, and secure cloud-computing environment available today. AWS also gives you a number of services and tools to enable you to build GDPR-compliant infrastructure on top of AWS.

One tool we give you is a Data Processing Agreement (DPA). I’m happy to announce today that we have a DPA that will meet the requirements of the GDPR. This GDPR DPA is available now to all AWS customers to help you prepare for May 25, 2018, when the GDPR becomes enforceable. For additional information about the new GDPR DPA or to obtain a copy, contact your AWS account manager.

In addition to account managers, we have teams of compliance experts, data protection specialists, and security experts working with customers across Europe to answer their questions and help them prepare for running workloads in the AWS Cloud after the GDPR comes into force. To further answer customers’ questions, we have updated our EU Data Protection website. This website includes information about what the GDPR is, the changes it brings to organizations operating in the EU, the services AWS offers to help you comply with the GDPR, and advice about how you can prepare.

Another topic we cover on the EU Data Protection website is AWS’s compliance with the CISPE Code of Conduct. The CISPE Code of Conduct helps cloud customers ensure that their cloud infrastructure provider is using appropriate data protection standards to protect their data in a manner consistent with the GDPR. AWS has declared that Amazon EC2, Amazon S3, Amazon RDS, AWS Identity and Access Management (IAM), AWS CloudTrail, and Amazon Elastic Block Storage (Amazon EBS) are fully compliant with the CISPE Code of Conduct. This declaration provides customers with assurances that they fully control their data in a safe, secure, and compliant environment when they use AWS. For more information about AWS’s compliance with the CISPE Code of Conduct, go to the CISPE website.

As well as giving customers a number of tools and services to build GDPR-compliant environments, AWS has achieved a number of internationally recognized certifications and accreditations. In the process, AWS has demonstrated compliance with third-party assurance frameworks such as ISO 27017 for cloud security, ISO 27018 for cloud privacy, PCI DSS Level 1, and SOC 1, SOC 2, and SOC 3. AWS also helps customers meet local security standards such as BSI’s Common Cloud Computing Controls Catalogue (C5) that is important in Germany. We will continue to pursue certifications and accreditations that are important to AWS customers.

What can you do?

Although the GDPR will not be enforceable until May 25, 2018, we are encouraging our customers and partners to start preparing now. If you have already implemented a high bar for compliance, security, and data privacy, the move to GDPR should be simple. However, if you have yet to start your journey to GDPR compliance, we urge you to start reviewing your security, compliance, and data protection processes now to ensure a smooth transition in May 2018.

You should consider the following key points in preparation for GDPR compliance:

  • Territorial reach – Determining whether the GDPR applies to your organization’s activities is essential to ensuring your organization’s ability to satisfy its compliance obligations.
  • Data subject rights – The GDPR enhances the rights of data subjects in a number of ways. You will need to make sure you can accommodate the rights of data subjects if you are processing their personal data.
  • Data breach notifications – If you are a data controller, you must report data breaches to the data protection authorities without undue delay and in any event within 72 hours of you becoming aware of a data breach.
  • Data protection officer (DPO) – You may need to appoint a DPO who will manage data security and other issues related to the processing of personal data.
  • Data protection impact assessment (DPIA) – You may need to conduct and, in some circumstances, you might be required to file with the supervisory authority a DPIA for your processing activities.
  • Data processing agreement (DPA) – You may need a DPA that will meet the requirements of the GDPR, particularly if personal data is transferred outside the European Economic Area.

AWS offers a wide range of services and features to help customers meet requirements of the GDPR, including services for access controls, monitoring, logging, and encryption. For more information about these services and features, see EU Data Protection.

At AWS, security, data protection, and compliance are our top priorities, and we will continue to work vigilantly to ensure that our customers are able to enjoy the benefits of AWS securely, compliantly, and without disruption in Europe and around the world. As we head toward May 2018, we will share more news and resources with you to help you comply with the GDPR.

– Steve

Analyze Security, Compliance, and Operational Activity Using AWS CloudTrail and Amazon Athena

Post Syndicated from Sai Sriparasa original https://aws.amazon.com/blogs/big-data/aws-cloudtrail-and-amazon-athena-dive-deep-to-analyze-security-compliance-and-operational-activity/

As organizations move their workloads to the cloud, audit logs provide a wealth of information on the operations, governance, and security of assets and resources. As the complexity of the workloads increases, so does the volume of audit logs being generated. It becomes increasingly difficult for organizations to analyze and understand what is happening in their accounts without a significant investment of time and resources.

AWS CloudTrail and Amazon Athena help make it easier by combining the detailed CloudTrail log files with the power of the Athena SQL engine to easily find, analyze, and respond to changes and activities in an AWS account.

AWS CloudTrail records API calls and account activities and publishes the log files to Amazon S3. Account activity is tracked as an event in the CloudTrail log file. Each event carries information such as who performed the action, when the action was done, which resources were impacted, and many more details. Multiple events are stitched together and structured in a JSON format within the CloudTrail log files.

Amazon Athena uses Apache Hive’s data definition language (DDL) to create tables and Presto, a distributed SQL engine, to run queries. Apache Hive does not natively support files in JSON, so we’ll have to use a SerDe to help Hive understand how the records should be processed. A SerDe interface is a combination of a serializer and deserializer. A deserializer helps take data and convert it into a Java object while the serializer helps convert the Java object into a usable representation.

In this blog post, we will walk through how to set up and use the recently released Amazon Athena CloudTrail SerDe to query CloudTrail log files for EC2 security group modifications, console sign-in activity, and operational account activity. This post assumes that customers already have AWS CloudTrail configured. For more information about configuring CloudTrail, see Getting Started with AWS CloudTrail in the AWS CloudTrail User Guide.

Setting up Amazon Athena

Let’s start by signing in to the Amazon Athena console and performing the following steps.


Create a table in the default sampledb database using the CloudTrail SerDe. The easiest way to create the table is to copy and paste the following query into the Athena query editor, modify the LOCATION value, and then run the query.

Replace:

LOCATION 's3://<Your CloudTrail s3 bucket>/AWSLogs/<optional – AWS_Account_ID>/'

with the S3 bucket where your CloudTrail log files are delivered. For example, if your CloudTrail S3 bucket is named "aws-sai-sriparasa" and you set up a log file prefix of "/datalake/cloudtrail/", you would edit the LOCATION statement as follows:

LOCATION 's3://aws-sai-sriparasa/datalake/cloudtrail/'

CREATE EXTERNAL TABLE cloudtrail_logs (
eventversion STRING,
userIdentity STRUCT<
  type:STRING,
  principalid:STRING,
  arn:STRING,
  accountid:STRING,
  invokedby:STRING,
  accesskeyid:STRING,
  userName:STRING,
  sessioncontext:STRUCT<
    attributes:STRUCT<
      mfaauthenticated:STRING,
      creationdate:STRING>,
    sessionIssuer:STRUCT<
      type:STRING,
      principalId:STRING,
      arn:STRING,
      accountId:STRING,
      userName:STRING>>>,
eventTime STRING,
eventSource STRING,
eventName STRING,
awsRegion STRING,
sourceIpAddress STRING,
userAgent STRING,
errorCode STRING,
errorMessage STRING,
requestParameters STRING,
responseElements STRING,
additionalEventData STRING,
requestId STRING,
eventId STRING,
resources ARRAY<STRUCT<
  ARN:STRING,accountId:
  STRING,type:STRING>>,
eventType STRING,
apiVersion STRING,
readOnly STRING,
recipientAccountId STRING,
serviceEventDetails STRING,
sharedEventID STRING,
vpcEndpointId STRING
)
ROW FORMAT SERDE 'com.amazon.emr.hive.serde.CloudTrailSerde'
STORED AS INPUTFORMAT 'com.amazon.emr.cloudtrail.CloudTrailInputFormat'
OUTPUTFORMAT 'org.apache.hadoop.hive.ql.io.HiveIgnoreKeyTextOutputFormat'
LOCATION 's3://<Your CloudTrail s3 bucket>/AWSLogs/<optional – AWS_Account_ID>/';

After the query has been executed, a new table named cloudtrail_logs will be added to Athena with the following table properties.

Screenshot showing the properties of the new cloudtrail_logs table

Athena charges you by the amount of data scanned per query.  You can save on costs and get better performance when querying CloudTrail log files by partitioning the data to the time ranges you are interested in.  For more information on pricing, see Athena pricing.  To better understand how to partition data for use in Athena, see Analyzing Data in S3 using Amazon Athena.
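
As a sketch of that idea (not part of the original walkthrough; the column list is abbreviated and the bucket placeholders follow the CREATE TABLE statement above), you can declare partition columns when you create the table and then register only the region and date prefixes you intend to query:

CREATE EXTERNAL TABLE cloudtrail_logs_partitioned (
eventversion STRING,
eventtime STRING,
eventsource STRING,
eventname STRING,
awsregion STRING,
sourceipaddress STRING,
useragent STRING,
errorcode STRING,
errormessage STRING,
requestparameters STRING,
responseelements STRING
)
PARTITIONED BY (region STRING, year STRING, month STRING, day STRING)
ROW FORMAT SERDE 'com.amazon.emr.hive.serde.CloudTrailSerde'
STORED AS INPUTFORMAT 'com.amazon.emr.cloudtrail.CloudTrailInputFormat'
OUTPUTFORMAT 'org.apache.hadoop.hive.ql.io.HiveIgnoreKeyTextOutputFormat'
LOCATION 's3://<Your CloudTrail s3 bucket>/AWSLogs/<AWS_Account_ID>/CloudTrail/';

-- Register one partition per region and day that you want to query.
ALTER TABLE cloudtrail_logs_partitioned ADD PARTITION (region='us-east-1', year='2017', month='02', day='15')
LOCATION 's3://<Your CloudTrail s3 bucket>/AWSLogs/<AWS_Account_ID>/CloudTrail/us-east-1/2017/02/15/';

Queries that filter on region, year, month, and day will then scan only the registered prefixes instead of the whole bucket.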

Popular use cases

These use cases focus on:

  • Amazon EC2 security group modifications
  • Console Sign-in activity
  • Operational account activity

EC2 security group modifications

When reviewing an operational issue or security incident for an EC2 instance, the ability to see any associated security group change is a vital part of the analysis.

For example, if an EC2 instance triggers a CloudWatch metric alarm for high CPU utilization, we can first look to see if there have been any security group changes (the addition of new security groups or the addition of ingress rules to an existing security group) that potentially create more traffic or load on the instance. To start the investigation, we need to look in the EC2 console for the network interface ID and security groups of the impacted EC2 instance. Here is an example:

Network interface ID = eni-6c5ca5a8

Security group(s) = sg-5887f224, sg-e214609e

The following query can help us dive deep into the security group analysis. We’ll configure the query to filter for our network interface ID, security groups, and a time range starting 12 hours before the alarm occurred so we’re aware of recent changes. (CloudTrail log files use the ISO 8601 data elements and interchange format for date and time representation.)

Identify any security group changes for our EC2 instance:

select eventname, useridentity.username, sourceIPAddress, eventtime, requestparameters from cloudtrail_logs
where (requestparameters like '%sg-5887f224%' or requestparameters like '%sg-e214609e%' or requestparameters like '%eni-6c5ca5a8%')
and eventtime > '2017-02-15T00:00:00Z'
order by eventtime asc;

This query returned the following results:

eventname username sourceIPAddress eventtime requestparameters
DescribeInstances 72.21.196.68 2017-02-15T00:57:23Z {"instancesSet":{},"filterSet":{"items":[{"name":"instance.group-id","valueSet":{"items":[{"value":"sg-5887f224"}]}}]}}
DescribeInstances 72.21.196.68 2017-02-15T00:57:24Z {"instancesSet":{},"filterSet":{"items":[{"name":"instance.group-id","valueSet":{"items":[{"value":"sg-e214609e"}]}}]}}
DescribeInstances 72.21.196.68 2017-02-15T17:06:01Z {"instancesSet":{},"filterSet":{"items":[{"name":"instance.group-id","valueSet":{"items":[{"value":"sg-e214609e"}]}}]}}
DescribeInstances 72.21.196.68 2017-02-15T17:06:01Z {"instancesSet":{},"filterSet":{"items":[{"name":"instance.group-id","valueSet":{"items":[{"value":"sg-5887f224"}]}}]}}
DescribeSecurityGroups 72.21.196.70 2017-02-15T23:28:20Z {"securityGroupSet":{},"securityGroupIdSet":{"items":[{"groupId":"sg-e214609e"}]},"filterSet":{}}
DescribeInstances 72.21.196.69 2017-02-16T11:25:23Z {"instancesSet":{},"filterSet":{"items":[{"name":"instance.group-id","valueSet":{"items":[{"value":"sg-e214609e"}]}}]}}
DescribeInstances 72.21.196.69 2017-02-16T11:25:23Z {"instancesSet":{},"filterSet":{"items":[{"name":"instance.group-id","valueSet":{"items":[{"value":"sg-5887f224"}]}}]}}
ModifyNetworkInterfaceAttribute bobodell 72.21.196.64 2017-02-16T19:09:55Z {"networkInterfaceId":"eni-6c5ca5a8","groupSet":{"items":[{"groupId":"sg-e214609e"},{"groupId":"sg-5887f224"}]}}
AuthorizeSecurityGroupIngress bobodell 72.21.196.64 2017-02-16T19:42:02Z {"groupId":"sg-5887f224","ipPermissions":{"items":[{"ipProtocol":"tcp","fromPort":143,"toPort":143,"groups":{},"ipRanges":{"items":[{"cidrIp":"0.0.0.0/0"}]},"ipv6Ranges":{},"prefixListIds":{}},{"ipProtocol":"tcp","fromPort":143,"toPort":143,"groups":{},"ipRanges":{},"ipv6Ranges":{"items":[{"cidrIpv6":"::/0"}]},"prefixListIds":{}}]}}

The results show that the ModifyNetworkInterfaceAttribute and AuthorizeSecurityGroupIngress API calls may have impacted the EC2 instance. The first call was initiated by user bobodell and attached two security groups to the EC2 instance. The second call, also initiated by user bobodell, was made approximately 33 minutes later, and successfully opened TCP port 143 (IMAP) to the world (cidrIp: 0.0.0.0/0).

Although these changes may have been authorized, these details can be used to piece together a timeline of activity leading up to the alarm.

Console Sign-in activity

Whether it’s to help meet a compliance standard such as PCI, adhering to a best practice security framework such as NIST, or just wanting to better understand who is accessing your assets, auditing your login activity is vital.

The following query can help identify the AWS Management Console logins that occurred over a 24-hour period. It returns details such as user name, IP address, time of day, whether the login was from a mobile console version, and whether multi-factor authentication was used.

select useridentity.username, sourceipaddress, eventtime, additionaleventdata
from default.cloudtrail_logs
where eventname = 'ConsoleLogin'
and eventtime >= '2017-02-17T00:00:00Z'
and eventtime < '2017-02-18T00:00:00Z';

Because potentially hundreds of logins occur every day, it’s important to identify those that seem to be outside the normal course of business. The following query returns logins that occurred outside our network (72.21.0.0/16), those that occurred using a mobile console version, and those that occurred between midnight and 5:00 A.M.

select useridentity.username, sourceipaddress, json_extract_scalar(additionaleventdata, '$.MobileVersion') as MobileVersion, eventtime, additionaleventdata
from default.cloudtrail_logs 
where eventname = 'ConsoleLogin' 
and (json_extract_scalar(additionaleventdata, '$.MobileVersion') = 'Yes' 
or sourceipaddress not like '72.21.%' 
and eventtime >= '2017-02-17T00:00:00Z'
and eventtime < '2017-02-17T05:00:00Z');

Operational account activity

An important part of running workloads in AWS is understanding recurring errors, how administrators and employees are interacting with your workloads, and who or what is using root privileges in your account.

AWS event errors

Recurring error messages can be a sign of an incorrectly configured policy, the wrong permissions applied to an application, or an unknown change in your workloads. The following query shows the top 10 errors that have occurred from the start of the year.

select count (*) as TotalEvents, eventname, errorcode, errormessage 
from cloudtrail_logs
where errorcode is not null
and eventtime >= '2017-01-01T00:00:00Z' 
group by eventname, errorcode, errormessage
order by TotalEvents desc
limit 10;

The results show:

TotalEvents eventname errorcode errormessage
1098 DescribeAlarms ValidationException 1 validation error detected: Value ‘INVALID_FOR_SUMMARY’ at ‘stateValue’ failed to satisfy constraint: Member must satisfy enum value set: [INSUFFICIENT_DATA, ALARM, OK]
182 GetBucketPolicy NoSuchBucketPolicy The bucket policy does not exist
179 HeadBucket AccessDenied Access Denied
48 GetAccountPasswordPolicy NoSuchEntityException The Password Policy with domain name 341277845616 cannot be found.
36 GetBucketTagging NoSuchTagSet The TagSet does not exist
36 GetBucketReplication ReplicationConfigurationNotFoundError The replication configuration was not found
36 GetBucketWebsite NoSuchWebsiteConfiguration The specified bucket does not have a website configuration
32 DescribeNetworkInterfaces Client.RequestLimitExceeded Request limit exceeded.
30 GetBucketCors NoSuchCORSConfiguration The CORS configuration does not exist
30 GetBucketLifecycle NoSuchLifecycleConfiguration The lifecycle configuration does not exist

These errors might indicate an incorrectly configured CloudWatch alarm or S3 bucket policy.

Top IAM users

The following query shows the top IAM users and activities by eventname from the beginning of the year.

select count (*) as TotalEvents, useridentity.username, eventname
from cloudtrail_logs
where eventtime >= '2017-01-01T00:00:00Z' 
and useridentity.type = 'IAMUser'
group by useridentity.username, eventname
order by TotalEvents desc;

The results will show the total activities initiated by each IAM user and the eventname for those activities.

Like the Console sign-in activity query in the previous section, this query could be modified to filter the activity to view only events that occurred outside of the known network or after hours.
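
For example, a variation on the query above (a sketch only; the 72.21.% range reuses the network from the console sign-in section and is an assumption about your own environment) narrows the report to calls made from outside that range:

select count (*) as TotalEvents, useridentity.username, eventname
from cloudtrail_logs
where eventtime >= '2017-01-01T00:00:00Z'
and useridentity.type = 'IAMUser'
and sourceipaddress not like '72.21.%'
group by useridentity.username, eventname
order by TotalEvents desc;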

Root activity

Another useful query is to understand how the root account and credentials are being used and which activities are being performed by root.

The following query will look at the top events initiated by root from the beginning of the year. It will show whether these were direct root activities or whether they were invoked by an AWS service (and, if so, which one) to perform an activity.

select count (*) as TotalEvents, eventname, useridentity.invokedby
from cloudtrail_logs
where eventtime >= '2017-01-01T00:00:00Z' 
and useridentity.type = 'Root'
group by useridentity.username, eventname, useridentity.invokedby
order by TotalEvents desc;

Summary

 AWS CloudTrail and Amazon Athena are a powerful combination that can help organizations better understand the operations, governance, and security of assets and resources in their AWS accounts without a significant investment of time and resources.


About the Authors

 

Sai Sriparasa is a consultant with AWS Professional Services. He works with our customers to provide strategic and tactical big data solutions with an emphasis on automation, operations & security on AWS. In his spare time, he follows sports and current affairs.

 

 

 

Bob O’Dell is a Sr. Product Manager for AWS CloudTrail. AWS CloudTrail is a service that enables governance, compliance, operational auditing, and risk auditing of AWS accounts. Bob enjoys working with customers to understand how CloudTrail can meet their needs and continue to be an integral part of their solutions going forward. In his spare time, he enjoys spending time with HRB exploring the new world of yoga and adventuring through the Pacific Northwest.


Related

Analyzing Data in S3 using Amazon Athena


Wednesday’s security advisories

Post Syndicated from ris original https://lwn.net/Articles/715261/rss

CentOS has updated firefox (C7; C6; C5: multiple vulnerabilities).

Debian has updated tomcat7 (regression in previous update) and tomcat8 (regression in previous update).

Gentoo has updated archive-tar-minitar (file overwrites) and ghostscript-gpl (multiple vulnerabilities).

openSUSE has updated profanity (42.2, 42.1: user impersonation).

SUSE has updated php7 (SLE12: multiple vulnerabilities).

Ubuntu has updated kernel (14.04: three vulnerabilities), linux, linux-raspi2 (16.10: three vulnerabilities), linux, linux-snapdragon (16.04: multiple vulnerabilities), linux, linux-ti-omap4 (12.04: three vulnerabilities), linux-lts-trusty (12.04: three vulnerabilities), linux-lts-xenial (14.04: multiple vulnerabilities), and tcpdump (multiple vulnerabilities).

Extending AWS CodeBuild with Custom Build Environments

Post Syndicated from John Pignata original https://aws.amazon.com/blogs/devops/extending-aws-codebuild-with-custom-build-environments/

AWS CodeBuild is a fully managed build service that compiles source code, runs tests, and produces software packages that are ready to deploy. CodeBuild provides curated build environments for programming languages and runtimes such as Java, Ruby, Python, Go, Node.js, Android, and Docker. It can be extended through the use of custom build environments to support many more.

Build environments are Docker images that include a complete file system with everything required to build and test your project. To use a custom build environment in a CodeBuild project, you build a container image for your platform that contains your build tools, push it to a Docker container registry such as Amazon EC2 Container Registry (ECR), and reference it in the project configuration. When building your application, CodeBuild will retrieve the Docker image from the container registry specified in the project configuration and use the environment to compile your source code, run your tests, and package your application.

In this post, we’ll create a build environment for PHP applications and walk through the steps to configure CodeBuild to use this environment.

Requirements

In order to follow this tutorial and build the Docker container image, you need to have the Docker platform, the AWS Command Line Interface, and Git installed.

Create the demo resources

To begin, we’ll clone codebuild-images from GitHub. It contains an AWS CloudFormation template that we’ll use to create resources for our demo: a source code repository in AWS CodeCommit and a Docker image repository in Amazon ECR. The repository also includes PHP sample code and tests that we’ll use to demonstrate our custom build environment.

  1. Clone the Git repository:
    git clone https://github.com/awslabs/codebuild-images.git
    cd codebuild-images

  2. Create the CloudFormation stack using the template.yml file. You can use the CloudFormation console to create the stack or you can use the AWS Command Line Interface:
    aws cloudformation create-stack \
     --stack-name codebuild-php \
     --parameters ParameterKey=EnvName,ParameterValue=php \
     --template-body file://template.yml > /dev/null && \
    aws cloudformation wait stack-create-complete \
     --stack-name codebuild-php && \
    aws cloudformation describe-stacks \
     --stack-name codebuild-php \
     --output table \
     --query Stacks[0].Outputs

After the stack has been created, CloudFormation will return two outputs:

  • BuildImageRepositoryUri: the URI of the Docker repository that will host our build environment image.
  • SourceCodeRepositoryCloneUrl: the clone URL of the Git repository that will host our sample PHP code.

Build and push the Docker image

Docker images are specified using a Dockerfile, which contains the instructions for assembling the image. The Dockerfile included in the PHP build environment contains these instructions:

FROM php:7

ARG composer_checksum=55d6ead61b29c7bdee5cccfb50076874187bd9f21f65d8991d46ec5cc90518f447387fb9f76ebae1fbbacf329e583e30
ARG composer_url=https://raw.githubusercontent.com/composer/getcomposer.org/ba0141a67b9bd1733409b71c28973f7901db201d/web/installer

ENV COMPOSER_ALLOW_SUPERUSER=1
ENV PATH=$PATH:vendor/bin

RUN apt-get update && apt-get install -y --no-install-recommends \
      curl \
      git \
      python-dev \
      python-pip \
      zlib1g-dev \
 && pip install awscli \
 && docker-php-ext-install zip \
 && curl -o installer "$composer_url" \
 && echo "$composer_checksum *installer" | shasum -c -a 384 \
 && php installer --install-dir=/usr/local/bin --filename=composer \
 && rm -rf /var/lib/apt/lists/*

This Dockerfile inherits all of the instructions from the official PHP Docker image, which installs the PHP runtime. On top of that base image, the build process will install Python, Git, the AWS CLI, and Composer, a dependency management tool for PHP. We’ve installed the AWS CLI and Git as tools we can use during builds. For example, using the AWS CLI, we could trigger a notification from Amazon Simple Notification Service (SNS) when a build is complete or we could use Git to create a new tag to mark a successful build. Finally, the build process cleans up files created by the packaging tools, as recommended in Best practices for writing Dockerfiles.
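
As a purely illustrative example of that notification idea (the topic ARN below is a placeholder and this step is not part of the sample project), a build command could call the CLI like this:

# Publish a completion message to an SNS topic; replace the ARN with a topic in your account.
aws sns publish \
    --topic-arn arn:aws:sns:us-east-1:123456789012:build-notifications \
    --message "codebuild-php build finished"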

Next, we’ll build and push the custom build environment.

  1. Provide authentication details for our registry to the local Docker engine by executing the output of the login helper provided by the AWS CLI:
    $(aws ecr get-login)
    

  2. Build and push the Docker image. We’ll use the repository URI returned in the CloudFormation stack output (BuildImageRepositoryUri) as the image tag:
    cd php
    docker build -t [BuildImageRepositoryUri] .
    docker push [BuildImageRepositoryUri]

After running these commands, your Docker image is pushed into Amazon ECR and ready to build your project.

Configure the Git repository

The repository we cloned includes a small PHP sample that we can use to test our PHP build environment. The sample function converts Roman numerals to Arabic numerals. The repository also includes a sample test to exercise this function. The sample also includes a YAML file called a build spec that contains commands and related settings that CodeBuild uses to run a build:

version: 0.1
phases:
  pre_build:
    commands:
      - composer install
  build:
    commands:
      - phpunit tests

This build spec configures CodeBuild to run two commands during the build:

  • composer install – installs the sample’s PHP dependencies during the pre_build phase.
  • phpunit tests – runs the sample’s unit tests during the build phase.

We will push the sample application to the CodeCommit repo created by the CloudFormation stack. You’ll need to grant your IAM user the required level of access to the AWS services required for CodeCommit and you’ll need to configure your Git client with the appropriate credentials. See Setup for HTTPS Users Using Git Credentials in the CodeCommit documentation for detailed steps.

We’re going to initialize a Git repository for our sample, configure our origin, and push the sample to the master branch in CodeCommit.

  1. Initialize a new Git repository in the sample directory:
    cd sample
    git init

  2. Add and commit the sample files to the repository:
    git add .
    git commit -m "Initial commit"

  3. Configure the git remote and push the sample to it. We’ll use the repository clone URL returned in the CloudFormation stack output (SourceCodeRepositoryCloneUrl) as the remote URL:
    git remote add origin [SourceCodeRepositoryCloneUrl]
    git push origin master

Now that our sample application has been pushed into source control and our build environment image has been pushed into our Docker registry, we’re ready to create a CodeBuild project and start our first build.

Configure the CodeBuild project

In this section, we’ll walk through the steps for configuring CodeBuild to use the custom build environment.

Screenshot of the build configuration

  1. In the AWS Management Console, open the AWS CodeBuild console, and then choose Create project.
  2. In Project name, type php-demo.
  3. From Source provider, choose AWS CodeCommit.  From Repository, choose codebuild-sample-php.
  4. In Environment image, select Specify a Docker image. From Custom image type, choose Amazon ECR. From Amazon ECR Repository, choose codebuild/php.  From Amazon ECR image, choose latest.
  5. In Build specification, select Use the buildspec.yml in the source code root directory.
  6. In Artifacts type, choose No artifacts.
  7. Choose Continue and then choose Save and Build.
  8. On the next page, from Branch, choose master and then choose Start build.

CodeBuild will pull the build environment image from Amazon ECR and use it to test our application. CodeBuild will show us the status of each build step, the last 20 lines of log messages generated by the build process, and provide a link to Amazon CloudWatch Logs for more debugging output.

Screenshot of the build output

Summary

CodeBuild supports a number of platforms and languages out of the box. By using custom build environments, it can be extended to other runtimes. In this post, we built a PHP environment and demonstrated how to use it to test PHP applications.

We’re excited to see how customers extend and use CodeBuild to enable continuous integration and continuous delivery for their applications. Please leave questions or suggestions in the comments or share what you’ve learned extending CodeBuild for your own projects.

Security updates for Tuesday

Post Syndicated from ris original https://lwn.net/Articles/714499/rss

CentOS has updated java-1.7.0-openjdk (C7; C6; C5: multiple vulnerabilities).

Debian has updated tomcat7 (denial of service), tomcat8 (denial of service), and vim (buffer overflow).

Debian-LTS has updated tomcat7 (denial of service).

Fedora has updated bind (F25: denial of service), kernel (F25; F24: two vulnerabilities), netpbm (F25: three vulnerabilities), tcpdump (F25: multiple vulnerabilities), vim (F25: buffer overflow), and w3m (F25: unspecified).

Gentoo has updated openssl (multiple vulnerabilities) and virtualbox (multiple vulnerabilities).

openSUSE has updated kernel (42.2; 42.1: multiple vulnerabilities).

Oracle has updated java-1.7.0-openjdk (OL7; OL6; OL5: multiple vulnerabilities).

Tuesday’s security advisories

Post Syndicated from ris original https://lwn.net/Articles/713877/rss

Debian-LTS has updated tiff (can’t write files).

Fedora has updated kernel (F25; F24: denial of service), moodle (F25: multiple vulnerabilities), and phpMyAdmin (F25; F24: multiple vulnerabilities).

Mageia has updated icoutils (multiple vulnerabilities) and irssi-otr (information leak).

openSUSE has updated libgit2 (SPH for SLE12: multiple vulnerabilities) and libressl (42.2, 42.1: local timing attack).

Oracle has updated kernel 4.1.12 (OL7; OL6: multiple vulnerabilities) and ntp (OL7; OL6: multiple vulnerabilities).

SUSE has updated mysql (SOSC5, SMP2.1, SM2.1, SLE11-SP3,4: multiple vulnerabilities) and kernel (SLERTE12-SP1: multiple vulnerabilities).

Ubuntu has updated nettle (information leak), squid3 (two vulnerabilities), firefox (regression in previous update), and webkit2gtk (16.10, 16.04: multiple vulnerabilities).