Over the coming weeks, we’ll be adding enhanced domain protections to Amazon CloudFront. The short version is this: the new measures are designed to ensure that requests handled by CloudFront are handled on behalf of legitimate domain owners.
Using CloudFront to receive traffic for a domain you aren’t authorized to use is already a violation of our AWS Terms of Service. When we become aware of this type of activity, we deal with it behind the scenes by disabling abusive accounts. Now we’re integrating checks directly into the CloudFront API and Content Distribution service, as well.
Enhanced Protection against Dangling DNS entries
To use CloudFront with your domain, you must configure your domain to point at CloudFront. You can use a traditional CNAME record or an Amazon Route 53 “ALIAS” record.
A problem can arise if you delete your CloudFront distribution, but leave your DNS still pointing at CloudFront, popularly known as a “dangling” DNS entry. Thankfully, this is very rare, as the domain will no longer work, but we occasionally see customers who leave their old domains dormant. This can also happen if you leave this kind of “dangling” DNS entry pointing at other infrastructure you no longer control. For example, if you leave a domain pointing at an IP address that you don’t control, then there is a risk that someone may come along and “claim” traffic destined for your domain.
In an even rarer set of circumstances, an abuser can exploit a subdomain of a domain that you are actively using. For example, if a customer left “images.example.com” dangling and pointing to a deleted CloudFront distribution that is no longer in use, but they still actively use the parent domain “example.com”, then an abuser could come along and register “images.example.com” as an alternate domain name on their own distribution and claim traffic that they aren’t entitled to. This also means that cookies may be set and intercepted for HTTP traffic, potentially including cookies scoped to the parent domain. HTTPS traffic remains protected if you’ve removed the certificate associated with the original CloudFront distribution.
Of course, the best fix for this kind of risk is not to leave dangling DNS entries in the first place. Earlier in February 2018, we added a new warning to our systems. With this warning, if you remove an alternate domain name from a distribution, you are reminded to delete any DNS entries that may still be pointing at CloudFront.
We also have long-standing checks in the CloudFront API that ensure this kind of domain claiming can’t occur when you are using wildcard domains. If you attempt to add *.example.com to your CloudFront distribution, but another account has already registered www.example.com, then the attempt will fail.
With the new enhanced domain protection, CloudFront will now also check your DNS whenever you remove an alternate domain. If we determine that the domain is still pointing at your CloudFront distribution, the API call will fail and no other accounts will be able to claim this traffic in the future.
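If you want to check for this situation yourself before removing an alternate domain name, a quick DNS lookup is enough. The following is a minimal sketch assuming a hypothetical domain (images.example.com) and distribution domain (d1234abcd.cloudfront.net); it simply shows whether the name still resolves to CloudFront.

# Does the alternate domain still resolve to a CloudFront distribution?
# (hypothetical names; substitute your own domain)
dig +short images.example.com CNAME
# A dangling record would still return something like: d1234abcd.cloudfront.net.

# Route 53 ALIAS records don't return a CNAME, so also inspect the resolved answer
dig +short images.example.com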
Enhanced Protection against Domain Fronting
CloudFront will also soon be implementing enhanced protections against so-called “Domain Fronting”. Domain Fronting is when a non-standard client makes a TLS/SSL connection to a certain name, but then makes an HTTPS request for an unrelated name. For example, the TLS connection may connect to “www.example.com” but then issue a request for “www.example.org”.
In certain circumstances this is normal and expected. For example, browsers can re-use persistent connections for any domain that is listed in the same SSL Certificate, and these are considered related domains. But in other cases, tools including malware can use this technique between completely unrelated domains to evade restrictions and blocks that can be imposed at the TLS/SSL layer.
To be clear, this technique can’t be used to impersonate domains. The clients are non-standard and are working around the usual TLS/SSL checks that ordinary clients impose. But clearly, no customer ever wants to find that someone else is masquerading as their innocent, ordinary domain. Although these cases are also already handled as a breach of our AWS Terms of Service, in the coming weeks we will be checking that the account that owns the certificate we serve for a particular connection always matches the account that owns the request we handle on that connection. As ever, the security of our customers is our top priority, and we will continue to provide enhanced protection against misconfigurations and abuse from unrelated parties.
Interested in additional AWS Security news? Follow the AWS Security Blog on Twitter.
In a multi-account environment where you require connectivity between accounts, and perhaps connectivity between cloud and on-premises workloads, the demand for a robust Domain Name Service (DNS) that’s capable of name resolution across all connected environments will be high.
The most common solution is to implement local DNS in each account and use conditional forwarders for DNS resolutions outside of this account. While this solution might be efficient for a single-account environment, it becomes complex in a multi-account environment.
In this post, I will provide a solution to implement central DNS for multiple accounts. This solution reduces the number of DNS servers and forwarders needed to implement cross-account domain resolution. I will show you how to configure this solution in four steps:
Set up your Central DNS account.
Set up each participating account.
Create Route53 associations.
Configure on-premises DNS (if applicable).
Solution overview
In this solution, you use AWS Directory Service for Microsoft Active Directory (AWS Managed Microsoft AD) as a DNS service in a dedicated account in a Virtual Private Cloud (DNS-VPC).
The DNS service included in AWS Managed Microsoft AD uses conditional forwarders to forward domain resolution to either Amazon Route 53 (for domains in the awscloud.com zone) or to on-premises DNS servers (for domains in the example.com zone). You’ll use AWS Managed Microsoft AD as the primary DNS server for other application accounts in the multi-account environment (participating accounts).
A participating account is any application account that hosts a VPC and uses the centralized AWS Managed Microsoft AD as the primary DNS server for that VPC. Each participating account has a private hosted zone with a unique zone name to represent this account (for example, business_unit.awscloud.com).
You associate the DNS-VPC with the unique hosted zone in each of the participating accounts. This allows AWS Managed Microsoft AD to use Route 53 to resolve all registered domains in the private hosted zones in participating accounts.
The following diagram shows how the various services work together:
Figure 1: Diagram showing the relationship between all the various services
In this diagram, all VPCs in participating accounts use Dynamic Host Configuration Protocol (DHCP) option sets. The option sets configure EC2 instances to use the centralized AWS Managed Microsoft AD in DNS-VPC as their default DNS Server. You also configure AWS Managed Microsoft AD to use conditional forwarders to send domain queries to Route53 or on-premises DNS servers based on query zone. For domain resolution across accounts to work, we associate DNS-VPC with each hosted zone in participating accounts.
If, for example, server.pa1.awscloud.com needs to resolve addresses in the pa3.awscloud.com domain, the sequence shown in the following diagram happens:
Figure 2: How domain resolution across accounts works
1.1: server.pa1.awscloud.com sends a domain name lookup for server.pa3.awscloud.com to its default DNS server, which is the DNS server defined in the DHCP option set (AWS Managed Microsoft AD in DNS-VPC).
1.2: AWS Managed Microsoft AD forwards the name resolution to Route 53 because the name is in the awscloud.com zone.
1.3: Route 53 resolves the name to the IP address of server.pa3.awscloud.com because DNS-VPC is associated with the private hosted zone pa3.awscloud.com.
Similarly, if server.example.com needs to resolve server.pa3.awscloud.com, the following happens:
2.1: server.example.com sends a domain name lookup for server.pa3.awscloud.com to the on-premises DNS server.
2.2: The on-premises DNS server, using a conditional forwarder, forwards the lookup to AWS Managed Microsoft AD in DNS-VPC.
The request then follows steps 1.2 and 1.3 above: AWS Managed Microsoft AD forwards the resolution to Route 53, which resolves the name to the IP address of server.pa3.awscloud.com because DNS-VPC is associated with the private hosted zone pa3.awscloud.com.
Step 1: Set up a centralized DNS account
In previous AWS Security Blog posts, Drew Dennis covered a couple of options for establishing DNS resolution between on-premises networks and Amazon VPC. In this post, he showed how you can use AWS Managed Microsoft AD (provisioned with AWS Directory Service) to provide DNS resolution with forwarding capabilities.
To set up a centralized DNS account, you can follow the same steps in Drew’s post to create AWS Managed Microsoft AD and configure the forwarders to send DNS queries for awscloud.com to the default, VPC-provided DNS, and to forward example.com queries to the on-premises DNS server.
Here are a few considerations while setting up central DNS:
The VPC that hosts AWS Managed Microsoft AD (DNS-VPC) will be associated with all private hosted zones in participating accounts.
To be able to resolve domain names across AWS and on-premises, connectivity through Direct Connect or VPN must be in place.
Step 2: Set up participating accounts
The steps I suggest in this section should be applied individually in each application account that’s participating in central DNS resolution.
Create the VPC(s) that will host your resources in each participating account.
Create VPC Peering between local VPC(s) in each participating account and DNS-VPC.
Create a private hosted zone in Route 53. Hosted zone domain names must be unique across all accounts. In the diagram above, we used pa1.awscloud.com / pa2.awscloud.com / pa3.awscloud.com. You could also use a combination of environment and business unit: for example, you could use pa1.dev.awscloud.com to achieve uniqueness.
Associate VPC(s) in each participating account with the local private hosted zone.
The next step is to change the default DNS servers on each VPC by using a DHCP option set:
Follow these steps to create a new DHCP option set. In the DNS Servers field, make sure to enter the private IP addresses of the two AWS Managed Microsoft AD DNS servers that were created in DNS-VPC:
Figure 3: The “Create DHCP options set” dialog box
Follow these steps to assign the DHCP option set to your VPC(s) in each participating account.
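If you prefer to script this step instead of using the console, the equivalent AWS CLI calls look roughly like the following. This is a sketch with placeholder values: the two DNS server IPs stand in for the AWS Managed Microsoft AD addresses in DNS-VPC, and the options set and VPC IDs are hypothetical.

# Create a DHCP option set pointing at the AWS Managed Microsoft AD DNS servers in DNS-VPC
aws ec2 create-dhcp-options \
    --dhcp-configurations "Key=domain-name-servers,Values=10.0.0.10,10.0.0.11"

# Associate the returned option set with a participating account's VPC
aws ec2 associate-dhcp-options \
    --dhcp-options-id dopt-0123456789abcdef0 \
    --vpc-id vpc-0123456789abcdef0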
Step 3: Associate DNS-VPC with private hosted zones in each participating account
The next steps will associate DNS-VPC with the private hosted zone in each participating account. This allows instances in DNS-VPC to resolve domain records created in these hosted zones. If you need them, here are more details on associating a private hosted zone with a VPC in a different account.
In each participating account, create the authorization using the private hosted zone ID from the previous step, the region, and the VPC ID that you want to associate (DNS-VPC).
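For reference, the authorization and the association can also be performed with the AWS CLI. The sketch below uses placeholder IDs; the first command runs in the participating account and the second in the central DNS account.

# In the participating account: authorize DNS-VPC to be associated with the private hosted zone
aws route53 create-vpc-association-authorization \
    --hosted-zone-id Z1EXAMPLE \
    --vpc VPCRegion=us-east-1,VPCId=vpc-0dnsvpc0example

# In the central DNS account: associate DNS-VPC with the participating account's hosted zone
aws route53 associate-vpc-with-hosted-zone \
    --hosted-zone-id Z1EXAMPLE \
    --vpc VPCRegion=us-east-1,VPCId=vpc-0dnsvpc0example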
After completing these steps, AWS Managed Microsoft AD in the centralized DNS account should be able to resolve domain records in the private hosted zone in each participating account.
Step 4: Set up on-premises DNS servers
This step is necessary if you would like to resolve AWS private domains from on-premises servers. The task comes down to configuring on-premises forwarders to send DNS queries to AWS Managed Microsoft AD in DNS-VPC for all domains in the awscloud.com zone.
The steps to implement conditional forwarders vary by DNS product. Follow your product’s documentation to complete this configuration.
Summary
I introduced a simplified solution to implement central DNS resolution in a multi-account environment that could also be extended to support DNS resolution between on-premises resources and AWS. This can help reduce operational effort and the number of resources needed to implement cross-account domain resolution.
If you have feedback about this post, submit comments in the Comments section below. If you have questions about this post, start a new thread on the AWS Directory Service forum or contact AWS Support.
Want more AWS Security news? Follow us on Twitter.
Today, I’m excited to announce the launch of .BOT, a new generic top-level domain (gTLD) from Amazon. Customers can use .BOT domains to provide an identity and portal for their bots. Fitness bots, Slack bots, e-commerce bots, and more can all benefit from an easy-to-access .BOT domain. The word “bot” was the 4th most registered domain keyword within the .COM TLD in 2016, with more than 6000 domains per month. A .BOT domain allows customers to provide a definitive internet identity for their bots as well as enhance SEO performance.
Below, I’ll walk through the experience of registering and provisioning a domain for my bot, whereml.bot. Then we’ll look at setting up the domain as a hosted zone in Amazon Route 53. Let’s get started.
Registering a .BOT domain
First, I’ll head over to https://amazonregistry.com/bot, type in a new domain, and click the magnifying glass to make sure my domain is available and get taken to the registration wizard.
Next, I have the opportunity to choose how I want to verify my bot. I build all of my bots with Amazon Lex so I’ll select that in the drop down and get prompted for instructions specific to AWS. If I had my bot hosted somewhere else I would need to follow the unique verification instructions for that particular framework.
To verify my Lex bot I need to give the Amazon Registry permissions to invoke the bot and verify its existence. I’ll do this by creating an AWS Identity and Access Management (IAM) cross-account role and attaching the AmazonLexReadOnly policy to that role. This is easily accomplished in the AWS Console. Be sure to provide the account number and external ID shown on the registration page.
Now I’ll add read-only permissions to our Amazon Lex bots.
I’ll give my role a fancy name like DotBotCrossAccountVerifyRole and a description so it’s easy to remember why I made it. Then I’ll click create to create the role and be transported to the role summary page.
Finally, I’ll copy the ARN from the created role and save it for my next step.
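If you’d rather script the role setup than click through the console, the AWS CLI equivalent looks roughly like this. Treat it as a sketch: the account number and external ID are placeholders for the values shown on the registration page.

# Trust policy allowing the Amazon Registry account to assume the role,
# constrained by the external ID from the registration page (placeholder values)
cat > trust-policy.json <<'EOF'
{
  "Version": "2012-10-17",
  "Statement": [{
    "Effect": "Allow",
    "Principal": {"AWS": "arn:aws:iam::111122223333:root"},
    "Action": "sts:AssumeRole",
    "Condition": {"StringEquals": {"sts:ExternalId": "EXAMPLE-EXTERNAL-ID"}}
  }]
}
EOF

aws iam create-role \
    --role-name DotBotCrossAccountVerifyRole \
    --assume-role-policy-document file://trust-policy.json

# Attach the AWS managed read-only policy for Amazon Lex
aws iam attach-role-policy \
    --role-name DotBotCrossAccountVerifyRole \
    --policy-arn arn:aws:iam::aws:policy/AmazonLexReadOnly

# Grab the role ARN for the registration form
aws iam get-role --role-name DotBotCrossAccountVerifyRole --query Role.Arn --output text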
Here I’ll add all the details of my Amazon Lex bot. If you haven’t made a bot yet, you can follow the tutorial to build a basic bot. I can refer to any alias I’ve deployed, but if I just want to grab the latest published bot I can pass in $LATEST as the alias. Finally, I’ll click Validate and proceed to registering my domain.
Amazon Registry works with a partner, EnCirca, to register our domains, so we’ll select them and optionally grab Site Builder. I know how to sling some HTML and JavaScript together, so I’ll pass on the Site Builder side of things.
After I click continue, we’re taken to EnCirca’s website to finalize the registration. With any luck, within a few minutes of purchasing and completing the registration, we should receive an email with some good news:
Alright, now that we have a domain name let’s find out how to host things on it.
Using Amazon Route53 with a .BOT domain
Amazon Route 53 is a highly available and scalable DNS service with robust APIs, health checks, service discovery, and many other features. I definitely want to use this to host my new domain. The first thing I’ll do is navigate to the Route 53 console and create a hosted zone with the same name as my domain.
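The console works fine here, but for completeness this is roughly what the same step looks like from the AWS CLI; the hosted zone ID in the second command is a placeholder for the ID returned by the first.

# Create the public hosted zone for the new domain
aws route53 create-hosted-zone \
    --name whereml.bot \
    --caller-reference "whereml-bot-$(date +%s)"

# List the name servers Route 53 assigned to the zone
aws route53 get-hosted-zone --id Z3EXAMPLE --query DelegationSet.NameServers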
Great! Now, I need to take the Name Server (NS) records that Route53 created for me and use EnCirca’s portal to add these as the authoritative nameservers on the domain.
Now I just add my records to my hosted zone and I should be able to serve traffic! Way cool, I’ve got my very own .bot domain for @WhereML.
Next Steps
I could and should add to the security of my site by creating TLS certificates for people who intend to access my domain over TLS. Luckily, with AWS Certificate Manager (ACM), this is extremely straightforward, and I’ve got my subdomains and root domain verified in just a few clicks.
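A minimal sketch of that ACM request from the CLI, assuming DNS validation and the whereml.bot domain (the region matters if the certificate will later be used with CloudFront, which requires us-east-1):

# Request a certificate covering the root domain and all subdomains
aws acm request-certificate \
    --domain-name whereml.bot \
    --subject-alternative-names "*.whereml.bot" \
    --validation-method DNS \
    --region us-east-1
# ACM returns a CertificateArn; add the DNS validation CNAMEs it provides
# to the Route 53 hosted zone to complete validation.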
I could create a CloudFront distribution to front an S3 static single-page application to host my entire chatbot and invoke Amazon Lex with an Amazon Cognito identity right from the browser.
Today we’re releasing a new machine learning feature in Amazon Kinesis Data Analytics for detecting “hotspots” in your streaming data. We launched Kinesis Data Analytics in August of 2016 and we’ve continued to add features since. As you may already know, Kinesis Data Analytics is a fully managed real-time processing engine for streaming data that lets you write SQL queries to derive meaning from your data and output the results to Kinesis Data Firehose, Kinesis Data Streams, or even an AWS Lambda function. The new HOTSPOTS function adds to the existing machine learning capabilities in Kinesis Data Analytics that allow customers to leverage unsupervised, streaming-based machine learning algorithms. Customers don’t need to be experts in data science or machine learning to take advantage of these capabilities.
Hotspots
The HOTSPOTS function is a new Kinesis Data Analytics SQL function you can use to identify relatively dense regions in your data without having to explicitly build and train complicated machine learning models. You can identify subsections of your data that need immediate attention and take action programmatically by streaming the hotspots out to a Kinesis data stream, to a Firehose delivery stream, or by invoking an AWS Lambda function.
There are a ton of really cool scenarios where this could make your operations easier. Imagine a ride-share program or autonomous vehicle fleet communicating spatiotemporal data about traffic jams and congestion, or a datacenter where a number of servers start to overheat indicating an HVAC issue. HOTSPOTS is not limited to spatiotemporal data and you could apply it across many problem domains.
The function follows some simple syntax and accepts the DOUBLE, INTEGER, FLOAT, TINYINT, SMALLINT, REAL, and BIGINT data types.
The HOTSPOTS function takes a cursor as input and returns a JSON string describing the hotspot. This will be easier to understand with an example.
Using Kinesis Data Analytics to Detect Hotspots
Let’s take a simple data set from the NYC Taxi and Limousine Commission that tracks yellow cab pickup and dropoff locations. Most of this data is already on S3 and publicly accessible at s3://nyc-tlc/. We will create a small Python script to load our Kinesis data stream with taxi records, which will feed our Kinesis Data Analytics application. Finally, we’ll output all of this to a Kinesis Data Firehose connected to an Amazon Elasticsearch Service cluster for visualization with Kibana. I know from living in New York for 5 years that we’ll probably find a hotspot or two in this data.
First, we’ll create an input Kinesis stream and start sending our NYC taxi ride data into it. I just wrote a quick Python script to read from one of the CSV files and used boto3 to push the records into Kinesis. You can put the records in whatever way works for you.
import csv
import json

import boto3


def chunkit(l, n):
    """Yield successive n-sized chunks from l."""
    for i in range(0, len(l), n):
        yield l[i:i + n]


kinesis = boto3.client("kinesis")

with open("taxidata2.csv") as f:
    reader = csv.DictReader(f)
    # PutRecords accepts at most 500 records per call, so send the rows in chunks of 500
    records = chunkit([{"PartitionKey": "taxis", "Data": json.dumps(row)} for row in reader], 500)
    for chunk in records:
        kinesis.put_records(StreamName="TaxiData", Records=chunk)
Next, we’ll create the Kinesis Data Analytics application and add our input stream with our taxi data as the source.
Next we’ll automatically detect the schema.
Now we’ll create a quick SQL Script to detect our hotspots and add that to the Real Time Analytics section of our application.
CREATE OR REPLACE STREAM "DESTINATION_SQL_STREAM" (
    "pickup_longitude" DOUBLE,
    "pickup_latitude" DOUBLE,
    HOTSPOTS_RESULT VARCHAR(10000)
);

CREATE OR REPLACE PUMP "STREAM_PUMP" AS INSERT INTO "DESTINATION_SQL_STREAM"
    SELECT "pickup_longitude", "pickup_latitude", "HOTSPOTS_RESULT" FROM
        TABLE(HOTSPOTS(
            CURSOR(SELECT STREAM * FROM "SOURCE_SQL_STREAM_001"),
            1000,
            0.013,
            20
        )
    );
Our HOTSPOTS function takes an input stream, a window size, scan radius, and a minimum number of points to count as a hotspot. The values for these are application dependent but you can tinker with them in the console easily until you get the results you want. There are more details about the parameters themselves in the documentation. The HOTSPOTS_RESULT returns some useful JSON that would let us plot bounding boxes around our hotspots:
When we have our desired results, we can save the script and connect our application to our Amazon Elasticsearch Service Firehose delivery stream. We can run an intermediate Lambda function in the Firehose delivery stream to transform our records into a format more suitable for geographic work. Then we can update our mapping in Elasticsearch to index the hotspot objects as Geo-Shapes.
Finally, we can connect to Kibana and visualize the results.
Looks like Manhattan is pretty busy!
Available Now
This feature is available now in all existing regions with Kinesis Data Analytics. I think this is a really interesting new feature of Kinesis Data Analytics that can bring immediate value to many applications. Let us know what you build with it on Twitter or in the comments!
I’ve previously explained how to set up Cassandra in AWS. The described setup works, but in some cases it may not be sufficient. E.g. it doesn’t give you an easy way to make and restore backups, and adding new nodes relies on a custom python script that randomly selects a seed. That’s where Netflix’s Priam comes in.
My main reason for setting up Priam is the backup/restore functionality that it offers. All other ways to do backups are very tedious, and Priam happens to have implemented the important bits – the snapshotting and the incremental backups.
Priam is a bit tricky to get running, though. The setup guide is not too detailed and not easy to find (it’s the last, not immediately visible item in the wiki). First, it has one branch per Cassandra version, so you have to check out the proper branch and build it. I immediately hit an issue there, as their naming doesn’t allow Eclipse to import the Gradle project. Within 24 hours I reported 3 issues, which isn’t ideal. Priam doesn’t support dynamic SimpleDB names, and doesn’t let you override bundled properties via the command line. I hope there aren’t bigger issues. The ones that I encountered, I fixed and made a pull request.
What does the setup look like?
Append a javaagent to the JVM options
Run the Priam web application
It automatically replaces most of cassandra.yaml, including the seed provider (i.e., how the node finds other nodes in the cluster)
Run Cassandra
It fetches seed information (which is stored in AWS SimpleDB) and connects to a cluster
I decided to run the war file with a standalone jetty runner, rather than installing Tomcat. In terms of shell scripts, the core bits look like this (in addition to the shell script in the original post that is run on initialization of the node):
# Get the Priam war file and jar file
aws s3 cp s3://$BUCKET_NAME/priam-web-3.12.0-SNAPSHOT.war ~/
aws s3 cp s3://$BUCKET_NAME/priam-cass-extensions-3.12.0-SNAPSHOT.jar /usr/share/cassandra/lib/priam-cass-extensions.jar
# Set the Priam agent
echo "-javaagent:/usr/share/cassandra/lib/priam-cass-extensions.jar" >> /etc/cassandra/conf/jvm.options
# Download jetty-runner to be able to run the Priam war file from the command line
wget http://central.maven.org/maven2/org/eclipse/jetty/jetty-runner/9.4.8.v20171121/jetty-runner-9.4.8.v20171121.jar
nohup java -Dpriam.clustername=LogSentinelCluster -Dpriam.sdb.instanceIdentity.region=$EC2_REGION -Dpriam.s3.bucket=$BACKUP_BUCKET \
-Dpriam.sdb.instanceidentity.domain=$INSTANCE_IDENTITY_DOMAIN -Dpriam.sdb.properties.domain=$PROPERTIES_DOMAIN \
-Dpriam.client.sslEnabled=true -Dpriam.internodeEncryption=all -Dpriam.rpc.server.type=sync \
-Dpriam.partitioner=org.apache.cassandra.dht.Murmur3Partitioner -Dpriam.backup.retention.days=7 \
-Dpriam.backup.hour=$BACKUP_HOUR -Dpriam.vnodes.numTokens=256 -Dpriam.thrift.enabled=false \
-jar jetty-runner-9.4.8.v20171121.jar --path /Priam ~/priam-web-3.12.0-SNAPSHOT.war &
while ! echo exit | nc $BIND_IP 8080; do sleep 10; done
echo "Started Priam web package"
service cassandra start
chkconfig cassandra on
while ! echo exit | nc $BIND_IP 9042; do sleep 10; done
BACKUP_BUCKET, PROPERTIES_DOMAIN and INSTANCE_IDENTITY_DOMAIN are supplied via a CloudFormation script (as we can’t know the exact names in advance – especially for SimpleDB). Note that these properties won’t work in the main repo – I added them in my pull request.
In order for that to work, you need to have the two SimpleDB domains created (e.g. by CloudFormation). It is possible that you could replace SimpleDB with some other data storage (and not rely on AWS), but that’s out of scope for now.
The result of running Priam would be that you have your Cassandra nodes in SimpleDB (you can browse it using this chrome extension as AWS doesn’t offer any UI) and, of course, backups will be automatically created in the backup S3 Bucket.
You can then restore a backup by logging in to each node and executing:
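The restore is triggered through the Priam web application running on the node. The exact REST path can differ between Priam versions, so treat the following as an assumption rather than a verified command and check it against the branch you built (the /Priam context path comes from the jetty-runner invocation above).

# Assumed Priam REST endpoint for restore; verify the path for your Priam version
curl "http://$BIND_IP:8080/Priam/REST/v1/restore?daterange=201803010000,201803020000"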
You specify the time range for the restore. Still not ideal, as one would hope to have a one-click restore, but much better than rolling out your own backup & restore infrastructure.
One very important note here – vnodes are not supported. My original cluster had a default of 256 vnodes per machine and now it has just 1, because Priam doesn’t support anything other than 1. That’s a pity, since vnodes are the recommended way to set up Cassandra. Apparently Netflix doesn’t use them, however. There’s a work-in-progress branch for that, but it was abandoned 5 years ago. Fortunately, there’s a fresh pull request with vnode support that can be used in conjunction with my pull request from this branch.
Priam replaces some Cassandra defaults with other values so you might want to compare your current setup and the newly generated cassandra.yaml. Overall it doesn’t feel super-production ready, but apparently it is, as Netflix is using it in production.
Let’s Encrypt has announced that ACMEv2 (Automated Certificate Management Environment) and wildcard certificate support is live. ACMEv2 is an updated version of the ACME protocol that has gone through the IETF standards process. Wildcard certificates allow you to secure all subdomains of a domain with a single certificate. (Thanks to Alphonse Ogulla)
We’re relentlessly innovating on your behalf at AWS, especially when it comes to security. Last November, we launched Amazon GuardDuty, a continuous security monitoring and threat detection service that incorporates threat intelligence, anomaly detection, and machine learning to help protect your AWS resources, including your AWS accounts. Many large customers, including General Electric, Autodesk, and MapBox, discovered these benefits and have quickly adopted the service for its ease of use and improved threat detection. In this post, I want to show you how easy it is for everyone to get started—large and small—and discuss our rapid iteration on the service.
After more than seven years at AWS, I still find myself staying up at night obsessing about unnecessary complexity. Sounds fun, right? Well, I don’t have to tell you that there’s a lot of unnecessary complexity and undifferentiated heavy lifting in security. Most security tooling requires significant care and feeding by humans. It’s often difficult to configure and manage, it’s hard to know if it’s working properly, and it’s costly to procure and run. As a result, it’s not accessible to all customers, and for those that do get their hands on it, they spend a lot of highly-skilled resources trying to keep it operating at its potential.
Even for the most skilled security teams, it can be a struggle to ensure that all resources are covered, especially in the age of virtualization, where new accounts, new resources, and new users can come and go across your organization at a rapid pace. Furthermore, attackers have come up with ingenious ways of giving you the impression your security solution is working when, in fact, it has been completely disabled.
I’ve spent a lot of time obsessing about these problems. How can we use the Cloud to not just innovate in security, but also make it easier, more affordable, and more accessible to all? Our ultimate goal is to help you better protect your AWS resources, while also freeing you up to focus on the next big project.
With GuardDuty, we really turned the screws on unnecessary complexity, distilling continuous security monitoring and threat detection down to a binary decision—it’s either on or off. That’s it. There’s no software, virtual appliances, or agents to deploy, no data sources to enable, and no complex permissions to create. You don’t have to write custom rules or become an expert at machine learning. All we ask of you is to simply turn the service on with a single click or API call.
GuardDuty operates completely on our infrastructure, so there’s no risk of disrupting your workloads. By providing a hard hypervisor boundary between the code running in your AWS accounts and the code running in GuardDuty, we can help ensure full coverage while making it harder for a misconfiguration or an ingenious attacker to change that. When we detect something interesting, we generate a security finding and deliver it to you through the GuardDuty console and AWS CloudWatch Events. This makes it possible to simply view findings in GuardDuty or push them to an existing SIEM or workflow system. We’ve already seen customers take it a step further using AWS Lambda to automate actions such as changing security groups, isolating instances, or rotating credentials.
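To illustrate that integration, here is a rough sketch of a CloudWatch Events rule that matches all GuardDuty findings and forwards them to a target; the rule name and SNS topic ARN are placeholders, and you could just as easily point the target at a Lambda function.

# Match every GuardDuty finding in this account and region
aws events put-rule \
    --name guardduty-findings \
    --event-pattern '{"source":["aws.guardduty"],"detail-type":["GuardDuty Finding"]}'

# Forward matching findings to an SNS topic (placeholder ARN)
aws events put-targets \
    --rule guardduty-findings \
    --targets Id=1,Arn=arn:aws:sns:us-east-1:111122223333:guardduty-alerts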
Now… are you ready to get started? It’s this simple:
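In the console that is a single click; if you prefer the CLI, the equivalent single API call looks like this:

# Turn on GuardDuty for the current account and region
aws guardduty create-detector --enable

# Confirm the detector is running
aws guardduty list-detectors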
So, you’ve got it enabled, now what can GuardDuty detect?
As soon as you enable the service, it immediately starts consuming multiple metadata streams at scale, including AWS CloudTrail, VPC Flow Logs, and DNS logs. It compares what it finds to fully managed threat intelligence feeds containing the latest malicious IPs and domains. In parallel, GuardDuty profiles all activity in your account, which allows it to learn the behavior of your resources so it can identify highly suspicious activity that suggests a threat.
The threat-intelligence-based detections can identify activity such as an EC2 instance being probed or brute-forced by an attacker. If an instance is compromised, it can detect attempts at lateral movement, communication with a known malware or command-and-control server, crypto-currency mining, or an attempt to exfiltrate data through DNS.
Where it gets more interesting is the ability to detect AWS account-focused threats. For example, if an attacker gets a hold of your AWS account credentials—say, one of your developers exposes credentials on GitHub—GuardDuty will identify unusual account behavior, such as an unusual instance type being deployed in a region that has never been used, suspicious attempts to inventory your resources by calling unusual patterns of list APIs or describe APIs, or an effort to obscure user activity by disabling CloudTrail logging.
Our obsession with removing complexity meant making these detections fully-managed. We take on all the heavy lifting of building, maintaining, measuring, and improving the detections so that you can focus on what to do when an event does occur.
What’s new?
When we launched at the end of November, we had thirty-four distinct detections in GuardDuty, but we weren’t stopping there. Many of these detections are already on their second or third continuous improvement iteration. In less than three months, we’ve also added twelve more, including nine CloudTrail-based anomaly detections that identify highly suspicious activity in your accounts. These new detections intelligently catch changes to, or reconnaissance of, network, resource, and user permissions, as well as anomalous activity in EC2, CloudTrail, and AWS console log-ins. These are detections we’ve built based on what we’ve learned from observed attack patterns across the scale of AWS.
The intelligence in these detections is built around the identification of highly sensitive AWS API calls that are invoked under one or more highly suspicious circumstances. The combination of “highly sensitive” and “highly suspicious” is important. Highly sensitive APIs are those that change the security posture of an account by adding or elevating users, user policies, roles, or account-key IDs (AKIDs). Highly suspicious circumstances are determined from underlying models profiled at the API level by GuardDuty. The result is the ability to catch real threats, while decreasing false positives, limiting false negatives, and reducing alert noise.
It’s still day one
As we like to say in Amazon, it’s still day one. I’m excited about what we’ve built with GuardDuty, but we’re not going to stop improving, even if you’re already happy with what we’ve built. Check out the list of new detections below and all of the GuardDuty detections in our online documentation. Keep the feedback coming as it’s what powers us at AWS.
Now, I have to stop writing because my wife tells me I have some unnecessary complexity to remove from our closet.
New GuardDuty CloudTrail-based anomaly detections
Recon:IAMUser/NetworkPermissions Situation: An IAM user invoked an API commonly used to discover the network access permissions of existing security groups, ACLs, and routes in your AWS account. Description: This finding is triggered when network configuration settings in your AWS environment are probed under suspicious circumstances. For example, if an IAM user in your AWS environment invoked the DescribeSecurityGroups API with no prior history of doing so. An attacker might use stolen credentials to perform this reconnaissance of network configuration settings before executing the next stage of their attack, which might include changing network permissions or making use of existing openings in the network configuration.
Recon:IAMUser/ResourcePermissions Situation: An IAM user invoked an API commonly used to discover the permissions associated with various resources in your AWS account. Description: This finding is triggered when resource access permissions in your AWS account are probed under suspicious circumstances. For example, if an IAM user invoked the DescribeInstances API with no prior history of doing so. An attacker might use stolen credentials to perform this reconnaissance of your AWS resources in order to find valuable information or determine the capabilities of the credentials they already have.
Recon:IAMUser/UserPermissions Situation: An IAM user invoked an API commonly used to discover the users, groups, policies, and permissions in your AWS account. Description: This finding is triggered when user permissions in your AWS environment are probed under suspicious circumstances. For example, if an IAM user invoked the ListInstanceProfilesForRole API with no prior history of doing so. An attacker might use stolen credentials to perform this reconnaissance of your IAM users and roles to determine the capabilities of the credentials they already have or to find more permissive credentials that are vulnerable to lateral movement.
Persistence:IAMUser/NetworkPermissions Situation: An IAM user invoked an API commonly used to change the network access permissions for security groups, routes, and ACLs in your AWS account. Description: This finding is triggered when network configuration settings are changed under suspicious circumstances. For example, if an IAM user in your AWS environment invoked the CreateSecurityGroup API with no prior history of doing so. Attackers often attempt to change security groups, allowing certain inbound traffic on various ports to improve their ability to access the bot they might have planted on your EC2 instance.
Persistence:IAMUser/ResourcePermissions Situation: An IAM user invoked an API commonly used to change the security access policies of various resources in your AWS account. Description: This finding is triggered when a change is detected to policies or permissions attached to AWS resources. For example, if an IAM user in your AWS environment invoked the PutBucketPolicy API with no prior history of doing so. Some services, such as Amazon S3, support resource-attached permissions that grant one or more IAM principals access to the resource. With stolen credentials, attackers can change the policies attached to a resource, granting themselves future access to that resource.
Persistence:IAMUser/UserPermissions Situation: An IAM user invoked an API commonly used to add, modify, or delete IAM users, groups, or policies in your AWS account. Description: This finding is triggered by suspicious changes to the user-related permissions in your AWS environment. For example, if an IAM user in your AWS environment invoked the AttachUserPolicy API with no prior history of doing so. In an effort to maximize their ability to access the account even after they’ve been discovered, attackers can use stolen credentials to create new users, add access policies to existing users, create access keys, and so on. The owner of the account might notice that a particular IAM user or password was stolen and delete it from the account, but might not delete other users that were created by the fraudulently created admin IAM user, leaving their AWS account still accessible to the attacker.
ResourceConsumption:IAMUser/ComputeResources Situation: An IAM user invoked an API commonly used to launch compute resources like EC2 Instances. Description: This finding is triggered when EC2 instances in your AWS environment are launched under suspicious circumstances. For example, if an IAM user invoked the RunInstances API with no prior history of doing so. This might be an indication of an attacker using stolen credentials to access compute time (possibly for cryptocurrency mining or password cracking). It can also be an indication of an attacker using an EC2 instance in your AWS environment and its credentials to maintain access to your account.
Stealth:IAMUser/LoggingConfigurationModified Situation: An IAM user invoked an API commonly used to stop CloudTrail logging, delete existing logs, and otherwise eliminate traces of activity in your AWS account. Description: This finding is triggered when the logging configuration in your AWS account is modified under suspicious circumstances. For example, if an IAM user invoked the StopLogging API with no prior history of doing so. This can be an indication of an attacker trying to cover their tracks by eliminating any trace of their activity.
UnauthorizedAccess:IAMUser/ConsoleLogin Situation: An unusual console login by an IAM user in your AWS account was observed. Description: This finding is triggered when a console login is detected under suspicious circumstances. For example, if an IAM user invoked the ConsoleLogin API from a never-before-used client or an unusual location. This could be an indication of stolen credentials being used to gain access to your AWS account, or a valid user accessing the account in an invalid or less secure manner (for example, not over an approved VPN).
New GuardDuty threat intelligence based detections
Trojan:EC2/PhishingDomainRequest!DNS This detection occurs when an EC2 instance queries domains involved in phishing attacks.
Trojan:EC2/BlackholeTraffic!DNS This detection occurs when an EC2 instance connects to a black hole domain. Black holes refer to places in the network where incoming or outgoing traffic is silently discarded without informing the source that the data didn’t reach its intended recipient.
Trojan:EC2/DGADomainRequest.C!DNS This detection occurs when an EC2 instance queries algorithmically generated domains. Such domains are commonly used by malware and could be an indication of a compromised EC2 instance.
If you have feedback about this blog post, submit comments in the “Comments” section below. If you have questions about this blog post, start a new thread on the Amazon GuardDuty forum or contact AWS Support.
You can now enable your on-premises users to administer your AWS Directory Service for Microsoft Active Directory, also known as AWS Managed Microsoft AD. Using an Active Directory (AD) trust and the new AWS delegated AD security groups, you can grant administrative permissions to your on-premises users by managing group membership in your on-premises AD directory. This simplifies how you manage who can perform administration. It also makes it easier for your administrators because they can sign in to their existing workstation with their on-premises AD credential to administer your AWS Managed Microsoft AD.
AWS created new domain local AD security groups (AWS delegated groups) in your AWS Managed Microsoft AD directory. Each AWS delegated group has unique AD administrative permissions. Users that are members of the new AWS delegated groups get permissions to perform administrative tasks, such as adding users, configuring fine-grained password policies, and enabling Microsoft enterprise Certificate Authority. Because the AWS delegated groups are domain local in scope, you can use them through an AD trust to your on-premises AD. This eliminates the requirement to create and use separate identities to administer your AWS Managed Microsoft AD. Instead, by adding selected on-premises users to desired AWS delegated groups, you can grant your administrators some or all of the permissions. You can simplify this even further by adding on-premises AD security groups to the AWS delegated groups. This enables you to add and remove users from your on-premises AD security group so that they can manage administrative permissions in your AWS Managed Microsoft AD.
In this blog post, I will show you how to delegate permissions to your on-premises users to perform an administrative task–configuring fine-grained password policies–in your AWS Managed Microsoft AD directory. You can follow the steps in this post to delegate other administrative permissions, such as configuring group Managed Service Accounts and Kerberos constrained delegation, to your on-premises users.
Background
Until now, AWS Managed Microsoft AD delegated administrative permissions for your directory by creating AD security groups in your Organizational Unit (OU) and authorizing these AWS delegated groups for common administrative activities. The admin user in your directory created user accounts within your OU, and granted these users permissions to administer your directory by adding them to one or more of these AWS delegated groups.
However, if you used your AWS Managed Microsoft AD with a trust to an on-premises AD forest, you couldn’t add users from your on-premises directory to these AWS delegated groups. This is because AWS created the AWS delegated groups with global scope, which restricts adding users from another forest. This necessitated that you create different user accounts in AWS Managed Microsoft AD for the purpose of administration. As a result, AD administrators typically had to remember additional credentials for AWS Managed Microsoft AD.
To address this, AWS created new AWS delegated groups with domain local scope in a separate OU called AWS Delegated Groups. These new AWS delegated groups with domain local scope are more flexible and permit adding users and groups from other domains and forests. This allows your admin user to delegate your on-premises users and groups administrative permissions to your AWS Managed Microsoft AD directory.
Note: If you already have an existing AWS Managed Microsoft AD directory containing the original AWS delegated groups with global scope, AWS preserved the original AWS delegated groups in the event you are currently using them with identities in AWS Managed Microsoft AD. AWS recommends that you transition to use the new AWS delegated groups with domain local scope. All newly created AWS Managed Microsoft AD directories have the new AWS delegated groups with domain local scope only.
Now, I will show you the steps to delegate administrative permissions to your on-premises users and groups to configure fine-grained password policies in your AWS Managed Microsoft AD directory.
Prerequisites
For this post, I assume you are familiar with AD security groups and how security group scope rules work. I also assume you are familiar with AD trusts.
The instructions in this blog post require you to have the following components running:
An active AWS Managed Microsoft AD directory. To create a directory, follow the steps in Creating an AWS Managed Microsoft AD directory. You also need to know the password for the admin account so that you can add other users and groups to the AWS created AD security groups in the AWS Managed Microsoft AD directory.
An existing on-premises AD directory. Your on-premises AD directory must contain a user that you want to delegate permissions to manage your AWS Managed Microsoft AD directory.
A machine joined to your on-premises AD directory with Active Directory Users and Computers (ADUC) installed. You can install ADUC by installing Active Directory Administrative Tools on a Windows computer that you joined to your on-premises AD domain.
Solution overview
I will now show you how to manage which on-premises users have delegated permissions to administer your directory by efficiently using on-premises AD security groups to manage these permissions. I will do this by:
Adding on-premises groups to an AWS delegated group. In this step, you sign in to the management instance connected to your AWS Managed Microsoft AD directory as the admin user and add on-premises groups to AWS delegated groups.
Administering your AWS Managed Microsoft AD directory as an on-premises user. In this step, you sign in to a workstation connected to your on-premises AD using your on-premises credentials and administer your AWS Managed Microsoft AD directory.
For the purpose of this blog, I already have an on-premises AD directory (in this case, on-premises.com). I also created an AWS Managed Microsoft AD directory (in this case, corp.example.com) that I use with Amazon RDS for SQL Server. To enable Integrated Windows Authentication to my on-premises.com domain, I established a one-way outgoing trust from my AWS Managed Microsoft AD directory to my on-premises AD directory. To administer my AWS Managed Microsoft AD, I created an Amazon EC2 for Windows Server instance (in this case, Cloud Management). I also have an on-premises workstation (in this case, On-premises Management), that is connected to my on-premises AD directory.
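For readers who prefer to script that trust instead of using the console, the Directory Service API exposes it directly. This is a sketch with placeholder values for the directory ID, trust password, and on-premises DNS addresses.

# Create a one-way outgoing forest trust from AWS Managed Microsoft AD (corp.example.com)
# to the on-premises domain (on-premises.com); all values below are placeholders
aws ds create-trust \
    --directory-id d-1234567890 \
    --remote-domain-name on-premises.com \
    --trust-password 'ExampleTrustPassw0rd!' \
    --trust-direction "One-Way: Outgoing" \
    --trust-type Forest \
    --conditional-forwarder-ip-addrs 10.10.0.10 10.10.0.11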
The following diagram represents the relationships between the on-premises AD and the AWS Managed Microsoft AD directory.
The left side represents the AWS Cloud containing AWS Managed Microsoft AD directory. I connected the directory to the on-premises AD directory via a 1-way forest trust relationship. When AWS created my AWS Managed Microsoft AD directory, AWS created a group called AWS Delegated Fine Grained Password Policy Administrators that has permissions to configure fine-grained password policies in AWS Managed Microsoft AD.
The right side of the diagram represents the on-premises AD directory. I created a global AD security group called On-premises fine grained password policy admins and I configured it so all members can manage fine grained password policies in my on-premises AD. I have two administrators in my company, John and Richard, who I added as members of On-premises fine grained password policy admins. I want to enable John and Richard to also manage fine grained password policies in my AWS Managed Microsoft AD.
While I could add John and Richard to the AWS Delegated Fine Grained Password Policy Administrators individually, I want a more efficient way to delegate and remove permissions for on-premises users to manage fine grained password policies in my AWS Managed Microsoft AD. In fact, I want to assign permissions to the same people that manage password policies in my on-premises directory.
To do this, I will:
As admin user, add the On-premises fine grained password policy admins as member of the AWS Delegated Fine Grained Password Policy Administrators security group from my Cloud Management machine.
Manage who can administer password policies in my AWS Managed Microsoft AD directory by adding and removing users as members of the On-premises fine grained password policy admins. Doing so enables me to perform all my delegation work in my on-premises directory without the need to use a remote desktop protocol (RDP) session to my Cloud Management instance. In this case, Richard, who is a member of On-premises fine grained password policy admins group can now administer AWS Managed Microsoft AD directory from On-premises Management workstation.
Although I’m showing a specific case using fine grained password policy delegation, you can do this with any of the new AWS delegated groups and your on-premises groups and users.
Let’s get started.
Step 1 – Add on-premises groups to AWS delegated groups
In this step, open an RDP session to the Cloud Management instance and sign in as the admin user in your AWS Managed Microsoft AD directory. Then, add your users and groups from your on-premises AD to AWS delegated groups in AWS Managed Microsoft AD directory. In this example, I do the following:
Sign in to the Cloud Management instance with the user name admin and the password that you set for the admin user when you created your directory.
Open the Microsoft Windows Server Manager and navigate to Tools > Active Directory Users and Computers.
Switch to the tree view and navigate to corp.example.com > AWS Delegated Groups. Right-click AWS Delegated Fine Grained Password Policy Administrators and select Properties.
In the AWS Delegated Fine Grained Password Policy window, switch to Members tab and choose Add.
In the Select Users, Contacts, Computers, Service Accounts, or Groups window, choose Locations.
In the Locations window, select on-premises.com domain and choose OK.
In the Enter the object names to select box, enter on-premises fine grained password policy admins and choose Check Names.
Because I have a 1-way trust from AWS Managed Microsoft AD to my on-premises AD, Windows prompts me to enter credentials for an on-premises user account that has permissions to complete the search. If I had a 2-way trust and the admin account in my AWS Managed Microsoft AD has permissions to read my on-premises directory, Windows will not prompt me.
In the Windows Security window, enter the credentials for an account with permissions for on-premises.com and choose OK.
Click OK to add On-premises fine grained password policy admins group as a member of the AWS Delegated Fine Grained Password Policy Administrators group in your AWS Managed Microsoft AD directory.
At this point, any user that is a member of On-premises fine grained password policy admins group has permissions to manage password policies in your AWS Managed Microsoft AD directory.
Step 2 – Administer your AWS Managed Microsoft AD as on-premises user
Any member of the on-premises group(s) that you added to an AWS delegated group inherits the permissions of the AWS delegated group.
In this example, Richard signs in to the On-premises Management instance. Because Richard inherited permissions from the AWS Delegated Fine Grained Password Policy Administrators group, he can now administer fine grained password policies in the AWS Managed Microsoft AD directory using his on-premises credentials.
Sign in to the On-premises Management instance as Richard.
Open the Microsoft Windows Server Manager and navigate to Tools > Active Directory Users and Computers.
Switch to the tree view, right-click Active Directory Users and Computers, and then select Change Domain.
In the Change Domain window, enter corp.example.com, and then choose OK.
You’ll be connected to your AWS Managed Microsoft AD domain:
Richard can now administer the password policies. Because John is also a member of the AWS delegated group, John can also perform password policy administration the same way.
In the future, if Richard moves to another division within the company and you hire Judy as a replacement for Richard, you can simply remove Richard from the On-premises fine grained password policy admins group and add Judy to this group. Richard will no longer have administrative permissions, while Judy can now administer password policies for your AWS Managed Microsoft AD directory.
Summary
We’ve tried to make it easier for you to administer your AWS Managed Microsoft AD directory by creating AWS delegated groups with domain local scope. You can add your on-premises AD groups to the AWS delegated groups. You can then control who can administer your directory by managing group membership in your on-premises AD directory. Your administrators can sign in to their existing on-premises workstations using their on-premises credentials and administer your AWS Managed Microsoft AD directory. I encourage you to explore the new AWS delegated security groups by using Active Directory Users and Computers from the management instance for your AWS Managed Microsoft AD. To learn more about AWS Directory Service, see the AWS Directory Service home page. If you have questions, please post them on the Directory Service forum. If you have comments about this post, submit them in the “Comments” section below.
A customer has been successfully creating and running multiple Amazon Elasticsearch Service (Amazon ES) domains to support their business users’ search needs across products, orders, support documentation, and a growing suite of similar needs. The service has become heavily used across the organization. This led to some domains running at 100% capacity during peak times, while others began to run low on storage space. Because of this increased usage, the technical teams were in danger of missing their service level agreements. They contacted me for help.
This post shows how you can set up automated alarms to warn when domains need attention.
Solution overview
Amazon ES is a fully managed service that delivers Elasticsearch’s easy-to-use APIs and real-time analytics capabilities along with the availability, scalability, and security that production workloads require. The service offers built-in integrations with a number of other components and AWS services, enabling customers to go from raw data to actionable insights quickly and securely.
One of these other integrated services is Amazon CloudWatch. CloudWatch is a monitoring service for AWS Cloud resources and the applications that you run on AWS. You can use CloudWatch to collect and track metrics, collect and monitor log files, set alarms, and automatically react to changes in your AWS resources.
CloudWatch collects metrics for Amazon ES. You can use these metrics to monitor the state of your Amazon ES domains, and set alarms to notify you about high utilization of system resources. For more information, see Amazon Elasticsearch Service Metrics and Dimensions.
While the metrics are automatically collected, the missing piece is how to set alarms on these metrics at appropriate levels for each of your domains. This post includes sample Python code to evaluate the current state of your Amazon ES environment, and to set up alarms according to AWS recommendations and best practices.
There are two components to the sample solution:
es-check-cwalarms.py: This Python script checks the CloudWatch alarms that have been set for all Amazon ES domains in a given account and region.
es-create-cwalarms.py: This Python script sets up a set of CloudWatch alarms for a single given domain.
The sample code can also be found in the amazon-es-check-cw-alarms GitHub repo. The scripts are easy to extend or combine, as described in the section “Extensions and Adaptations”.
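To make the alarm definitions concrete, here is what one of them (the FreeStorageSpace check from the output below) would look like if you created it by hand with the AWS CLI; the account ID, SNS topic, and threshold are placeholders, and the script computes the threshold per domain rather than hard-coding it.

# Alarm when the domain's minimum free storage drops below ~20% of total storage (value in MB)
aws cloudwatch put-metric-alarm \
    --alarm-name "Test-Elasticsearch-iotfleet-FreeStorageSpace-Alarm" \
    --namespace "AWS/ES" \
    --metric-name FreeStorageSpace \
    --dimensions Name=DomainName,Value=iotfleet Name=ClientId,Value=111122223333 \
    --statistic Minimum \
    --period 60 \
    --evaluation-periods 1 \
    --threshold 819.2 \
    --comparison-operator LessThanOrEqualToThreshold \
    --alarm-actions arn:aws:sns:us-east-1:111122223333:es-alerts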
Assessing the current state
The first script, es-check-cwalarms.py, is used to give an overview of the configurations and alarm settings for all the Amazon ES domains in the given region. The script takes the following parameters:
python es-checkcwalarms.py -h
usage: es-checkcwalarms.py [-h] [-e ESPREFIX] [-n NOTIFY] [-f FREE][-p PROFILE] [-r REGION]
Checks a set of recommended CloudWatch alarms for Amazon Elasticsearch Service domains (optionally, those beginning with a given prefix).
optional arguments:
-h, --help show this help message and exit
-e ESPREFIX, --esprefix ESPREFIX Only check Amazon Elasticsearch Service domains that begin with this prefix.
-n NOTIFY, --notify NOTIFY List of CloudWatch alarm actions; e.g. ['arn:aws:sns:xxxx']
-f FREE, --free FREE Minimum free storage (MB) on which to alarm
-p PROFILE, --profile PROFILE IAM profile name to use
-r REGION, --region REGION AWS region for the domain. Default: us-east-1
The script first identifies all the domains in the given region (or, optionally, limits them to the subset that begins with a given prefix). It then starts running a set of checks against each one.
The script can be run from the command line or set up as a scheduled Lambda function. For example, for one customer, it was deemed appropriate to regularly run the script to check that alarms were correctly set for all domains. In addition, because configuration changes—cluster size increases to accommodate larger workloads being a common change—might require updates to alarms, this approach allowed the automatic identification of alarms no longer appropriately set as the domain configurations changed.
The output shown below is the output for one domain in my account.
Starting checks for Elasticsearch domain iotfleet , version is 53
Iotfleet Automated snapshot hour (UTC): 0
Iotfleet Instance configuration: 1 instances; type:m3.medium.elasticsearch
Iotfleet Instance storage definition is: 4 GB; free storage calced to: 819.2 MB
iotfleet Desired free storage set to (in MB): 819.2
iotfleet WARNING: Not using VPC Endpoint
iotfleet WARNING: Does not have Zone Awareness enabled
iotfleet WARNING: Instance count is ODD. Best practice is for an even number of data nodes and zone awareness.
iotfleet WARNING: Does not have Dedicated Masters.
iotfleet WARNING: Neither index nor search slow logs are enabled.
iotfleet WARNING: EBS not in use. Using instance storage only.
iotfleet Alarm ok; definition matches. Test-Elasticsearch-iotfleet-ClusterStatus.yellow-Alarm ClusterStatus.yellow
iotfleet Alarm ok; definition matches. Test-Elasticsearch-iotfleet-ClusterStatus.red-Alarm ClusterStatus.red
iotfleet Alarm ok; definition matches. Test-Elasticsearch-iotfleet-CPUUtilization-Alarm CPUUtilization
iotfleet Alarm ok; definition matches. Test-Elasticsearch-iotfleet-JVMMemoryPressure-Alarm JVMMemoryPressure
iotfleet WARNING: Missing alarm!! ('ClusterIndexWritesBlocked', 'Maximum', 60, 5, 'GreaterThanOrEqualToThreshold', 1.0)
iotfleet Alarm ok; definition matches. Test-Elasticsearch-iotfleet-AutomatedSnapshotFailure-Alarm AutomatedSnapshotFailure
iotfleet Alarm: Threshold does not match: Test-Elasticsearch-iotfleet-FreeStorageSpace-Alarm Should be: 819.2 ; is 3000.0
The output messages fall into the following categories:
System overview, Informational: The Amazon ES version and configuration, including instance type and number, storage, automated snapshot hour, etc.
Free storage: A calculation for the appropriate amount of free storage, based on the recommended 20% of total storage.
Warnings: best practices that are not being followed for this domain. (For more about this, read on.)
Alarms: An assessment of the CloudWatch alarms currently set for this domain, against a recommended set.
The script contains an array of recommended CloudWatch alarms, based on best practices for these metrics and statistics. Using the array allows alarm parameters (such as free space) to be updated within the code based on current domain statistics and configurations.
For a given domain, the script checks if each alarm has been set. If the alarm is set, it checks whether the values match those in the array esAlarms. In the output above, you can see three different situations being reported:
Alarm ok; definition matches. The alarm set for the domain matches the settings in the array.
Alarm: Threshold does not match. An alarm exists, but the threshold value at which the alarm is triggered does not match.
WARNING: Missing alarm!! The recommended alarm is missing.
All in all, the list above shows that this domain does not have a configuration that adheres to best practices, nor does it have all the recommended alarms.
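To make the comparison concrete, here is a minimal sketch of the kind of check the script performs for one alarm, using boto3 and a hypothetical recommended definition for the FreeStorageSpace alarm (the alarm name and threshold mirror the output above; this is not the actual es-check-cwalarms.py code):
import boto3

cloudwatch = boto3.client("cloudwatch", region_name="us-east-1")

# Hypothetical recommended definition, in the same shape as the tuples shown in the output above.
recommended = {
    "MetricName": "FreeStorageSpace",
    "Statistic": "Minimum",
    "Period": 60,
    "EvaluationPeriods": 5,
    "ComparisonOperator": "LessThanOrEqualToThreshold",
    "Threshold": 819.2,
}
alarm_name = "Test-Elasticsearch-iotfleet-FreeStorageSpace-Alarm"

alarms = cloudwatch.describe_alarms(AlarmNames=[alarm_name])["MetricAlarms"]
if not alarms:
    print("WARNING: Missing alarm!!", recommended)
elif alarms[0]["Threshold"] != recommended["Threshold"]:
    print("Alarm: Threshold does not match:", alarm_name,
          "Should be:", recommended["Threshold"], "; is", alarms[0]["Threshold"])
else:
    print("Alarm ok; definition matches.", alarm_name)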
Setting up alarms
Now that you know that the domains in their current state are missing critical alarms, you can correct the situation.
To demonstrate the script, set up a new domain named “ver”, in us-west-2. Specify 1 node, and a 10-GB EBS disk. Also, create an SNS topic in us-west-2 with a name of “sendnotification”, which sends you an email.
Run the second script, es-create-cwalarms.py, from the command line. This script creates (or updates) the desired CloudWatch alarms for the specified Amazon ES domain, “ver”.
python es-create-cwalarms.py -r us-west-2 -e test -c ver -n "['arn:aws:sns:us-west-2:xxxxxxxxxx:sendnotification']"
EBS enabled: True type: gp2 size (GB): 10 No Iops 10240 total storage (MB)
Desired free storage set to (in MB): 2048.0
Creating Test-Elasticsearch-ver-ClusterStatus.yellow-Alarm
Creating Test-Elasticsearch-ver-ClusterStatus.red-Alarm
Creating Test-Elasticsearch-ver-CPUUtilization-Alarm
Creating Test-Elasticsearch-ver-JVMMemoryPressure-Alarm
Creating Test-Elasticsearch-ver-FreeStorageSpace-Alarm
Creating Test-Elasticsearch-ver-ClusterIndexWritesBlocked-Alarm
Creating Test-Elasticsearch-ver-AutomatedSnapshotFailure-Alarm
Successfully finished creating alarms!
As with the first script, this script contains an array of recommended CloudWatch alarms, based on best practices for these metrics and statistics. This approach allows you to add or modify alarms based on your use case (more on that below).
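Under the hood, creating each alarm comes down to a CloudWatch put_metric_alarm call. The following is a minimal sketch for a single FreeStorageSpace alarm on the “ver” domain, not the actual es-create-cwalarms.py code; the account ID and SNS topic ARN are placeholders, and the threshold follows the 20% free-storage guideline shown in the output above:
import boto3

cloudwatch = boto3.client("cloudwatch", region_name="us-west-2")

cloudwatch.put_metric_alarm(
    AlarmName="Test-Elasticsearch-ver-FreeStorageSpace-Alarm",
    AlarmDescription="Alarm when free storage drops below 20% of total storage",
    Namespace="AWS/ES",
    MetricName="FreeStorageSpace",
    # Amazon ES metrics carry both the domain name and the owning account ID as dimensions.
    Dimensions=[
        {"Name": "DomainName", "Value": "ver"},
        {"Name": "ClientId", "Value": "123456789012"},  # placeholder account ID
    ],
    Statistic="Minimum",
    Period=60,
    EvaluationPeriods=5,
    ComparisonOperator="LessThanOrEqualToThreshold",
    Threshold=2048.0,  # 20% of the 10-GB (10240 MB) volume, as in the output above
    AlarmActions=["arn:aws:sns:us-west-2:123456789012:sendnotification"],  # placeholder topic ARN
)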
After running the script, navigate to Alarms on the CloudWatch console. You can see the set of alarms set up on your domain.
Because the “ver” domain has only a single node, cluster status is yellow, and that alarm is in an “ALARM” state. It’s already sent a notification that the alarm has been triggered.
In most cases, the alarm triggers due to an increased workload. The likely action is to reconfigure the system to handle the increased workload, rather than reducing the incoming workload. Reconfiguring any backend store—a category of systems that includes Elasticsearch—is best performed when the system is quiescent or lightly loaded. Reconfigurations such as setting zone awareness or modifying the disk type cause Amazon ES to enter a “processing” state, potentially disrupting client access.
Other changes, such as increasing the number of data nodes, may cause Elasticsearch to begin moving shards, potentially impacting search performance on these shards while this is happening. These actions should be considered in the context of your production usage. For the same reason I also do not recommend running a script that resets all domains to match best practices.
Avoid the need to reconfigure during heavy workload by setting alarms at a level that allows a considered approach to making the needed changes. For example, if you identify that each weekly peak is increasing, you can reconfigure during a weekly quiet period.
While Elasticsearch can be reconfigured without being quiesced, it is not a best practice to automatically scale it up and down based on usage patterns. Unlike some other AWS services, I recommend against setting a CloudWatch action that automatically reconfigures the system when alarms are triggered.
There are other situations where the planned reconfiguration approach may not work, such as low or zero free disk space causing the domain to reject writes. If the business is dependent on the domain continuing to accept incoming writes and deleting data is not an option, the team may choose to reconfigure immediately.
Extensions and adaptations
You may wish to modify the best practices encoded in the scripts for your own environment or workloads. It’s always better to avoid situations where alerts are generated but routinely ignored. All alerts should trigger a review and one or more actions, either immediately or at a planned date. The following is a list of common situations where you may wish to set different alarms for different domains:
Dev/test vs. production You may have a different set of configuration rules and alarms for your dev and test environments than for production. For example, you may require zone awareness and dedicated masters for your production environment, but not for your development domains. Or, you may not have any alarms set in dev. For test environments that mirror your potential peak load, test to ensure that the alarms are appropriately triggered.
Differing workloads or SLAs for different domains You may have one domain with a requirement for superfast search performance, and another domain with a heavy ingest load that tolerates slower search response. Your reaction to slow response for these two workloads is likely to be different, so perhaps the thresholds for these two domains should be set at a different level. In this case, you might add a “max CPU utilization” alarm at 100% for 1 minute for the fast search domain, while the other domain only triggers an alarm when the average has been higher than 60% for 5 minutes. You might also add a “free space” rule with a higher threshold to reflect the need for more space for the heavy ingest load if there is danger that it could fill the available disk quickly.
“Normal” alarms versus “emergency” alarms If, for example, free disk space drops to 25% of total capacity, an alarm is triggered that indicates action should be taken as soon as possible, such as cleaning up old indexes or reconfiguring at the next quiet period for this domain. However, if free space drops below a critical level (20% free space), action must be taken immediately in order to prevent Amazon ES from setting the domain to read-only. Similarly, if the “ClusterIndexWritesBlocked” alarm triggers, the domain has already stopped accepting writes, so immediate action is needed. In this case, you may wish to set “laddered” alarms, where one threshold causes an alarm to be triggered to review the current workload for a planned reconfiguration, but a different threshold raises a “DefCon 3” alarm that immediate action is required.
The sample scripts provided here are a starting point, intended for you to adapt to your own environment and needs.
Running the scripts one time can identify how far your current state is from your desired state, and create an initial set of alarms. Regularly re-running them can capture changes in your environment and configurations over time and adjust your alarms accordingly. One customer has set them up to run nightly, and to automatically create and update alarms to match their preferred settings.
Removing unwanted alarms
Each CloudWatch alarm costs approximately $0.10 per month. You can remove unwanted alarms in the CloudWatch console, under Alarms. If you set up a “ver” domain above, remember to remove it to avoid continuing charges.
Conclusion
Setting CloudWatch alarms appropriately for your Amazon ES domains can help you avoid suboptimal performance and allow you to respond to workload growth or configuration issues well before they become urgent. This post gives you a starting point for doing so. The additional sleep you’ll get knowing you don’t need to be concerned about Elasticsearch domain performance will allow you to focus on building creative solutions for your business and solving problems for your customers.
Dr. Veronika Megler is a senior consultant at Amazon Web Services. She works with our customers to implement innovative big data, AI and ML projects, helping them accelerate their time-to-value when using AWS.
Amazon EMR enables data analysts and scientists to deploy a cluster of any size, running popular frameworks such as Spark, HBase, Presto, and Flink, in minutes. When you launch a cluster, Amazon EMR automatically configures the underlying Amazon EC2 instances with the frameworks and applications that you choose for your cluster. This can include popular web interfaces such as the Hue workbench, Zeppelin notebooks, and Ganglia monitoring dashboards and tools.
These web interfaces are hosted on the EMR master node and must be accessed using the public DNS name of the master node (master public DNS value). The master public DNS value is dynamically created, is not very user friendly, and is hard to remember; it looks something like ip-###-###-###-###.us-west-2.compute.internal. Not having a friendly URL for the popular workbench or notebook interfaces interrupts the workflow and undercuts the agility these tools are meant to provide.
Some customers have addressed this challenge through custom bootstrap actions, steps, or external scripts that periodically check for new clusters and register a friendlier name in DNS. These approaches either put additional burden on the data practitioners or require additional resources to execute the scripts. In addition, there is typically some lag time associated with such scripts. They often don’t do a great job cleaning up the DNS records after the cluster has terminated, potentially resulting in a security risk.
The solution in this post provides an automated, serverless approach to registering a friendly master node name for easy access to the web interfaces.
Before I dive deeper, I review these key services and how they are part of this solution.
CloudWatch Events
CloudWatch Events delivers a near real-time stream of system events that describe changes in AWS resources. Using simple rules, you can match events and route them to one or more target functions or streams. An event can be generated in one of three ways:
From an AWS service when resources change state
From API calls that are delivered via AWS CloudTrail
From your own code that can generate application-level events
In this solution, I cover the first type of event, which EMR emits automatically when the cluster state changes. Based on the state in the event detail, the solution either creates or updates the DNS record in Route 53 when the cluster state changes to STARTING, or deletes the DNS record when the cluster is no longer needed and the state changes to TERMINATED. For more information about all EMR event details, see Monitor CloudWatch Events.
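For reference, the event pattern that matches these EMR state changes can be expressed in a few lines of boto3; the rule name below is a placeholder, and the AWS SAM template described later creates an equivalent rule for you:
import json
import boto3

events = boto3.client("events", region_name="us-east-1")

# Match only the EMR cluster state changes this solution acts on.
pattern = {
    "source": ["aws.emr"],
    "detail-type": ["EMR Cluster State Change"],
    "detail": {"state": ["STARTING", "TERMINATED"]},
}

events.put_rule(
    Name="emr-cluster-state-change",  # placeholder rule name
    EventPattern=json.dumps(pattern),
    State="ENABLED",
    Description="Route EMR cluster state changes to the DNS-updating Lambda function",
)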
Route 53 private hosted zones
A private hosted zone is a container that holds information about how to route traffic for a domain and its subdomains within one or more VPCs. Private hosted zones enable you to use custom DNS names for your internal resources without exposing the names or IP addresses to the internet.
Route 53 supports resource record sets with a wide range of record types. In this solution, you use a CNAME record, which specifies a domain name as an alias for another (the “canonical”) domain. You use the friendly name of the cluster as the CNAME that points to the EMR master public DNS value.
You are using private hosted zones because an EMR cluster is typically deployed within a private subnet and is accessed either from within the VPC or from on-premises resources over VPN or AWS Direct Connect. To resolve domain names in private hosted zones from your on-premises network, configure a DNS forwarder, as described in How can I resolve Route 53 private hosted zones from an on-premises network via an Ubuntu instance?.
Lambda
Lambda is a compute service that lets you run code without provisioning or managing servers. Lambda executes your code only when needed and scales automatically to thousands of requests per second. Lambda takes care of high availability, and server and OS maintenance and patching. You pay only for the consumed compute time. There is no charge when your code is not running.
Lambda provides the ability to invoke your code in response to events, such as when an object is put to an Amazon S3 bucket or as in this case, when a CloudWatch event is emitted. As part of this solution, you deploy a Lambda function as a target that is invoked by CloudWatch Events when the event matches your rule. You also configure the necessary permissions based on the Lambda permissions model, including a Lambda function policy and Lambda execution role.
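If you were wiring this up by hand rather than letting the SAM template do it, the target and the resource-based permission might look roughly like the following sketch; the rule and function names are placeholders carried over from the previous sketch:
import boto3

events = boto3.client("events", region_name="us-east-1")
lambda_client = boto3.client("lambda", region_name="us-east-1")

rule_name = "emr-cluster-state-change"  # placeholder rule name
function_name = "emr-dns-setter"        # placeholder function name

rule_arn = events.describe_rule(Name=rule_name)["Arn"]
function_arn = lambda_client.get_function(FunctionName=function_name)["Configuration"]["FunctionArn"]

# Point the CloudWatch Events rule at the Lambda function.
events.put_targets(
    Rule=rule_name,
    Targets=[{"Id": "emr-dns-setter-target", "Arn": function_arn}],
)

# Allow CloudWatch Events to invoke the function (the Lambda function policy).
lambda_client.add_permission(
    FunctionName=function_name,
    StatementId="AllowCloudWatchEventsInvoke",
    Action="lambda:InvokeFunction",
    Principal="events.amazonaws.com",
    SourceArn=rule_arn,
)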
Putting it all together
Now that you have all of the pieces, you can put together a complete solution. The following diagram illustrates how the solution works:
Start with a user activity such as launching or terminating an EMR cluster.
EMR automatically sends events to the CloudWatch Events stream.
A CloudWatch Events rule matches the specified event and routes it to a target, which in this case is a Lambda function. In this solution, you use the EMR Cluster State Change event.
The Lambda function performs the following key steps:
Get the clusterId value from the event detail and use it to call the EMR DescribeCluster API to retrieve the following data points:
MasterPublicDnsName – public DNS name of the master node
Locate the tag containing the friendly name to use as the CNAME for the cluster. The key of this tag should be cluster_name, and the value should be specified as host.domain.com, where domain is the private hosted zone in which to update the DNS record.
Update DNS based on the state in the event detail.
If the state is STARTING, the function calls the Route 53 API to create or update a resource record set in the private hosted zone specified by the domain tag. This is a CNAME record mapped to MasterPublicDnsName.
Conversely, if the state is TERMINATED, the function calls the Route 53 API to delete the associated resource record set from the private hosted zone.
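Put together, a minimal sketch of such a handler might look like the following. It is not the exact function deployed by the template: the hosted zone ID is a placeholder, the zone lookup from the domain portion of the tag value is simplified away, and error handling is omitted.
import boto3

emr = boto3.client("emr")
route53 = boto3.client("route53")

HOSTED_ZONE_ID = "Z1234567890ABC"  # placeholder private hosted zone ID


def handler(event, context):
    detail = event["detail"]
    state = detail["state"]
    cluster_id = detail["clusterId"]

    cluster = emr.describe_cluster(ClusterId=cluster_id)["Cluster"]
    master_dns = cluster.get("MasterPublicDnsName")
    tags = {tag["Key"]: tag["Value"] for tag in cluster.get("Tags", [])}
    friendly_name = tags.get("cluster_name")  # e.g., host.domain.com

    if not friendly_name or not master_dns:
        return  # nothing to do without a friendly name and a master DNS value

    if state == "STARTING":
        action = "UPSERT"
    elif state == "TERMINATED":
        action = "DELETE"
    else:
        return

    # Create, update, or delete the CNAME that maps the friendly name to the master node.
    route53.change_resource_record_sets(
        HostedZoneId=HOSTED_ZONE_ID,
        ChangeBatch={
            "Changes": [{
                "Action": action,
                "ResourceRecordSet": {
                    "Name": friendly_name,
                    "Type": "CNAME",
                    "TTL": 300,
                    "ResourceRecords": [{"Value": master_dns}],
                },
            }]
        },
    )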
Deploying the solution
Because all of the components of this solution are serverless, use the AWS Serverless Application Model (AWS SAM) template to deploy the solution. AWS SAM is natively supported by AWS CloudFormation and provides a simplified syntax for expressing serverless resources, resulting in fewer lines of code.
Overview of the SAM template
For this solution, the SAM template has 76 lines of text, compared to 142 lines without SAM resources (writing the template in YAML would make it slightly smaller still). The solution can be deployed using the AWS Management Console, AWS Command Line Interface (AWS CLI), or AWS SAM Local.
CloudFormation transforms help simplify template authoring by condensing a multiple-line resource declaration into a single line in your template. To inform CloudFormation that your template defines a serverless application, add a line under the template format version as follows:
Transform: 'AWS::Serverless-2016-10-31'
Before SAM, you would use the AWS::Lambda::Function resource type to define your Lambda function. You would then need a resource to define the permissions for the function (AWS::Lambda::Permission), another resource to define a Lambda execution role (AWS::IAM::Role), and finally a CloudWatch Events resource (Events::Rule) that triggers this function.
With SAM, you need to define just a single resource for your function, AWS::Serverless::Function. Using this single resource type, you can define everything that you need, including function properties such as function handler, runtime, and code URI, as well as the required IAM policies and the CloudWatch event.
A few additional things to note in the code example:
CodeUri – Before you can deploy a SAM template, first upload your Lambda function code zip to S3. You can do this manually or use the aws cloudformation package CLI command to automate the task of uploading local artifacts to an S3 bucket, as shown later.
Lambda execution role and permissions – You are not specifying a Lambda execution role in the template. Rather, you are providing the required permissions as IAM policy documents. When the template is submitted, CloudFormation expands the AWS::Serverless::Function resource, declaring a Lambda function and an execution role. The created role has two attached policies: a default AWSLambdaBasicExecutionRole and the inline policy specified in the template.
CloudWatch Events rule – Instead of specifying a CloudWatch Events resource type, you are defining an event source object as a property of the function itself. When the template is submitted, CloudFormation expands this into a CloudWatch Events rule resource and automatically creates the Lambda resource-based permissions to allow the CloudWatch Events rule to trigger the function.
NOTE: If you are trying this solution outside of us-east-1, then you should download the necessary files, upload them to the buckets in your region, edit the script as appropriate and then run it or use the CLI deployment method below.
3.) Choose Next.
4.) On the Specify Details page, keep or modify the stack name and choose Next.
5.) On the Options page, choose Next.
6.) On the Review page, take the following steps:
Acknowledge the two Transform access capabilities. This allows the CloudFormation transform to create the required IAM resources with custom names.
Under Transforms, choose Create Change Set.
Wait a few seconds for the change set to be created before proceeding. The change set should look as follows:
7.) Choose Execute to deploy the template.
After the template is deployed, you should see four resources created:
After the package is successfully uploaded, the output should look as follows:
Uploading to 0f6d12c7872b50b37dbfd5a60385b854 1872 / 1872.0 (100.00%)
Successfully packaged artifacts and wrote output template to file serverless-output.template.
The CodeUri property in serverless-output.template is now referencing the packaged artifacts in the S3 bucket that you specified:
s3://<bucket>/0f6d12c7872b50b37dbfd5a60385b854
Use the aws cloudformation deploy CLI command to deploy the stack:
You should see the following output after the stack has been successfully created:
Waiting for changeset to be created...
Waiting for stack create/update to complete
Successfully created/updated stack – EmrDnsSetterCli
Validating results
To test the solution, launch an EMR cluster. The Lambda function looks for the cluster_name tag associated with the EMR cluster. Make sure to specify the friendly name of your cluster as host.domain.com where the domain is the private hosted zone in which to create the CNAME record.
Here is a sample CLI command to launch a cluster within a specific subnet in a VPC with the required tag cluster_name.
After the cluster is launched, log in to the Route 53 console. In the left navigation pane, choose Hosted Zones to view the list of private and public zones currently configured in Route 53. Select the hosted zone that you specified in the ZONE tag when you launched the cluster. Verify that the resource records were created.
You can also monitor the CloudWatch Events metrics that are published to CloudWatch every minute, such as the number of TriggeredRules and Invocations.
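If you prefer to pull those metrics programmatically rather than through the console, a small boto3 sketch like the following sums the rule’s invocations over the last hour; the rule name is a placeholder:
import boto3
from datetime import datetime, timedelta

cloudwatch = boto3.client("cloudwatch", region_name="us-east-1")

stats = cloudwatch.get_metric_statistics(
    Namespace="AWS/Events",
    MetricName="Invocations",
    Dimensions=[{"Name": "RuleName", "Value": "emr-cluster-state-change"}],  # placeholder rule name
    StartTime=datetime.utcnow() - timedelta(hours=1),
    EndTime=datetime.utcnow(),
    Period=300,
    Statistics=["Sum"],
)
for point in sorted(stats["Datapoints"], key=lambda p: p["Timestamp"]):
    print(point["Timestamp"], point["Sum"])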
Now that you’ve verified that the Lambda function successfully updated the Route 53 resource records in the zone file, terminate the EMR cluster and verify that the records are removed by the same function.
Conclusion
This solution provides a serverless approach to automatically assigning a friendly name for your EMR cluster for easy access to popular notebooks and other web interfaces. CloudWatch Events also supports cross-account event delivery, so if you are running EMR clusters in multiple AWS accounts, all cluster state events across accounts can be consolidated into a single account.
I hope that this solution provides a small glimpse into the power of CloudWatch Events and Lambda and how they can be leveraged with EMR and other AWS big data services. For example, by using the EMR step state change event, you can chain various pieces of your analytics pipeline. You may have a transient cluster perform data ingest and, when the task successfully completes, spin up an ETL cluster for transformation and upload to Amazon Redshift. The possibilities are truly endless.
Kaspersky Labs is reporting on a new piece of sophisticated malware:
We observed many web landing pages that mimic the sites of mobile operators and which are used to spread the Android implants. These domains have been registered by the attackers since 2015. According to our telemetry, that was the year the distribution campaign was at its most active. The activities continue: the most recently observed domain was registered on October 31, 2017. Based on our KSN statistics, there are several infected individuals, exclusively in Italy.
Moreover, as we dived deeper into the investigation, we discovered several spyware tools for Windows that form an implant for exfiltrating sensitive data on a targeted machine. The version we found was built at the beginning of 2017, and at the moment we are not sure whether this implant has been used in the wild.
It seems to be Italian. Ars Technica speculates that it is related to Hacking Team:
That’s not to say the malware is perfect. The various versions examined by Kaspersky Lab contained several artifacts that provide valuable clues about the people who may have developed and maintained the code. Traces include the domain name h3g.co, which was registered by Italian IT firm Negg International. Negg officials didn’t respond to an email requesting comment for this post. The malware may be filling a void left after the epic hack in 2015 of Hacking Team, another Italy-based developer of spyware.
The Linux kernel’s generic power domain (genpd) subsystem has been extended to support active state management of the power domains in the 4.15 development cycle. Power domains were traditionally used to enable or disable power to a region of a system on chip (SoC) but, with the recent updates, they can control the clock rate or amount of power supplied to that region as well. These changes improve the kernel’s ability to run the system’s hardware at the optimal power level for the current workload.
The full article, contributed by Viresh Kumar, is available to subscribers.
Many enterprises use Microsoft Active Directory to manage users, groups, and computers in a network. A frequently asked question is: how can Active Directory users access big data workloads running on Amazon EMR with the same single sign-on (SSO) experience they have when accessing resources in the Active Directory network?
This post walks you through the process of using AWS CloudFormation to set up a cross-realm trust and extend authentication from an Active Directory network into an Amazon EMR cluster with Kerberos enabled. By establishing a cross-realm trust, Active Directory users can use their Active Directory credentials to access an Amazon EMR cluster and run jobs as themselves.
Walkthrough overview
In this example, you build a solution that allows Active Directory users to seamlessly access Amazon EMR clusters and run big data jobs. Here’s what you need before setting up this solution:
A possible limit increase for your account (Note: Usually a limit increase will not be necessary. See the AWS Service Limits documentation if you encounter a limit error while building the solution.)
To make it easier for you to get started, I created AWS CloudFormation templates that automatically configure and deploy the solution for you. The following steps and resources are involved in setting up the solution:
Note: If you want to manually create and configure the components for this solution without using AWS CloudFormation, refer to the Amazon EMR cross-realm documentation. IMPORTANT: The AWS CloudFormation templates used in this post are designed to work only in the us-east-1 (N. Virginia) Region. They are not intended for production use without modification.
Single-step solution deployment
If you don’t want to set up each component individually, you can use the single-step AWS CloudFormation template. The single-step template is a master template that uses nested stacks (additional templates) to launch and configure all the resources for the solution in one go.
To deploy the single-step template into your account, choose Launch Stack:
This takes you to the Create stack wizard in the AWS CloudFormation console. The template is launched in the US East (N. Virginia) Region by default. Do not change to a different Region because the template is designed to work only in us-east-1 (N. Virginia).
On the Select Template page, keep the default URL for the AWS CloudFormation template, and then choose Next.
On the Specify Details page, review the parameters for the template. Provide values for the parameters that require input (for more information, see the parameters table that follows).
The following parameters are available in this template.
Parameter | Default | Description
Domain Controller name | DC1 | NetBIOS (hostname) name of the Active Directory server. This name can be up to 15 characters long.
Active Directory domain | example.com | Fully qualified domain name (FQDN) of the forest root domain (for example, example.com).
Domain NetBIOS name | EXAMPLE | NetBIOS name of the domain for users of earlier versions of Windows. This name can be up to 15 characters long.
Domain admin user | CrossRealmAdmin | User name for the account that is added as domain administrator. This account is separate from the default administrator account.
Domain admin password | Requires input | Password for the domain admin user. Must be at least eight characters including letters, numbers, and symbols.
Key pair name | Requires input | Name of an existing key pair, which enables you to connect securely to your instance after it launches.
Instance type | m4.xlarge | Instance type for the domain controller and the Amazon EMR cluster.
Allowed IP address | 10.0.0.0/16 | The client IP address range that can reach your cluster. Specify an IP address range in CIDR notation (for example, 203.0.113.5/32). By default, only the VPC CIDR (10.0.0.0/16) can reach the cluster. Be sure to add your client IP range so that you can connect to the cluster using SSH.
EMR Kerberos realm | EC2.INTERNAL | Cluster’s Kerberos realm name. By default, the realm name is derived from the cluster’s VPC domain name in uppercase letters (for example, EC2.INTERNAL is the default VPC domain name in the us-east-1 Region).
Trusted AD domain | EXAMPLE.COM | The Active Directory (AD) domain that you want to trust. This is the same as the “Active Directory domain.” However, it must use all uppercase letters (for example, EXAMPLE.COM).
Cross-realm trust password | Requires input | Password that you want to use for your cross-realm trust.
Instance count | 2 | The number of instances (core nodes) for the cluster.
EMR applications | Hadoop, Spark, Ganglia, Hive | Comma separated list of applications to install on the cluster.
After you specify the template details, choose Next. On the Options page, choose Next again. On the Review page, select the I acknowledge that AWS CloudFormation might create IAM resources with custom names check box, and then choose Create.
It takes approximately 45 minutes for the deployment to complete. When the stack launch is complete, it will return outputs with information about the resources that were created. Note the outputs and skip to the Managing and testing the solution section. You can view the stack outputs on the AWS Management Console or by using the following AWS CLI command:
This section describes how to use AWS CloudFormation templates to perform each step separately in the solution.
Create and configure an Amazon VPC
In order for you to establish a cross-realm trust between an Amazon EMR Kerberos realm and an Active Directory domain, your Amazon VPC must meet the following requirements:
The subnet used for the Amazon EMR cluster must have a CIDR block of fewer than nine digits (for example, 10.0.1.0/24).
Both DNS resolution and DNS hostnames must be enabled (set to “yes”).
The Active Directory domain controller must be the DNS server for instances in the Amazon VPC (this is configured in the next step).
To use the AWS CloudFormation template to create and configure an Amazon VPC with the prerequisites listed previously, choose Launch Stack:
Note: If you want to create the VPC manually (without using AWS CloudFormation), see Set Up the VPC and Subnet in the Amazon EMR documentation.
Launching this stack creates the following AWS resources:
Amazon VPC with CIDR block 10.0.0.0/16 (Name: CrossRealmVPC)
Internet Gateway (Name: CrossRealmGateway)
Public subnet with CIDR block 10.0.1.0/24 (Name: CrossRealmSubnet)
Security group allowing inbound access from the VPC’s subnets (Name tag: CrossRealmSecurityGroup)
When the stack launch is complete, it should return outputs similar to the following.
Key | Value example | Description
SubnetID | subnet-xxxxxxxx | The subnet for the Active Directory domain controller and the EMR cluster.
SecurityGroup | sg-xxxxxxxx | The security group for the Active Directory domain controller.
VPCID | vpc-xxxxxxxx | The Active Directory domain controller and EMR cluster will be launched on this VPC.
Note the outputs because they are used in the next step. You can view the stack outputs on the AWS Management Console or by using the following AWS CLI command:
Launch and configure an Active Directory domain controller
In this step, you use an AWS CloudFormation template to automatically launch and configure a new Active Directory domain controller and cross-realm trust.
Note: There are various ways to install and configure an Active Directory domain controller. For details on manually launching and installing a domain controller without AWS CloudFormation, see Step 2: Launch and Install the AD Domain Controller in the Amazon EMR documentation.
In addition to launching and configuring an Active Directory domain controller and cross-realm trust, this AWS CloudFormation template also sets the domain controller as the DNS server (name server) for your Amazon VPC. In other words, the template creates a new DHCP option set for the VPC in which it is deployed, and sets the private IP address of the domain controller as the name server for that option set.
IMPORTANT: You should not use this template on a production VPC with existing resources like Amazon EC2 instances. When you launch this stack, make sure that you use the new environment and resources (Amazon VPC, subnet, and security group) that were created in the Create and configure an Amazon VPC step.
To launch this stack, choose Launch Stack:
The following table contains information about the parameters available in this template. Review the parameters for the template and provide values for those that require input.
Parameter | Default | Description
VPC ID | Requires input | Launch the domain controller on this VPC (for example, use the VPC created in the Create and configure an Amazon VPC step).
Subnet ID | Requires input | Subnet used for the domain controller (for example, use the subnet created in the Create and configure an Amazon VPC step).
Security group ID | Requires input | Security group (SG) for the domain controller (for example, use the SG created in the Create and configure an Amazon VPC step).
Domain Controller name | DC1 | NetBIOS name of the Active Directory server (up to 15 characters).
Active Directory domain | example.com | Fully qualified domain name (FQDN) of the forest root domain (for example, example.com).
Domain NetBIOS name | EXAMPLE | NetBIOS name of the domain for users of earlier versions of Windows. This name can be up to 15 characters long.
Domain admin user | CrossRealmAdmin | User name for the account that is added as domain administrator. This account is separate from the default administrator account.
Domain admin password | Requires input | Password for the domain admin user. Must be at least eight characters including letters, numbers, and symbols.
Key pair name | Requires input | Name of an existing EC2 key pair to enable access to the domain controller instance.
Instance type | m4.xlarge | Instance type for the domain controller.
EMR Kerberos realm | EC2.INTERNAL | Cluster’s Kerberos realm name. By default, the realm name is derived from the cluster’s VPC domain name in uppercase letters (for example, EC2.INTERNAL is the default VPC domain name in the us-east-1 Region).
Cross-realm trust password | Requires input | Password that you want to use for your cross-realm trust.
It takes 25–30 minutes for this stack to be created. When it’s complete, note the stack’s outputs, and then move to the next step: Launch an EMR cluster with Kerberos enabled.
Create a security configuration and launch an Amazon EMR cluster with Kerberos enabled
To launch a kerberized Amazon EMR cluster, you first need to create a security configuration containing the cross-realm trust configuration. You then specify cluster-specific Kerberos attributes when launching the cluster.
In this step, you use AWS CloudFormation to launch and configure a kerberized Amazon EMR cluster with a cross-realm trust. If you want to manually launch and configure a cluster with Kerberos enabled, see Step 6: Launch a Kerberized EMR Cluster in the Amazon EMR documentation.
Note: At the time of this writing, AWS CloudFormation does not yet support launching Amazon EMR clusters with Kerberos authentication enabled. To overcome this limitation, I created a template that uses an AWS Lambda-backed custom resource to launch and configure the Amazon EMR cluster with Kerberos enabled. If you use this template, there’s nothing else that you need to do. Just keep in mind that the template creates and invokes an AWS Lambda function (custom resource) to launch the cluster.
To create a cross-realm trust security configuration and launch a kerberized Amazon EMR cluster using AWS CloudFormation, choose Launch Stack:
The following table lists and describes the template parameters for deploying a kerberized Amazon EMR cluster and configuring a cross-realm trust.
Parameter | Default | Description
Active Directory domain | example.com | The Active Directory domain that you want to establish the cross-realm trust with.
Domain admin user (joiner user) | CrossRealmAdmin | The user name of an Active Directory domain user with privileges to join domains/computers to the Active Directory domain (joiner user).
Domain admin password | Requires input | Password of the joiner user.
Cross-realm trust password | Requires input | Password of your cross-realm trust.
EC2 key pair name | Requires input | Name of an existing key pair, which enables you to connect securely to your cluster after it launches.
Subnet ID | Requires input | Subnet that you want to use for your Amazon EMR cluster (for example, choose the subnet created in the Create and configure an Amazon VPC step).
Security group ID | Requires input | Security group that you want to use for your Amazon EMR cluster (for example, choose the security group created in the Create and configure an Amazon VPC step).
Instance type | m4.xlarge | The instance type that you want to use for the cluster nodes.
Instance count | 2 | The number of instances (core nodes) for the cluster.
Allowed IP address | 10.0.0.0/16 | The client IP address range that can reach your cluster. Specify an IP address range in CIDR notation (for example, 203.0.113.5/32). By default, only the VPC CIDR (10.0.0.0/16) can reach the cluster. Be sure to add your client IP range so that you can connect to the cluster using SSH.
EMR applications | Hadoop, Spark, Ganglia, Hive | Comma separated list of the applications that you want installed on the cluster.
EMR Kerberos realm | EC2.INTERNAL | Cluster’s Kerberos realm name. By default, the realm name is derived from the cluster’s VPC domain name in uppercase letters (for example, EC2.INTERNAL is the default VPC domain name in the us-east-1 Region).
Trusted AD domain | EXAMPLE.COM | The Active Directory domain that you want to trust. This name is the same as the “Active Directory domain.” However, it must use all uppercase letters (for example, EXAMPLE.COM).
It takes 10–15 minutes for this stack to be created. When it’s complete, note the stack’s outputs, and then move to the next section: Managing and testing the solution.
Managing and testing the solution
Now that you’ve configured and built the solution, it’s time to test it by connecting to a cluster using Active Directory credentials.
SSH to a cluster using Active Directory credentials (single sign-on)
After you launch a kerberized Amazon EMR cluster, if you used the AWS CloudFormation templates and added your client IP range to the Allowed IP address parameter, you should be able to connect to the cluster using an SSH client and your Active Directory user credentials. If you have trouble connecting to the cluster using SSH, check the cluster’s security group to make sure that it allows inbound SSH connection (TCP port 22) from your client’s IP address (source).
The following steps assume that you’re using a client such as OpenSSH. If you’re using a different SSH application (for example, PuTTY), consult the application-specific documentation.
Note: Because the cluster was launched with a cross-realm trust configuration, you don’t need to use a private key (.pem file) when you connect to it as a domain user using SSH.
To connect to your Amazon EMR cluster as an Active Directory user using SSH, run the following command. Replace ad_user with the domain admin user that you created while setting up the domain controller and replace master_node_URL with the cluster’s URL (see the stack’s outputs to find this information):
$ ssh -l <ad_user> <master_node_URL>
If your SSH client is configured to use a key as the preferred authentication method, the login might fail. If that’s the case, you can force the SSH connection to use password authentication by adding options such as -o PreferredAuthentications=password -o PubkeyAuthentication=no to your SSH command.
After a domain user connects to the cluster using SSH, if this is the first time that the user is connecting, a local home directory is created for that user. In addition to creating a local home directory, if you used the create-hdfs-home-ba.sh bootstrap action when launching the cluster (done by default if you used the AWS CloudFormation template to launch a kerberized cluster), an HDFS user home directory is also created automatically.
Note: If you manually launched the cluster and did not use the create-hdfs-home-ba.sh bootstrap action, then you’ll need to manually create HDFS user home directories for your users.
When you connect to the cluster using SSH for the first time (as a domain user), you should see the following messages if the HDFS home directory for your domain user was successfully created:
Running jobs on a kerberized Amazon EMR cluster
To run a job on a kerberized cluster, the user submitting the job must first be authenticated. If you followed the previous section to connect to your cluster as an Active Directory user using SSH, the user should be authenticated automatically.
If running the klist command returns a “No credentials cache found” message, it means that the user is not authenticated (the user doesn’t have a Kerberos ticket). You can re-authenticate a user at any time by running the following command (be sure to use all uppercase letters for the Active Directory domain):
$ kinit <username>@<AD_DOMAIN>
When the user is authenticated, they can submit jobs just like they would on a non-kerberized cluster.
Auditing jobs
Another advantage that Kerberos can provide is that you can easily tell which user ran a particular job. For example, connect (using SSH) to a kerberized cluster with an Active Directory user, and submit the SparkPi sample application:
$ spark-example SparkPi
After running the SparkPi application, go to the Amazon EMR console and choose your cluster. Then choose the Application history tab. There you can see information about the application, including the user that submitted the job:
Common issues
Although it would be hard to cover every possible Kerberos issue, this section covers some of the more common issues that might occur and ways to fix them.
Issue 1: You can successfully connect and get authenticated on a cluster. However, whenever you try running a job, it fails with an error similar to the following:
Solution: Make sure that an HDFS home directory for the user was created and that it has the right permissions.
Issue 2: You can successfully connect to the cluster, but you can’t run any Hadoop or HDFS commands.
Solution: Use the klist command to confirm whether the user is authenticated and has a valid Kerberos ticket. Use the kinit command to re-authenticate a user.
Issue 3: You can’t connect (using SSH) to the cluster using Active Directory user credentials, but you can manually authenticate the user with kinit.
Solution: Make sure that the Active Directory domain controller is the DNS server (name server) for the cluster nodes.
Cleaning up
After completing and testing this solution, remember to clean up the resources. If you used the AWS CloudFormation templates to create the resources, then use the AWS CloudFormation console or AWS CLI/SDK to delete the stacks. Deleting a stack also deletes the resources created by that stack.
If one of your stacks does not delete, make sure that there are no dependencies on the resources created by that stack. For example, if you deployed an Amazon VPC using AWS CloudFormation and then deployed a domain controller into that VPC using a different AWS CloudFormation stack, you must first delete the domain controller stack before the VPC stack can be deleted.
Summary
The ability to authenticate users and services with Kerberos not only allows you to secure your big data applications, but it also enables you to easily integrate Amazon EMR clusters with an Active Directory environment. This post showed how you can use Kerberos on Amazon EMR to create a single sign-on solution where Active Directory domain users can seamlessly access Amazon EMR clusters and run big data applications. We also showed how you can use AWS CloudFormation to automate the deployment of this solution.
We recently made some updates to AWS Training and Certification to make it easier for you to build your cloud skills and to learn about many of the new services that we launched at AWS re:Invent.
Free AWS Digital Training
You can now find over 100 new digital training classes at aws.training, all with unlimited access at no charge.
The courses were built by AWS experts and allow you to learn AWS at your own pace, helping you to build foundational knowledge for dozens of AWS services and solutions. You can also access some more advanced training on Machine Learning and Storage.
You can browse through the available topics, enroll in one that interests you, watch it, and track your progress by looking at your transcript:
AWS Certified Cloud Practitioner
Our newest certification exam, AWS Certified Cloud Practitioner, lets you validate your overall understanding of the AWS Cloud with an industry-recognized credential. It covers four domains: cloud concepts, security, technology, and billing and pricing. We recommend that you have at least six months of experience (or equivalent training) with the AWS Cloud in any role, including technical, managerial, sales, purchasing, or financial.
To help you prepare for this exam, take our new AWS Cloud Practitioner Essentials course, one of the new AWS digital training courses. This course will give you an overview of cloud concepts, AWS services, security, architecture, pricing, and support. In addition to helping you validate your overall understanding of the AWS Cloud, AWS Certified Cloud Practitioner also serves as a new prerequisite option for the Big Data Specialty and Advanced Networking Specialty certification exams.
Go For It!
I’d like to encourage you to check out aws.training and to enroll in our free digital training in order to learn more about AWS and our newest services. You can strengthen your skills, add to your knowledge base, and set a goal of earning your AWS Certified Cloud Practitioner certification in the new year.
Let’s Encrypt had a great year in 2017. We more than doubled the number of active (unexpired) certificates we service to 46 million, we just about tripled the number of unique domains we service to 61 million, and we did it all while maintaining a stellar security and compliance track record. Most importantly though, the Web went from 46% encrypted page loads to 67% according to statistics from Mozilla – a gain of 21 percentage points in a single year – incredible. We’re proud to have contributed to that, and we’d like to thank all of the other people and organizations who also worked hard to create a more secure and privacy-respecting Web.
While we’re proud of what we accomplished in 2017, we are spending most of the final quarter of the year looking forward rather than back. As we wrap up our own planning process for 2018, I’d like to share some of our plans with you, including both the things we’re excited about and the challenges we’ll face. We’ll cover service growth, new features, infrastructure, and finances.
Service Growth
We are planning to double the number of active certificates and unique domains we service in 2018, to 90 million and 120 million, respectively. This anticipated growth is due to continuing high expectations for HTTPS growth in general in 2018.
Let’s Encrypt helps to drive HTTPS adoption by offering a free, easy to use, and globally available option for obtaining the certificates required to enable HTTPS. HTTPS adoption on the Web took off at an unprecedented rate from the day Let’s Encrypt launched to the public.
One of the reasons Let’s Encrypt is so easy to use is that our community has done great work making client software that works well for a wide variety of platforms. We’d like to thank everyone involved in the development of over 60 client software options for Let’s Encrypt. We’re particularly excited that support for the ACME protocol and Let’s Encrypt is being added to the Apache httpd server.
Other organizations and communities are also doing great work to promote HTTPS adoption, and thus stimulate demand for our services. For example, browsers are starting to make their users more aware of the risks associated with unencrypted HTTP (e.g. Firefox, Chrome). Many hosting providers and CDNs are making it easier than ever for all of their customers to use HTTPS. Government agencies are waking up to the need for stronger security to protect constituents. The media community is working to Secure the News.
New Features
We’ve got some exciting features planned for 2018.
First, we’re planning to introduce an ACME v2 protocol API endpoint and support for wildcard certificates along with it. Wildcard certificates will be free and available globally just like our other certificates. We are planning to have a public test API endpoint up by January 4, and we’ve set a date for the full launch: Tuesday, February 27.
Later in 2018 we plan to introduce ECDSA root and intermediate certificates. ECDSA is generally considered to be the future of digital signature algorithms on the Web due to the fact that it is more efficient than RSA. Let’s Encrypt will currently sign ECDSA keys from subscribers, but we sign with the RSA key from one of our intermediate certificates. Once we have an ECDSA root and intermediates, our subscribers will be able to deploy certificate chains which are entirely ECDSA.
Infrastructure
Our CA infrastructure is capable of issuing millions of certificates per day with multiple redundancy for stability and a wide variety of security safeguards, both physical and logical. Our infrastructure also generates and signs nearly 20 million OCSP responses daily, and serves those responses nearly 2 billion times per day. We expect issuance and OCSP numbers to double in 2018.
Our physical CA infrastructure currently occupies approximately 70 units of rack space, split between two datacenters, consisting primarily of compute servers, storage, HSMs, switches, and firewalls.
When we issue more certificates it puts the most stress on storage for our databases. We regularly invest in more and faster storage for our database servers, and that will continue in 2018.
We’ll need to add a few additional compute servers in 2018, and we’ll also start aging out hardware in 2018 for the first time since we launched. We’ll age out about ten 2u compute servers and replace them with new 1u servers, which will save space and be more energy efficient while providing better reliability and performance.
We’ll also add another infrastructure operations staff member, bringing that team to a total of six people. This is necessary in order to make sure we can keep up with demand while maintaining a high standard for security and compliance. Infrastructure operations staff are systems administrators responsible for building and maintaining all physical and logical CA infrastructure. The team also manages a 24/7/365 on-call schedule and they are primary participants in both security and compliance audits.
Finances
We pride ourselves on being an efficient organization. In 2018 Let’s Encrypt will secure a large portion of the Web with a budget of only $3.0M. For an overall increase in our budget of only 13%, we will be able to issue and service twice as many certificates as we did in 2017. We believe this represents an incredible value and that contributing to Let’s Encrypt is one of the most effective ways to help create a more secure and privacy-respecting Web.
Our 2018 fundraising efforts are off to a strong start with Platinum sponsorships from Mozilla, Akamai, OVH, Cisco, Google Chrome and the Electronic Frontier Foundation. The Ford Foundation has renewed their grant to Let’s Encrypt as well. We are seeking additional sponsorship and grant assistance to meet our full needs for 2018.
We had originally budgeted $2.91M for 2017 but we’ll likely come in under budget for the year at around $2.65M. The difference between our 2017 expenses of $2.65M and the 2018 budget of $3.0M consists primarily of the additional infrastructure operations costs previously mentioned.
Support Let’s Encrypt
We depend on contributions from our community of users and supporters in order to provide our services. If your company or organization would like to sponsor Let’s Encrypt please email us at [email protected]. We ask that you make an individual contribution if it is within your means.
We’re grateful for the industry and community support that we receive, and we look forward to continuing to create a more secure and privacy-respecting Web!
Let’s Encrypt had a great year in 2017. We more than doubled the number of active (unexpired) certificates we service to 46 million, we just about tripled the number of unique domains we service to 61 million, and we did it all while maintaining a stellar security and compliance track record. Most importantly though, the Web went from 46% encrypted page loads to 67% according to statistics from Mozilla – a gain of 21 percentage points in a single year – incredible. We’re proud to have contributed to that, and we’d like to thank all of the other people and organizations who also worked hard to create a more secure and privacy-respecting Web.
While we’re proud of what we accomplished in 2017, we are spending most of the final quarter of the year looking forward rather than back. As we wrap up our own planning process for 2018, I’d like to share some of our plans with you, including both the things we’re excited about and the challenges we’ll face. We’ll cover service growth, new features, infrastructure, and finances.
Service Growth
We are planning to double the number of active certificates and unique domains we service in 2018, to 90 million and 120 million, respectively. This anticipated growth is due to continuing high expectations for HTTPS growth in general in 2018.
Let’s Encrypt helps to drive HTTPS adoption by offering a free, easy to use, and globally available option for obtaining the certificates required to enable HTTPS. HTTPS adoption on the Web took off at an unprecedented rate from the day Let’s Encrypt launched to the public.
One of the reasons Let’s Encrypt is so easy to use is that our community has done great work making client software that works well for a wide variety of platforms. We’d like to thank everyone involved in the development of over 60 client software options for Let’s Encrypt. We’re particularly excited that support for the ACME protocol and Let’s Encrypt is being added to the Apache httpd server.
Other organizations and communities are also doing great work to promote HTTPS adoption, and thus stimulate demand for our services. For example, browsers are starting to make their users more aware of the risks associated with unencrypted HTTP (e.g. Firefox, Chrome). Many hosting providers and CDNs are making it easier than ever for all of their customers to use HTTPS. Government agencies are waking up to the need for stronger security to protect constituents. The media community is working to Secure the News.
New Features
We’ve got some exciting features planned for 2018.
First, we’re planning to introduce an ACME v2 protocol API endpoint and support for wildcard certificates along with it. Wildcard certificates will be free and available globally just like our other certificates. We are planning to have a public test API endpoint up by January 4, and we’ve set a date for the full launch: Tuesday, February 27.
Later in 2018 we plan to introduce ECDSA root and intermediate certificates. ECDSA is generally considered to be the future of digital signature algorithms on the Web because it is more efficient than RSA. Let’s Encrypt can already sign ECDSA keys from subscribers, but we sign with the RSA key from one of our intermediate certificates. Once we have an ECDSA root and intermediates, our subscribers will be able to deploy certificate chains which are entirely ECDSA.
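For illustration, here is a minimal sketch of how a subscriber might generate an ECDSA key and certificate signing request today, using Python’s cryptography library; the domain name and file names are placeholders, and this is not an official Let’s Encrypt client.

from cryptography import x509
from cryptography.hazmat.primitives import hashes, serialization
from cryptography.hazmat.primitives.asymmetric import ec
from cryptography.x509.oid import NameOID

# Generate a P-256 key; the resulting CSR can be submitted through any ACME client.
key = ec.generate_private_key(ec.SECP256R1())
csr = (
    x509.CertificateSigningRequestBuilder()
    .subject_name(x509.Name([x509.NameAttribute(NameOID.COMMON_NAME, "www.example.com")]))
    .add_extension(
        x509.SubjectAlternativeName([x509.DNSName("www.example.com")]),
        critical=False,
    )
    .sign(key, hashes.SHA256())
)
# Save the private key and the CSR as PEM files.
with open("ecdsa-key.pem", "wb") as f:
    f.write(key.private_bytes(
        serialization.Encoding.PEM,
        serialization.PrivateFormat.PKCS8,
        serialization.NoEncryption(),
    ))
with open("ecdsa-csr.pem", "wb") as f:
    f.write(csr.public_bytes(serialization.Encoding.PEM))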
<h1 id="infrastructure">Infrastructure</h1>
<p>Our CA infrastructure is capable of issuing millions of certificates per day with multiple redundancy for stability and a wide variety of security safeguards, both physical and logical. Our infrastructure also generates and signs nearly 20 million OCSP responses daily, and serves those responses nearly 2 billion times per day. We expect issuance and OCSP numbers to double in 2018.</p>
<p>Our physical CA infrastructure currently occupies approximately 70 units of rack space, split between two datacenters, consisting primarily of compute servers, storage, HSMs, switches, and firewalls.</p>
<p>When we issue more certificates it puts the most stress on storage for our databases. We regularly invest in more and faster storage for our database servers, and that will continue in 2018.</p>
We’ll need to add a few additional compute servers, and in 2018 we’ll also start aging out hardware for the first time since we launched. We’ll age out about ten 2U compute servers and replace them with new 1U servers, which will save space and be more energy efficient while providing better reliability and performance.
We’ll also add another infrastructure operations staff member, bringing that team to a total of six people. This is necessary in order to make sure we can keep up with demand while maintaining a high standard for security and compliance. Infrastructure operations staff are systems administrators responsible for building and maintaining all physical and logical CA infrastructure. The team also manages a 24/7/365 on-call schedule and they are primary participants in both security and compliance audits.
Finances
We pride ourselves on being an efficient organization. In 2018 Let’s Encrypt will secure a large portion of the Web with a budget of only $3.0M. For an overall increase in our budget of only 13%, we will be able to issue and service twice as many certificates as we did in 2017. We believe this represents an incredible value and that contributing to Let’s Encrypt is one of the most effective ways to help create a more secure and privacy-respecting Web.
Our 2018 fundraising efforts are off to a strong start with Platinum sponsorships from Mozilla, Akamai, OVH, Cisco, Google Chrome and the Electronic Frontier Foundation. The Ford Foundation has renewed their grant to Let’s Encrypt as well. We are seeking additional sponsorship and grant assistance to meet our full needs for 2018.
We had originally budgeted $2.91M for 2017 but we’ll likely come in under budget for the year at around $2.65M. The difference between our 2017 expenses of $2.65M and the 2018 budget of $3.0M consists primarily of the additional infrastructure operations costs previously mentioned.
We depend on contributions from our community of users and supporters in order to provide our services. If your company or organization would like to sponsor Let’s Encrypt please email us at [email protected]. We ask that you make an individual contribution if it is within your means.
We’re grateful for the industry and community support that we receive, and we look forward to continuing to create a more secure and privacy-respecting Web!
Threats to your IT infrastructure (AWS accounts & credentials, AWS resources, guest operating systems, and applications) come in all shapes and sizes! The online world can be a treacherous place and we want to make sure that you have the tools, knowledge, and perspective to keep your IT infrastructure safe & sound.
Amazon GuardDuty is designed to give you just that. Informed by a multitude of public and AWS-generated data feeds and powered by machine learning, GuardDuty analyzes billions of events in pursuit of trends, patterns, and anomalies that are recognizable signs that something is amiss. You can enable it with a click and see the first findings within minutes.
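If you prefer an API call to the console click, enabling GuardDuty amounts to creating a detector in each Region you want to monitor; a minimal boto3 sketch, with us-east-1 used as a placeholder Region:

import boto3

# Creating a detector turns GuardDuty on for this account in this Region.
guardduty = boto3.client("guardduty", region_name="us-east-1")
detector = guardduty.create_detector(Enable=True)
print("GuardDuty enabled, detector ID:", detector["DetectorId"])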
How it Works
GuardDuty voraciously consumes multiple data streams, including several threat intelligence feeds, staying aware of malicious IP addresses, devious domains, and more importantly, learning to accurately identify malicious or unauthorized behavior in your AWS accounts. In combination with information gleaned from your VPC Flow Logs, AWS CloudTrail Event Logs, and DNS logs, this allows GuardDuty to detect many different types of dangerous and mischievous behavior including probes for known vulnerabilities, port scans and probes, and access from unusual locations. On the AWS side, it looks for suspicious AWS account activity such as unauthorized deployments, unusual CloudTrail activity, patterns of access to AWS API functions, and attempts to exceed multiple service limits. GuardDuty will also look for compromised EC2 instances talking to malicious entities or services, data exfiltration attempts, and instances that are mining cryptocurrency.
GuardDuty operates completely on AWS infrastructure and does not affect the performance or reliability of your workloads. You do not need to install or manage any agents, sensors, or network appliances. This clean, zero-footprint model should appeal to your security team and allow them to green-light the use of GuardDuty across all of your AWS accounts.
Findings are presented to you at one of three levels (low, medium, or high), accompanied by detailed evidence and recommendations for remediation. The findings are also available as Amazon CloudWatch Events; this allows you to use your own AWS Lambda functions to automatically remediate specific types of issues. This mechanism also allows you to easily push GuardDuty findings into event management systems such as Splunk, Sumo Logic, and PagerDuty and to workflow systems like JIRA, ServiceNow, and Slack.
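As a rough sketch of that path, a Lambda function wired to a CloudWatch Events rule that matches GuardDuty findings might look like the following; the SNS topic ARN and the severity threshold are placeholders, not values taken from this post:

import json
import boto3

sns = boto3.client("sns")
# Placeholder topic ARN; point this at the topic your responders subscribe to.
TOPIC_ARN = "arn:aws:sns:us-east-1:111122223333:guardduty-alerts"

def handler(event, context):
    """Invoked by a CloudWatch Events rule that matches GuardDuty findings."""
    detail = event.get("detail", {})
    finding_type = detail.get("type", "UNKNOWN")
    severity = detail.get("severity", 0)
    # Forward medium- and high-severity findings (severity 4 and up) for triage.
    if severity >= 4:
        sns.publish(
            TopicArn=TOPIC_ARN,
            Subject=f"GuardDuty finding: {finding_type}"[:100],
            Message=json.dumps(detail, default=str),
        )
    return {"forwarded": severity >= 4}

The same event pattern could just as easily target an SQS queue or a webhook for the event management and workflow tools mentioned above.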
A Quick Tour
Let’s take a quick tour. I open up the GuardDuty Console and click on Get started:
Then I confirm that I want to enable GuardDuty by clicking on Enable GuardDuty. This gives it permission to set up the appropriate service-linked roles and to analyze my logs:
My own AWS environment isn’t all that exciting, so I visit the General Settings and click on Generate sample findings to move ahead. Now I’ve got some intriguing findings:
I can click on a finding to learn more:
The magnifying glass icons allow me to create inclusion or exclusion filters for the associated resource, action, or other value. I can filter for all of the findings related to this instance:
I can customize GuardDuty by adding lists of trusted IP addresses and lists of malicious IP addresses that are peculiar to my environment:
After I enable GuardDuty in my administrator account, I can invite my other accounts to participate:
Once the accounts decide to participate, GuardDuty will arrange for their findings to be shared with the administrator account.
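The same invitation flow is available through the GuardDuty API; a minimal boto3 sketch run from the administrator account, with the member account ID and email address as placeholders:

import boto3

guardduty = boto3.client("guardduty", region_name="us-east-1")
detector_id = guardduty.list_detectors()["DetectorIds"][0]
# Register the member account, then send it an invitation to accept.
guardduty.create_members(
    DetectorId=detector_id,
    AccountDetails=[{"AccountId": "111122223333", "Email": "[email protected]"}],
)
guardduty.invite_members(
    DetectorId=detector_id,
    AccountIds=["111122223333"],
    Message="Please accept this GuardDuty membership invitation.",
)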
I’ve barely scratched the surface of GuardDuty in the limited space and time that I have. You can try it out at no charge for 30 days; after that you pay based on the number of entries it processes from your VPC Flow, CloudTrail, and DNS logs.
Available Now
Amazon GuardDuty is available in production form in the US East (Northern Virginia), US East (Ohio), US West (Oregon), US West (Northern California), EU (Ireland), EU (Frankfurt), EU (London), South America (São Paulo), Canada (Central), Asia Pacific (Tokyo), Asia Pacific (Seoul), Asia Pacific (Singapore), Asia Pacific (Sydney), and Asia Pacific (Mumbai) Regions and you can start using it today!
Secure Sockets Layer/Transport Layer Security (SSL/TLS) certificates are used to secure network communications and establish the identity of websites over the internet. Before issuing a certificate for your website, Amazon must validate that you control the domain name for your site. You can now use AWS Certificate Manager (ACM) Domain Name System (DNS) validation to establish that you control a domain name when requesting SSL/TLS certificates with ACM. Previously ACM supported only email validation, which required the domain owner to receive an email for each certificate request and validate the information in the request before approving it.
With DNS validation, you write a CNAME record to your DNS configuration to establish control of your domain name. After you have configured the CNAME record, ACM can automatically renew DNS-validated certificates before they expire, as long as the DNS record has not changed. To make it even easier to validate your domain, ACM can update your DNS configuration for you if you manage your DNS records with Amazon Route 53. In this blog post, I demonstrate how to request a certificate for a website by using DNS validation. To perform the equivalent steps using the AWS CLI or AWS APIs and SDKs, see AWS Certificate Manager in the AWS CLI Reference and the ACM API Reference.
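As an illustration of the API route, the boto3 sketch below requests a DNS-validated certificate and prints the CNAME record that ACM asks you to create; the domain name is a placeholder, and the validation record can take a few seconds to appear after the request:

import boto3

acm = boto3.client("acm", region_name="us-east-1")
# Request a certificate and ask ACM to validate domain ownership through DNS.
response = acm.request_certificate(
    DomainName="www.example.com",
    ValidationMethod="DNS",
)
cert_arn = response["CertificateArn"]
# The CNAME you must create appears in the certificate description once ACM
# has populated the domain validation options (this can take a few seconds).
details = acm.describe_certificate(CertificateArn=cert_arn)
record = details["Certificate"]["DomainValidationOptions"][0]["ResourceRecord"]
print(record["Name"], record["Type"], record["Value"])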
Requesting an SSL/TLS certificate by using DNS validation
In this section, I walk you through the four steps required to obtain an SSL/TLS certificate through ACM to identify your site over the internet. SSL/TLS provides encryption for sensitive data in transit and uses certificates to establish the identity of your site, securing connections between browsers or applications and your site. DNS validation and SSL/TLS certificates provisioned through ACM are free.
Step 1: Request a certificate
In the ACM console, if you previously managed certificates in ACM, you will see a table with your certificates and a button to request a new certificate. Choose Request a certificate to request a new certificate.
Type the name of your domain in the Domain name box and choose Next. In this example, I type www.example.com. You must use a domain name that you control. Requesting certificates for domains that you don’t control violates the AWS Service Terms.
Step 2: Select a validation method
With DNS validation, you write a CNAME record to your DNS configuration to establish control of your domain name. Choose DNS validation, and then choose Review.
Step 3: Review your request
Review your request and choose Confirm and request to request the certificate.
Step 4: Submit your request
After a brief delay while ACM populates your domain validation information, choose the down arrow (highlighted in the following screenshot) to display all the validation information for your domain.
ACM displays the CNAME record you must add to your DNS configuration to validate that you control the domain name in your certificate request. If you use a DNS provider other than Route 53 or if you use a different AWS account to manage DNS records in Route 53, copy the DNS CNAME information from the validation information, or export it to a file (choose Export DNS configuration to a file) and write it to your DNS configuration. For information about how to add or modify DNS records, check with your DNS provider. For more information about using DNS with Route 53 DNS, see the Route 53 documentation.
If you manage DNS records for your domain with Route 53 in the same AWS account, choose Create record in Route 53 to have ACM update your DNS configuration for you.
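Outside the console, the same record can be written with the Route 53 API; a minimal boto3 sketch in which the hosted zone ID, record name, and record value are placeholders to be replaced with the validation information ACM displays:

import boto3

route53 = boto3.client("route53")
# Replace the zone ID, Name, and Value with the hosted zone for your domain
# and the CNAME details shown in your certificate's validation information.
route53.change_resource_record_sets(
    HostedZoneId="Z3EXAMPLE12345",
    ChangeBatch={
        "Comment": "ACM domain validation record",
        "Changes": [{
            "Action": "UPSERT",
            "ResourceRecordSet": {
                "Name": "_example-token.www.example.com.",
                "Type": "CNAME",
                "TTL": 300,
                "ResourceRecords": [{"Value": "_example-target.acm-validations.aws."}],
            },
        }],
    },
)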
After updating your DNS configuration, choose Continue to return to the ACM table view.
ACM then displays a table that includes all your certificates. The certificate you requested is displayed so that you can see the status of your request. After you write the DNS record or have ACM write the record for you, it typically takes about 30 minutes for the record to propagate, and it might take several hours for Amazon to validate it and issue the certificate. During this time, ACM shows the Validation status as Pending validation. After ACM validates the domain name, ACM updates the Validation status to Success. After the certificate is issued, the certificate status is updated to Issued. If ACM cannot validate your DNS record and issue the certificate within 72 hours, the request times out, and ACM displays a Timed out validation status. To recover, you must make a new request. Refer to the Troubleshooting section of the ACM User Guide for instructions about troubleshooting validation or issuance failures.
You now have an ACM certificate that you can use to secure your application or website. For information about how to deploy certificates with other AWS services, see the documentation for Amazon CloudFront, Amazon API Gateway, Application Load Balancers, and Classic Load Balancers. Note that your certificate must be in the US East (N. Virginia) Region to use the certificate with CloudFront.
ACM automatically renews certificates that are deployed and in use with other AWS services as long as the CNAME record remains in your DNS configuration. To learn more about ACM DNS validation, see the ACM FAQs and the ACM documentation.
If you have comments about this post, submit them in the “Comments” section below. If you have questions about this blog post, start a new thread on the ACM forum or contact AWS Support.
Update from March 28, 2018: We updated the Amazon Trust Services table by replacing an out-of-date value with a new value.
Transport Layer Security (TLS, formerly called Secure Sockets Layer [SSL]) is essential for encrypting information that is exchanged on the internet. For example, Amazon.com uses TLS for all traffic on its website, and AWS uses it to secure calls to AWS services.
An electronic document called a certificate verifies the identity of the server when creating such an encrypted connection. The certificate helps establish proof that your web browser is communicating securely with the website that you typed in your browser’s address field. Certificate Authorities, also known as CAs, issue certificates to specific domains. When a domain presents a certificate that is issued by a trusted CA, your browser or application knows it’s safe to make the connection.
In January 2016, AWS launched AWS Certificate Manager (ACM), a service that lets you easily provision, manage, and deploy SSL/TLS certificates for use with AWS services. These certificates are available for no additional charge through Amazon’s own CA: Amazon Trust Services. For browsers and other applications to trust a certificate, the certificate’s issuer must be included in the browser’s trust store, which is a list of trusted CAs. If the issuing CA is not in the trust store, the browser will display an error message (see an example) and applications will show an application-specific error. To ensure the ubiquity of the Amazon Trust Services CA, AWS purchased the Starfield Services CA, a root certificate that is found in most browsers and has been valid since 2005. This means you shouldn’t have to take any action to use the certificates issued by Amazon Trust Services.
AWS has been offering free certificates to AWS customers from the Amazon Trust Services CA. Now, AWS is in the process of moving certificates for services such as Amazon EC2 and Amazon DynamoDB to use certificates from Amazon Trust Services as well. Most software doesn’t need to be changed to handle this transition, but there are exceptions. In this blog post, I show you how to verify that you are prepared to use the Amazon Trust Services CA.
How to tell if the Amazon Trust Services CAs are in your trust store
The following table lists the Amazon Trust Services certificates. To verify that these certificates are in your browser’s trust store, click each Test URL in the following table to verify that it works for you. When a Test URL does not work, it displays an error similar to this example.
* Note: Amazon doesn’t own this root and doesn’t have a test URL for it. The certificate can be downloaded from here.
You can calculate the SHA-256 hash of the Subject Public Key Information by using openssl to extract the public key from the PEM-encoded certificate (for example, one stored in certificate.pem) and then hashing its DER encoding with SHA-256. For example, you can run the same calculation against the Starfield Class 2 Certification Authority self-signed certificate stored in a PEM-encoded file named sf-class2-root.crt.
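If you would rather script the calculation than run openssl by hand, the same Subject Public Key Information hash can be computed with the Python cryptography library; a minimal sketch using the file names mentioned above:

import hashlib
from cryptography import x509
from cryptography.hazmat.primitives import serialization

def spki_sha256(pem_path):
    """Return the SHA-256 hex digest of a certificate's Subject Public Key Information."""
    with open(pem_path, "rb") as f:
        cert = x509.load_pem_x509_certificate(f.read())
    spki = cert.public_key().public_bytes(
        serialization.Encoding.DER,
        serialization.PublicFormat.SubjectPublicKeyInfo,
    )
    return hashlib.sha256(spki).hexdigest()

print(spki_sha256("certificate.pem"))
print(spki_sha256("sf-class2-root.crt"))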
What to do if the Amazon Trust Services CAs are not in your trust store
If your tests of any of the Test URLs failed, you must update your trust store. The easiest way to update your trust store is to upgrade the operating system or browser that you are using.
You will find the Amazon Trust Services CAs in the following operating systems (release dates are in parentheses):
Microsoft Windows versions that have January 2005 or later updates installed, Windows Vista, Windows 7, Windows Server 2008, and newer versions
Mac OS X 10.4 with Java for Mac OS X 10.4 Release 5, Mac OS X 10.5 and newer versions
Red Hat Enterprise Linux 5 (March 2007), 6, and 7, and CentOS 5, 6, and 7
Ubuntu 8.10
Debian 5.0
Amazon Linux (all versions)
Java 1.4.2_12, Java 5 update 2, and all newer versions, including Java 6, Java 7, and Java 8
All modern browsers trust Amazon’s CAs. You can update the certificate bundle in your browser simply by updating your browser; each browser vendor publishes update instructions on its website.
If your application is using a custom trust store, you must add the Amazon root CAs to your application’s trust store. The instructions for doing this vary based on the application or platform. Please refer to the documentation for the application or platform you are using.
AWS SDKs and CLIs
Most AWS SDKs and CLIs are not impacted by the transition to the Amazon Trust Services CA. If you are using a version of the Python AWS SDK or CLI released before October 29, 2013, you must upgrade. The .NET, Java, PHP, Go, JavaScript, and C++ SDKs and CLIs do not bundle any certificates, so their certificates come from the underlying operating system. The Ruby SDK has included at least one of the required CAs since June 10, 2015. Before that date, the Ruby V2 SDK did not bundle certificates.
Certificate pinning
If you are using a technique called certificate pinning to lock down the CAs you trust on a domain-by-domain basis, you must adjust your pinning to include the Amazon Trust Services CAs. Certificate pinning helps defend you from an attacker using misissued certificates to fool an application into creating a connection to a spoofed host (an illegitimate host masquerading as a legitimate host). The restriction to a specific, pinned certificate is made by checking that the certificate issued is the expected certificate. This is done by checking that the hash of the certificate public key received from the server matches the expected hash stored in the application. If the hashes do not match, the code stops the connection.
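For illustration only, a pinning check of the kind just described might look like the following Python sketch, which hashes the Subject Public Key Information of the certificate a server presents and compares it against a pinned set; the host name and pinned digest are placeholders, and pinning to a CA rather than a leaf would require inspecting the full chain instead:

import hashlib
import socket
import ssl
from cryptography import x509
from cryptography.hazmat.primitives import serialization

# Placeholder set of pinned SPKI SHA-256 digests expected for this host.
PINNED_SPKI_SHA256 = {"replace-with-a-real-hex-digest"}

def connection_is_pinned(host, port=443):
    """Fetch the certificate presented by host and check its public key hash."""
    ctx = ssl.create_default_context()
    with socket.create_connection((host, port)) as sock:
        with ctx.wrap_socket(sock, server_hostname=host) as tls:
            der_cert = tls.getpeercert(binary_form=True)
    cert = x509.load_der_x509_certificate(der_cert)
    spki = cert.public_key().public_bytes(
        serialization.Encoding.DER,
        serialization.PublicFormat.SubjectPublicKeyInfo,
    )
    return hashlib.sha256(spki).hexdigest() in PINNED_SPKI_SHA256

print(connection_is_pinned("www.example.com"))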
AWS recommends against using certificate pinning because it introduces a potential availability risk. If the certificate to which you pin is replaced, your application will fail to connect. If your use case requires pinning, we recommend that you pin to a CA rather than to an individual certificate. If you are pinning to an Amazon Trust Services CA, you should pin to all CAs shown in the table earlier in this post.
If you have comments about this post, submit them in the “Comments” section below. If you have questions about this post, start a new thread on the ACM forum.
Amazon SES now adds DMARC verdicts to incoming emails, and publishes aggregate DMARC reports to domain owners. These two new features will help combat email spoofing and phishing, making the email ecosystem a safer and more secure place.
What is DMARC?
DMARC stands for Domain-based Message Authentication, Reporting, and Conformance. The DMARC standard was designed to prevent malicious actors from sending messages that appear to be from legitimate senders. Domain owners can tell email receivers how to handle unauthenticated messages that appear to be from their domains. The DMARC standard also specifies certain reports that email senders and receivers send to each other. The cooperative nature of this reporting process helps improve the email authentication infrastructure.
How does Amazon SES implement DMARC?
When you receive an email message through Amazon SES, the headers of that message will include a DMARC policy verdict alongside the DKIM and SPF verdicts (both of which are already present). This additional information helps you verify the authenticity of all email messages you receive.
Messages you receive through Amazon SES will contain one of the following DMARC verdicts:
PASS – The message passed DMARC authentication.
FAIL – The message failed DMARC authentication.
GRAY – The sending domain does not have a DMARC policy.
PROCESSING_FAILED – An issue occurred that prevented Amazon SES from providing a DMARC verdict.
If the DMARC verdict is FAIL, Amazon SES will also provide information about the sending domain’s DMARC settings. In this situation, you will see one of the following verdicts:
NONE – The owner of the sending domain requests that no specific action be taken on messages that fail DMARC authentication.
QUARANTINE – The owner of the sending domain requests that messages that fail DMARC authentication be treated by receivers as suspicious.
REJECT – The owner of the sending domain requests that messages that fail DMARC authentication be rejected.
In addition to publishing the DMARC verdict on each incoming message, Amazon SES now sends DMARC aggregate reports to domain owners. These reports help domain owners identify systemic authentication failures, and avoid potential domain spoofing attacks.
Note: Domain owners only receive aggregate information about emails that do not pass DMARC authentication. These reports, known as RUA reports, only include information about the IP addresses that send unauthenticated emails to you. These reports do not include information about legitimate email senders.
How do I configure DMARC?
As is the case with SPF and DKIM, domain owners must publish their DMARC policies as DNS records for their domains. For more information about setting up DMARC, see Complying with DMARC Using Amazon SES in the Amazon SES Developer Guide.
DMARC reporting is now available in the following AWS Regions: US West (Oregon), US East (N. Virginia), and EU (Ireland). You can find more information about the dmarcVerdict and dmarcPolicy objects in the Amazon SES Developer Guide. The Developer Guide also includes a sample Lambda function that you can use to bounce incoming emails that fail DMARC authentication.
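Purely as a rough illustration of the shape such a function can take (it is not the Developer Guide’s sample), here is a hedged Python sketch; the receipt field names are assumptions based on the verdicts described above and should be checked against the SES email-receiving documentation:

def handler(event, context):
    """Invoked synchronously by an SES receipt rule Lambda action.
    The receipt field names below are assumptions based on the DMARC verdicts
    described in this post; confirm them against the SES documentation."""
    receipt = event["Records"][0]["ses"]["receipt"]
    verdict = receipt.get("dmarcVerdict", {}).get("status", "")
    policy = receipt.get("dmarcPolicy", "")
    if isinstance(policy, dict):  # tolerate either a bare string or an object
        policy = policy.get("status", "")
    # When a message fails DMARC and the sender's policy is "reject", stop
    # processing the rule set so the message is not delivered.
    if verdict.upper() == "FAIL" and policy.upper() == "REJECT":
        return {"disposition": "STOP_RULE_SET"}
    return None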