Backblaze is growing, and with it our need to cater to the many different use cases our customers bring to us. We needed a Solutions Engineer to help out, and after a long search we’ve hired our first one! Let’s learn a bit more about Nathan, shall we?
What is your Backblaze Title? Solutions Engineer. Our customers bring a thousand different use cases to both B1 and B2, and I’m here to help them figure out how best to make those use cases a reality. Also, any odd jobs that Nilay wants me to do.
Where are you originally from? I’m a San Francisco Bay Area native. I studied mathematics at UC Santa Cruz, and then computer science at California State University, Hayward (which has since renamed itself California State University, East Bay. I observe that it’s still in Hayward).
What attracted you to Backblaze? It’s a stable company with huge growth and even bigger potential; the business model is attractive, and the team is outstanding. Add to that the strong commitment to transparency, and it’s a hard company to resist. We can store – and restore – data while offering superior reliability at an economic advantage over do-it-yourself, and that’s a great place to be.
What do you expect to learn while being at Backblaze? Everything I need to, but principally how our customers choose to interact with web storage. Storage isn’t a solution per se, but it’s an important component of any persistent solution. I’m looking forward to working with all the different ways our customers make use of storage.
Where else have you worked? All sorts of places, but I’ll admit publicly to EMC, Gemalto, and my own little (failed, alas) startup, IC2N. I worked with low-level document imaging.
Where did you go to school? UC Santa Cruz, BA in Mathematics; CSU Hayward, Master of Science in Computer Science.
What’s your dream job? Sipping tea in the California redwood forest. However, solutions engineer at Backblaze is a good second choice!
Favorite place you’ve traveled? Ashland, Oregon, for the Oregon Shakespeare Festival and the marble caves (most caves form from limestone).
Favorite hobby? Theater. Pathfinder. Writing. Baking cookies and cakes.
Of what achievement are you most proud? Marrying the most wonderful man in the world.
Star Trek or Star Wars? Star Trek’s utopian science fiction vision of humanity and science resonates a lot more strongly with me than the dystopian science fantasy of Star Wars.
Coke or Pepsi? Neither. I’d much rather have a cup of jasmine tea.
Favorite food? It varies, but I love Indian and Thai cuisine. Truly excellent Italian food is marvelous – wood fired pizza, if I had to pick only one, but the world would be a boring place with a single favorite food.
Why do you like certain things? If I knew that, I’d be in marketing.
Anything else you’d like to tell us? If you haven’t already encountered the amazing authors Patricia McKillip and Lois McMaster Bujold – go encounter them. Be happy.
There’s nothing wrong with a nice cup of tea and a long game of Pathfinder. Sign us up! Welcome to the team, Nathan!
Doin’ game stuff. Probably going to be quiet for a few weeks still.
alice: Actually wrote a decent amount of stuff, though fairly haphazardly. Finally kind of getting into the groove here. Still contemplating more interesting ways to offer choices, without turning the game into a combinatorial explosion.
art: Did some doodles. Not as frequently as I’d like, and mostly not published, but I did some, and that’s nice.
fox flux: Revisited the parallax forest background briefly. Made some progress, but talked to glip and maybe it’s not the right approach in the first place? Not thinking about it too seriously right now, regardless.
idchoppers: Miraculously, I got multi-polygon splitting finally working… and then hit a panic when there are coincident segments, which offhand I’m not sure how to fix. Sigh.
You can now enable your on-premises users to administer your AWS Directory Service for Microsoft Active Directory, also known as AWS Managed Microsoft AD. Using an Active Directory (AD) trust and the new AWS delegated AD security groups, you can grant administrative permissions to your on-premises users by managing group membership in your on-premises AD directory. This simplifies how you manage who can perform administration. It also makes life easier for your administrators because they can sign in to their existing workstations with their on-premises AD credentials to administer your AWS Managed Microsoft AD.
AWS created new domain local AD security groups (AWS delegated groups) in your AWS Managed Microsoft AD directory. Each AWS delegated group has unique AD administrative permissions. Users that are members of the new AWS delegated groups get permissions to perform administrative tasks, such as adding users, configuring fine-grained password policies, and enabling Microsoft enterprise Certificate Authority. Because the AWS delegated groups are domain local in scope, you can use them through an AD trust to your on-premises AD. This eliminates the requirement to create and use separate identities to administer your AWS Managed Microsoft AD. Instead, by adding selected on-premises users to the desired AWS delegated groups, you can grant your administrators some or all of the permissions. You can simplify this even further by adding on-premises AD security groups to the AWS delegated groups. This enables you to add and remove users from your on-premises AD security group so that they can manage administrative permissions in your AWS Managed Microsoft AD.
In this blog post, I will show you how to delegate permissions to your on-premises users to perform an administrative task (configuring fine-grained password policies) in your AWS Managed Microsoft AD directory. You can follow the steps in this post to delegate other administrative permissions, such as configuring group Managed Service Accounts and Kerberos constrained delegation, to your on-premises users.
Background
Until now, AWS Managed Microsoft AD delegated administrative permissions for your directory by creating AD security groups in your Organizational Unit (OU) and authorizing these AWS delegated groups for common administrative activities. The admin user in your directory created user accounts within your OU and granted these users permissions to administer your directory by adding them to one or more of these AWS delegated groups.
However, if you used your AWS Managed Microsoft AD with a trust to an on-premises AD forest, you couldn’t add users from your on-premises directory to these AWS delegated groups. This is because AWS created the AWS delegated groups with global scope, which restricts adding users from another forest. This necessitated that you create different user accounts in AWS Managed Microsoft AD for the purpose of administration. As a result, AD administrators typically had to remember additional credentials for AWS Managed Microsoft AD.
To address this, AWS created new AWS delegated groups with domain local scope in a separate OU called AWS Delegated Groups. These new AWS delegated groups with domain local scope are more flexible and permit adding users and groups from other domains and forests. This allows your admin user to delegate your on-premises users and groups administrative permissions to your AWS Managed Microsoft AD directory.
Note: If you already have an existing AWS Managed Microsoft AD directory containing the original AWS delegated groups with global scope, AWS preserved the original AWS delegated groups in the event you are currently using them with identities in AWS Managed Microsoft AD. AWS recommends that you transition to use the new AWS delegated groups with domain local scope. All newly created AWS Managed Microsoft AD directories have the new AWS delegated groups with domain local scope only.
Now, I will show you the steps to delegate administrative permissions to your on-premises users and groups to configure fine-grained password policies in your AWS Managed Microsoft AD directory.
Prerequisites
For this post, I assume you are familiar with AD security groups and how security group scope rules work. I also assume you are familiar with AD trusts.
The instructions in this blog post require you to have the following components running:
An active AWS Managed Microsoft AD directory. To create a directory, follow the steps in Creating an AWS Managed Microsoft AD directory. You also need to know the password for the admin account so that you can add other users and groups to the AWS created AD security groups in the AWS Managed Microsoft AD directory.
An existing on-premises AD directory. Your on-premises AD directory must contain a user that you want to delegate permissions to manage your AWS Managed Microsoft AD directory.
A machine joined to your on-premises AD directory with ADUC installed. You can install ADUC by installing Active Directory Administrative Tools on a Windows computer that you joined to your on-premises AD domain.
Solution overview
I will now show you how to manage which on-premises users have delegated permissions to administer your directory by efficiently using on-premises AD security groups to manage these permissions. I will do this by:
Adding on-premises groups to an AWS delegated group. In this step, you sign in to a management instance connected to the AWS Managed Microsoft AD directory as the admin user and add on-premises groups to AWS delegated groups.
Administering your AWS Managed Microsoft AD directory as an on-premises user. In this step, you sign in to a workstation connected to your on-premises AD using your on-premises credentials and administer your AWS Managed Microsoft AD directory.
For the purpose of this blog, I already have an on-premises AD directory (in this case, on-premises.com). I also created an AWS Managed Microsoft AD directory (in this case, corp.example.com) that I use with Amazon RDS for SQL Server. To enable Integrated Windows Authentication to my on-premises.com domain, I established a one-way outgoing trust from my AWS Managed Microsoft AD directory to my on-premises AD directory. To administer my AWS Managed Microsoft AD, I created an Amazon EC2 for Windows Server instance (in this case, Cloud Management). I also have an on-premises workstation (in this case, On-premises Management), that is connected to my on-premises AD directory.
The following diagram represents the relationships between the on-premises AD and the AWS Managed Microsoft AD directory.
The left side represents the AWS Cloud containing AWS Managed Microsoft AD directory. I connected the directory to the on-premises AD directory via a 1-way forest trust relationship. When AWS created my AWS Managed Microsoft AD directory, AWS created a group called AWS Delegated Fine Grained Password Policy Administrators that has permissions to configure fine-grained password policies in AWS Managed Microsoft AD.
The right side of the diagram represents the on-premises AD directory. I created a global AD security group called On-premises fine grained password policy admins and configured it so that all members can manage fine-grained password policies in my on-premises AD. I have two administrators in my company, John and Richard, whom I added as members of On-premises fine grained password policy admins. I want to enable John and Richard to also manage fine-grained password policies in my AWS Managed Microsoft AD.
While I could add John and Richard to the AWS Delegated Fine Grained Password Policy Administrators group individually, I want a more efficient way to delegate and remove permissions for on-premises users to manage fine-grained password policies in my AWS Managed Microsoft AD. In fact, I want to assign permissions to the same people who manage password policies in my on-premises directory.
To do this, I will:
As the admin user, add the On-premises fine grained password policy admins group as a member of the AWS Delegated Fine Grained Password Policy Administrators security group from my Cloud Management machine.
Manage who can administer password policies in my AWS Managed Microsoft AD directory by adding and removing users as members of the On-premises fine grained password policy admins group. Doing so enables me to perform all my delegation work in my on-premises directory without the need to use a remote desktop protocol (RDP) session to my Cloud Management instance. In this case, Richard, who is a member of the On-premises fine grained password policy admins group, can now administer the AWS Managed Microsoft AD directory from the On-premises Management workstation.
Although I’m showing a specific case using fine-grained password policy delegation, you can do this with any of the new AWS delegated groups and your on-premises groups and users.
Let’s get started.
Step 1 – Add on-premises groups to AWS delegated groups
In this step, open an RDP session to the Cloud Management instance and sign in as the admin user in your AWS Managed Microsoft AD directory. Then, add your users and groups from your on-premises AD to AWS delegated groups in your AWS Managed Microsoft AD directory. In this example, I do the following:
Sign in to the Cloud Management instance with the user name admin and the password that you set for the admin user when you created your directory.
Open the Microsoft Windows Server Manager and navigate to Tools > Active Directory Users and Computers.
Switch to the tree view and navigate to corp.example.com > AWS Delegated Groups. Right-click AWS Delegated Fine Grained Password Policy Administrators and select Properties.
In the AWS Delegated Fine Grained Password Policy Administrators Properties window, switch to the Members tab and choose Add.
In the Select Users, Contacts, Computers, Service Accounts, or Groups window, choose Locations.
In the Locations window, select on-premises.com domain and choose OK.
In the Enter the object names to select box, enter on-premises fine grained password policy admins and choose Check Names.
Because I have a 1-way trust from AWS Managed Microsoft AD to my on-premises AD, Windows prompts me to enter credentials for an on-premises user account that has permissions to complete the search. If I had a 2-way trust and the admin account in my AWS Managed Microsoft AD had permissions to read my on-premises directory, Windows would not prompt me. In the Windows Security window, enter the credentials for an account with permissions for on-premises.com and choose OK.
Click OK to add the On-premises fine grained password policy admins group as a member of the AWS Delegated Fine Grained Password Policy Administrators group in your AWS Managed Microsoft AD directory.
At this point, any user that is a member of the On-premises fine grained password policy admins group has permissions to manage password policies in your AWS Managed Microsoft AD directory.
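If you prefer to script this change instead of using the GUI, a rough PowerShell sketch follows. This is not part of the original walkthrough; it assumes the Active Directory module for Windows PowerShell and this post’s example names, and resolving a member from a trusted forest may require additional credentials.
# Resolve the on-premises group in its own domain (example names from this post)
$onPremGroup = Get-ADGroup -Identity "On-premises fine grained password policy admins" -Server "on-premises.com"
# Add it as a member of the AWS delegated group in the AWS Managed Microsoft AD domain
Add-ADGroupMember -Identity "AWS Delegated Fine Grained Password Policy Administrators" -Members $onPremGroup -Server "corp.example.com"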
Step 2 – Administer your AWS Managed Microsoft AD as an on-premises user
Any member of the on-premises group(s) that you added to an AWS delegated group inherits the permissions of the AWS delegated group.
In this example, Richard signs in to the On-premises Management instance. Because Richard inherited permissions from the AWS Delegated Fine Grained Password Policy Administrators group, he can now administer fine-grained password policies in the AWS Managed Microsoft AD directory using his on-premises credentials.
Sign in to the On-premises Management instance as Richard.
Open the Microsoft Windows Server Manager and navigate to Tools > Active Directory Users and Computers.
Switch to the tree view, right-click Active Directory Users and Computers, and then select Change Domain.
In the Change Domain window, enter corp.example.com, and then choose OK.
You’ll be connected to your AWS Managed Microsoft AD domain:
Richard can now administer the password policies. Because John is also a member of the AWS delegated group, John can also perform password policy administration the same way.
In the future, if Richard moves to another division within the company and you hire Judy as his replacement, you can simply remove Richard from the On-premises fine grained password policy admins group and add Judy to it. Richard will no longer have administrative permissions, while Judy can now administer password policies for your AWS Managed Microsoft AD directory.
Summary
We’ve tried to make it easier for you to administer your AWS Managed Microsoft AD directory by creating AWS delegated groups with domain local scope. You can add your on-premises AD groups to the AWS delegated groups. You can then control who can administer your directory by managing group membership in your on-premises AD directory. Your administrators can sign in to their existing on-premises workstations using their on-premises credentials and administer your AWS Managed Microsoft AD directory. I encourage you to explore the new AWS delegated security groups by using Active Directory Users and Computers from the management instance for your AWS Managed Microsoft AD. To learn more about AWS Directory Service, see the AWS Directory Service home page. If you have questions, please post them on the Directory Service forum. If you have comments about this post, submit them in the “Comments” section below.
Many enterprises use Microsoft Active Directory to manage users, groups, and computers in a network. And one question comes up frequently: How can Active Directory users access big data workloads running on Amazon EMR with the same single sign-on (SSO) experience they have when accessing resources in the Active Directory network?
This post walks you through the process of using AWS CloudFormation to set up a cross-realm trust and extend authentication from an Active Directory network into an Amazon EMR cluster with Kerberos enabled. By establishing a cross-realm trust, Active Directory users can use their Active Directory credentials to access an Amazon EMR cluster and run jobs as themselves.
Walkthrough overview
In this example, you build a solution that allows Active Directory users to seamlessly access Amazon EMR clusters and run big data jobs. Here’s what you need before setting up this solution:
A possible service limit increase for your account. (Note: Usually a limit increase is not necessary. See the AWS Service Limits documentation if you encounter a limit error while building the solution.)
To make it easier for you to get started, I created AWS CloudFormation templates that automatically configure and deploy the solution for you. The following steps and resources are involved in setting up the solution:
Note: If you want to manually create and configure the components for this solution without using AWS CloudFormation, refer to the Amazon EMR cross-realm documentation.
IMPORTANT: The AWS CloudFormation templates used in this post are designed to work only in the us-east-1 (N. Virginia) Region. They are not intended for production use without modification.
Single-step solution deployment
If you don’t want to set up each component individually, you can use the single-step AWS CloudFormation template. The single-step template is a master template that uses nested stacks (additional templates) to launch and configure all the resources for the solution in one go.
To deploy the single-step template into your account, choose Launch Stack:
This takes you to the Create stack wizard in the AWS CloudFormation console. The template is launched in the US East (N. Virginia) Region by default. Do not change to a different Region because the template is designed to work only in us-east-1 (N. Virginia).
On the Select Template page, keep the default URL for the AWS CloudFormation template, and then choose Next.
On the Specify Details page, review the parameters for the template. Provide values for the parameters that require input (for more information, see the parameters table that follows).
The following parameters are available in this template.
| Parameter | Default | Description |
| --- | --- | --- |
| Domain Controller name | DC1 | NetBIOS (hostname) name of the Active Directory server. This name can be up to 15 characters long. |
| Active Directory domain | example.com | Fully qualified domain name (FQDN) of the forest root domain (for example, example.com). |
| Domain NetBIOS name | EXAMPLE | NetBIOS name of the domain for users of earlier versions of Windows. This name can be up to 15 characters long. |
| Domain admin user | CrossRealmAdmin | User name for the account that is added as domain administrator. This account is separate from the default administrator account. |
| Domain admin password | Requires input | Password for the domain admin user. Must be at least eight characters including letters, numbers, and symbols. |
| Key pair name | Requires input | Name of an existing key pair, which enables you to connect securely to your instance after it launches. |
| Instance type | m4.xlarge | Instance type for the domain controller and the Amazon EMR cluster. |
| Allowed IP address | 10.0.0.0/16 | The client IP address range that can reach your cluster. Specify an IP address range in CIDR notation (for example, 203.0.113.5/32). By default, only the VPC CIDR (10.0.0.0/16) can reach the cluster. Be sure to add your client IP range so that you can connect to the cluster using SSH. |
| EMR Kerberos realm | EC2.INTERNAL | Cluster’s Kerberos realm name. By default, the realm name is derived from the cluster’s VPC domain name in uppercase letters (for example, EC2.INTERNAL is the default VPC domain name in the us-east-1 Region). |
| Trusted AD domain | EXAMPLE.COM | The Active Directory (AD) domain that you want to trust. This is the same as the “Active Directory domain,” but it must use all uppercase letters (for example, EXAMPLE.COM). |
| Cross-realm trust password | Requires input | Password that you want to use for your cross-realm trust. |
| Instance count | 2 | The number of instances (core nodes) for the cluster. |
| EMR applications | Hadoop, Spark, Ganglia, Hive | Comma-separated list of applications to install on the cluster. |
After you specify the template details, choose Next. On the Options page, choose Next again. On the Review page, select the I acknowledge that AWS CloudFormation might create IAM resources with custom names check box, and then choose Create.
It takes approximately 45 minutes for the deployment to complete. When the stack launch is complete, it will return outputs with information about the resources that were created. Note the outputs and skip to the Managing and testing the solution section. You can view the stack outputs on the AWS Management Console or by using the following AWS CLI command:
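The exact command is not preserved in this post; a typical form, with <stack_name> as a placeholder for your stack’s name, is:
$ aws cloudformation describe-stacks --stack-name <stack_name> --query "Stacks[0].Outputs"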
The following sections describe how to use AWS CloudFormation templates to perform each step of the solution separately.
Create and configure an Amazon VPC
In order for you to establish a cross-realm trust between an Amazon EMR Kerberos realm and an Active Directory domain, your Amazon VPC must meet the following requirements:
The subnet used for the Amazon EMR cluster must have a CIDR block of fewer than nine digits (for example, 10.0.1.0/24).
Both DNS resolution and DNS hostnames must be enabled (set to “yes”).
The Active Directory domain controller must be the DNS server for instances in the Amazon VPC (this is configured in the next step).
To use the AWS CloudFormation template to create and configure an Amazon VPC with the prerequisites listed previously, choose Launch Stack:
Note: If you want to create the VPC manually (without using AWS CloudFormation), see Set Up the VPC and Subnet in the Amazon EMR documentation.
Launching this stack creates the following AWS resources:
Amazon VPC with CIDR block 10.0.0.0/16 (Name: CrossRealmVPC)
Internet Gateway (Name: CrossRealmGateway)
Public subnet with CIDR block 10.0.1.0/24 (Name: CrossRealmSubnet)
Security group allowing inbound access from the VPC’s subnets (Name tag: CrossRealmSecurityGroup)
When the stack launch is complete, it should return outputs similar to the following.
| Key | Value example | Description |
| --- | --- | --- |
| SubnetID | subnet-xxxxxxxx | The subnet for the Active Directory domain controller and the EMR cluster. |
| SecurityGroup | sg-xxxxxxxx | The security group for the Active Directory domain controller. |
| VPCID | vpc-xxxxxxxx | The Active Directory domain controller and EMR cluster will be launched on this VPC. |
Note the outputs because they are used in the next step. You can view the stack outputs on the AWS Management Console or by using the following AWS CLI command:
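For example (again with <stack_name> as a placeholder for your stack’s name):
$ aws cloudformation describe-stacks --stack-name <stack_name> --query "Stacks[0].Outputs"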
Launch and configure an Active Directory domain controller
In this step, you use an AWS CloudFormation template to automatically launch and configure a new Active Directory domain controller and cross-realm trust.
Note: There are various ways to install and configure an Active Directory domain controller. For details on manually launching and installing a domain controller without AWS CloudFormation, see Step 2: Launch and Install the AD Domain Controller in the Amazon EMR documentation.
In addition to launching and configuring an Active Directory domain controller and cross-realm trust, this AWS CloudFormation template also sets the domain controller as the DNS server (name server) for your Amazon VPC. In other words, the template creates a new DHCP options set for the VPC in which it’s deployed, and it sets the private IP address of the domain controller as the name server for that new DHCP options set.
IMPORTANT: You should not use this template on a production VPC with existing resources like Amazon EC2 instances. When you launch this stack, make sure that you use the new environment and resources (Amazon VPC, subnet, and security group) that were created in the Create and configure an Amazon VPC step.
To launch this stack, choose Launch Stack:
The following table contains information about the parameters available in this template. Review the parameters for the template and provide values for those that require input.
| Parameter | Default | Description |
| --- | --- | --- |
| VPC ID | Requires input | Launch the domain controller on this VPC (for example, use the VPC created in the Create and configure an Amazon VPC step). |
| Subnet ID | Requires input | Subnet used for the domain controller (for example, use the subnet created in the Create and configure an Amazon VPC step). |
| Security group ID | Requires input | Security group (SG) for the domain controller (for example, use the SG created in the Create and configure an Amazon VPC step). |
| Domain Controller name | DC1 | NetBIOS name of the Active Directory server (up to 15 characters). |
| Active Directory domain | example.com | Fully qualified domain name (FQDN) of the forest root domain (for example, example.com). |
| Domain NetBIOS name | EXAMPLE | NetBIOS name of the domain for users of earlier versions of Windows. This name can be up to 15 characters long. |
| Domain admin user | CrossRealmAdmin | User name for the account that is added as domain administrator. This account is separate from the default administrator account. |
| Domain admin password | Requires input | Password for the domain admin user. Must be at least eight characters including letters, numbers, and symbols. |
| Key pair name | Requires input | Name of an existing EC2 key pair to enable access to the domain controller instance. |
| Instance type | m4.xlarge | Instance type for the domain controller. |
| EMR Kerberos realm | EC2.INTERNAL | Cluster’s Kerberos realm name. By default, the realm name is derived from the cluster’s VPC domain name in uppercase letters (for example, EC2.INTERNAL is the default VPC domain name in the us-east-1 Region). |
| Cross-realm trust password | Requires input | Password that you want to use for your cross-realm trust. |
It takes 25–30 minutes for this stack to be created. When it’s complete, note the stack’s outputs, and then move to the next step: Create a security configuration and launch an Amazon EMR cluster with Kerberos enabled.
Create a security configuration and launch an Amazon EMR cluster with Kerberos enabled
To launch a kerberized Amazon EMR cluster, you first need to create a security configuration containing the cross-realm trust configuration. You then specify cluster-specific Kerberos attributes when launching the cluster.
In this step, you use AWS CloudFormation to launch and configure a kerberized Amazon EMR cluster with a cross-realm trust. If you want to manually launch and configure a cluster with Kerberos enabled, see Step 6: Launch a Kerberized EMR Cluster in the Amazon EMR documentation.
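For reference, such a security configuration can also be created from the AWS CLI. The following is a sketch only, using this post’s example realm and domain names rather than the template’s exact configuration:
$ aws emr create-security-configuration --name "CrossRealmTrustConfig" --security-configuration '{
  "AuthenticationConfiguration": {
    "KerberosConfiguration": {
      "Provider": "ClusterDedicatedKdc",
      "ClusterDedicatedKdcConfiguration": {
        "TicketLifetimeInHours": 24,
        "CrossRealmTrustConfiguration": {
          "Realm": "EXAMPLE.COM",
          "Domain": "example.com",
          "AdminServer": "example.com",
          "KdcServer": "example.com"
        }
      }
    }
  }
}'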
Note: At the time of this writing, AWS CloudFormation does not yet support launching Amazon EMR clusters with Kerberos authentication enabled. To overcome this limitation, I created a template that uses an AWS Lambda-backed custom resource to launch and configure the Amazon EMR cluster with Kerberos enabled. If you use this template, there’s nothing else that you need to do. Just keep in mind that the template creates and invokes an AWS Lambda function (custom resource) to launch the cluster.
To create a cross-realm trust security configuration and launch a kerberized Amazon EMR cluster using AWS CloudFormation, choose Launch Stack:
The following table lists and describes the template parameters for deploying a kerberized Amazon EMR cluster and configuring a cross-realm trust.
| Parameter | Default | Description |
| --- | --- | --- |
| Active Directory domain | example.com | The Active Directory domain that you want to establish the cross-realm trust with. |
| Domain admin user (joiner user) | CrossRealmAdmin | The user name of an Active Directory domain user with privileges to join domains/computers to the Active Directory domain (joiner user). |
| Domain admin password | Requires input | Password of the joiner user. |
| Cross-realm trust password | Requires input | Password of your cross-realm trust. |
| EC2 key pair name | Requires input | Name of an existing key pair, which enables you to connect securely to your cluster after it launches. |
| Subnet ID | Requires input | Subnet that you want to use for your Amazon EMR cluster (for example, choose the subnet created in the Create and configure an Amazon VPC step). |
| Security group ID | Requires input | Security group that you want to use for your Amazon EMR cluster (for example, choose the security group created in the Create and configure an Amazon VPC step). |
| Instance type | m4.xlarge | The instance type that you want to use for the cluster nodes. |
| Instance count | 2 | The number of instances (core nodes) for the cluster. |
| Allowed IP address | 10.0.0.0/16 | The client IP address range that can reach your cluster. Specify an IP address range in CIDR notation (for example, 203.0.113.5/32). By default, only the VPC CIDR (10.0.0.0/16) can reach the cluster. Be sure to add your client IP range so that you can connect to the cluster using SSH. |
| EMR applications | Hadoop, Spark, Ganglia, Hive | Comma-separated list of the applications that you want installed on the cluster. |
| EMR Kerberos realm | EC2.INTERNAL | Cluster’s Kerberos realm name. By default, the realm name is derived from the cluster’s VPC domain name in uppercase letters (for example, EC2.INTERNAL is the default VPC domain name in the us-east-1 Region). |
| Trusted AD domain | EXAMPLE.COM | The Active Directory domain that you want to trust. This is the same as the “Active Directory domain,” but it must use all uppercase letters (for example, EXAMPLE.COM). |
It takes 10–15 minutes for this stack to be created. When it’s complete, note the stack’s outputs, and then move to the next section: Managing and testing the solution.
Managing and testing the solution
Now that you’ve configured and built the solution, it’s time to test it by connecting to a cluster using Active Directory credentials.
SSH to a cluster using Active Directory credentials (single sign-on)
After you launch a kerberized Amazon EMR cluster, if you used the AWS CloudFormation templates and added your client IP range to the Allowed IP address parameter, you should be able to connect to the cluster using an SSH client and your Active Directory user credentials. If you have trouble connecting to the cluster using SSH, check the cluster’s security group to make sure that it allows inbound SSH connection (TCP port 22) from your client’s IP address (source).
The following steps assume that you’re using a client such as OpenSSH. If you’re using a different SSH application (for example, PuTTY), consult the application-specific documentation.
Note: Because the cluster was launched with a cross-realm trust configuration, you don’t need to use a private key (.pem file) when you connect to it as a domain user using SSH.
To connect to your Amazon EMR cluster as an Active Directory user using SSH, run the following command. Replace ad_user with the domain admin user that you created while setting up the domain controller and replace master_node_URL with the cluster’s URL (see the stack’s outputs to find this information):
$ ssh -l <ad_user> <master_node_URL>
If your SSH client is configured to use a key as the preferred authentication method, the login might fail. If that’s the case, you can add the following options to your SSH command to force the SSH connection to use password authentication:
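For example, with OpenSSH (these are standard OpenSSH options, not options specific to Amazon EMR):
$ ssh -o PreferredAuthentications=password -o PubkeyAuthentication=no -l <ad_user> <master_node_URL>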
After a domain user connects to the cluster using SSH, if this is the first time the user has connected, a local home directory is created for that user. In addition to creating a local home directory, if you used the create-hdfs-home-ba.sh bootstrap action when launching the cluster (done by default if you used the AWS CloudFormation template to launch a kerberized cluster), an HDFS user home directory is also automatically created.
Note: If you manually launched the cluster and did not use the create-hdfs-home-ba.sh bootstrap action, then you’ll need to manually create HDFS user home directories for your users.
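A minimal sketch of creating such a home directory (run on the cluster by a user with HDFS administrative rights; <ad_user> is a placeholder):
$ hdfs dfs -mkdir /user/<ad_user>
$ hdfs dfs -chown <ad_user>:<ad_user> /user/<ad_user>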
When you connect to the cluster using SSH for the first time (as a domain user), you should see the following messages if the HDFS home directory for your domain user was successfully created:
Running jobs on a kerberized Amazon EMR cluster
To run a job on a kerberized cluster, the user submitting the job must first be authenticated. If you followed the previous section to connect to your cluster as an Active Directory user using SSH, the user should be authenticated automatically.
If running the klist command returns a “No credentials cache found” message, it means that the user is not authenticated (the user doesn’t have a Kerberos ticket). You can re-authenticate a user at any time by running the following command (be sure to use all uppercase letters for the Active Directory domain):
$ kinit <username>@<AD_DOMAIN>
When the user is authenticated, they can submit jobs just like they would on a non-kerberized cluster.
Auditing jobs
Another advantage that Kerberos can provide is that you can easily tell which user ran a particular job. For example, connect (using SSH) to a kerberized cluster with an Active Directory user, and submit the SparkPi sample application:
$ spark-example SparkPi
After running the SparkPi application, go to the Amazon EMR console and choose your cluster. Then choose the Application history tab. There you can see information about the application, including the user that submitted the job:
Common issues
Although it would be hard to cover every possible Kerberos issue, this section covers some of the more common issues that might occur and ways to fix them.
Issue 1: You can successfully connect and get authenticated on a cluster. However, whenever you try running a job, it fails with an error similar to the following:
Solution: Make sure that an HDFS home directory for the user was created and that it has the right permissions.
Issue 2: You can successfully connect to the cluster, but you can’t run any Hadoop or HDFS commands.
Solution: Use the klist command to confirm whether the user is authenticated and has a valid Kerberos ticket. Use the kinit command to re-authenticate a user.
Issue 3: You can’t connect (using SSH) to the cluster using Active Directory user credentials, but you can manually authenticate the user with kinit.
Solution: Make sure that the Active Directory domain controller is the DNS server (name server) for the cluster nodes.
Cleaning up
After completing and testing this solution, remember to clean up the resources. If you used the AWS CloudFormation templates to create the resources, then use the AWS CloudFormation console or AWS CLI/SDK to delete the stacks. Deleting a stack also deletes the resources created by that stack.
If one of your stacks does not delete, make sure that there are no dependencies on the resources created by that stack. For example, if you deployed an Amazon VPC using AWS CloudFormation and then deployed a domain controller into that VPC using a different AWS CloudFormation stack, you must first delete the domain controller stack before the VPC stack can be deleted.
Summary
The ability to authenticate users and services with Kerberos not only allows you to secure your big data applications, but it also enables you to easily integrate Amazon EMR clusters with an Active Directory environment. This post showed how you can use Kerberos on Amazon EMR to create a single sign-on solution where Active Directory domain users can seamlessly access Amazon EMR clusters and run big data applications. We also showed how you can use AWS CloudFormation to automate the deployment of this solution.
Today, AWS introduced AWS Directory Service for Microsoft Active Directory (Standard Edition), also known as AWS Microsoft AD (Standard Edition), a managed Microsoft Active Directory (AD) that is performance-optimized for small and midsize businesses. AWS Microsoft AD (Standard Edition) offers you a highly available and cost-effective primary directory in the AWS Cloud that you can use to manage users, groups, and computers. It enables you to join Amazon EC2 instances to your domain easily and supports many AWS and third-party applications and services. It can also support most of the common use cases of small and midsize businesses. When you use AWS Microsoft AD (Standard Edition) as your primary directory, you can manage access and provide single sign-on (SSO) to cloud applications such as Microsoft Office 365. If you have an existing Microsoft AD directory, you can also use AWS Microsoft AD (Standard Edition) as a resource forest that contains primarily computers and groups, allowing you to migrate your AD-aware applications to the AWS Cloud while using existing on-premises AD credentials.
In this blog post, I help you get started by answering three main questions about AWS Microsoft AD (Standard Edition):
What do I get?
How can I use it?
What are the key features?
After answering these questions, I show how you can get started with creating and using your own AWS Microsoft AD (Standard Edition) directory.
1. What do I get?
When you create an AWS Microsoft AD (Standard Edition) directory, AWS deploys two Microsoft AD domain controllers powered by Microsoft Windows Server 2012 R2 in your Amazon Virtual Private Cloud (VPC). To help deliver high availability, the domain controllers run in different Availability Zones in the AWS Region of your choice.
As a managed service, AWS Microsoft AD (Standard Edition) configures directory replication, automates daily snapshots, and handles all patching and software updates. In addition, AWS Microsoft AD (Standard Edition) monitors and automatically recovers domain controllers in the event of a failure.
AWS Microsoft AD (Standard Edition) has been optimized as a primary directory for small and midsize businesses with the capacity to support approximately 5,000 employees. With 1 GB of directory object storage, AWS Microsoft AD (Standard Edition) has the capacity to store 30,000 or more total directory objects (users, groups, and computers). AWS Microsoft AD (Standard Edition) also gives you the option to add domain controllers to meet the specific performance demands of your applications. You also can use AWS Microsoft AD (Standard Edition) as a resource forest with a trust relationship to your on-premises directory.
2. How can I use it?
With AWS Microsoft AD (Standard Edition), you can share a single directory for multiple use cases. For example, you can share a directory to authenticate and authorize access for .NET applications, Amazon RDS for SQL Server with Windows Authentication enabled, and Amazon Chime for messaging and video conferencing.
The following diagram shows some of the use cases for your AWS Microsoft AD (Standard Edition) directory, including the ability to grant your users access to external cloud applications and allow your on-premises AD users to manage and have access to resources in the AWS Cloud.
Use case 1: Sign in to AWS applications and services with AD credentials
You can enable multiple AWS applications and services such as the AWS Management Console, Amazon WorkSpaces, and Amazon RDS for SQL Server to use your AWS Microsoft AD (Standard Edition) directory. When you enable an AWS application or service in your directory, your users can access the application or service with their AD credentials.
For example, you can enable your users to sign in to the AWS Management Console with their AD credentials. To do this, you enable the AWS Management Console as an application in your directory, and then assign your AD users and groups to IAM roles. When your users sign in to the AWS Management Console, they assume an IAM role to manage AWS resources. This makes it easy for you to grant your users access to the AWS Management Console without needing to configure and manage a separate SAML infrastructure.
Use case 2: Manage Amazon EC2 instances
In addition, your users can sign in to your EC2 instances with their AD credentials. This eliminates the need to use individual instance credentials or distribute private key (PEM) files. It also makes it easier for you to instantly grant or revoke access to users by using the AD user administration tools you already use.
Use case 3: Provide directory services to your AD-aware workloads
Use case 4: SSO to Office 365 and other cloud applications
You can use AWS Microsoft AD (Standard Edition) to provide SSO for cloud applications. You can use Azure AD Connect to synchronize your users into Azure AD, and then use Active Directory Federation Services (AD FS) so that your users can access Microsoft Office 365 and other SAML 2.0 cloud applications by using their AD credentials.
Use case 5: Extend your on-premises AD to the AWS Cloud
If you already have an AD infrastructure and want to use it when migrating AD-aware workloads to the AWS Cloud, AWS Microsoft AD (Standard Edition) can help. You can use AD trusts to connect AWS Microsoft AD (Standard Edition) to your existing AD. This means your users can access AD-aware and AWS applications with their on-premises AD credentials, without needing you to synchronize users, groups, or passwords.
For example, your users can sign in to the AWS Management Console and Amazon WorkSpaces by using their existing AD user names and passwords. Also, when you use AD-aware applications such as SharePoint with AWS Microsoft AD (Standard Edition), your logged-in Windows users can access these applications without needing to enter credentials again.
3. What are the key features?
AWS Microsoft AD (Standard Edition) includes the features detailed in this section.
Extend your AD schema
With AWS Microsoft AD, you can run customized AD-integrated applications that require changes to your directory schema, which defines the structures of your directory. The schema is composed of object classes such as user objects, which contain attributes such as user names. AWS Microsoft AD lets you extend the schema by adding new AD attributes or object classes that are not present in the core AD attributes and classes.
For example, if you have a human resources application that uses employee badge color to assign specific benefits, you can extend the schema to include a badge color attribute in the user object class of your directory. To learn more, see How to Move More Custom Applications to the AWS Cloud with AWS Directory Service.
Create user-specific password policies
With user-specific password policies, you can apply specific restrictions and account lockout policies to different types of users in your AWS Microsoft AD (Standard Edition) domain. For example, you can enforce strong passwords and frequent password change policies for administrators, and use less-restrictive policies with moderate account lockout policies for general users.
Add domain controllers
You can increase the performance and redundancy of your directory by adding domain controllers. This can help improve application performance by enabling directory clients to load-balance their requests across a larger number of domain controllers.
Encrypt directory traffic
You can use AWS Microsoft AD (Standard Edition) to encrypt Lightweight Directory Access Protocol (LDAP) communication between your applications and your directory. By enabling LDAP over Secure Sockets Layer (SSL)/Transport Layer Security (TLS), also called LDAPS, you encrypt your LDAP communications end to end. This helps you to protect sensitive information you keep in your directory when it is accessed over untrusted networks.
Improve the security of signing in to AWS services by using multi-factor authentication (MFA)
You can improve the security of signing in to AWS services, such as Amazon WorkSpaces and Amazon QuickSight, by enabling MFA in your AWS Microsoft AD (Standard Edition) directory. With MFA, your users must enter a one-time passcode (OTP) in addition to their AD user names and passwords to access AWS applications and services you enable in AWS Microsoft AD (Standard Edition).
In this blog post, I explained what AWS Microsoft AD (Standard Edition) is and how you can use it. With a single directory, you can address many use cases for your business, making it easier to migrate and run your AD-aware workloads in the AWS Cloud, provide access to AWS applications and services, and connect to other cloud applications. To learn more about AWS Microsoft AD, see the Directory Service home page.
If you have comments about this post, submit them in the “Comments” section below. If you have questions about this blog post, start a new thread on the Directory Service forum.
Getting back up to speed, finishing getting my computer back how it was, etc. Also we got a SNES Classic and Stardew Valley so, those have been things. But between all that, I somehow found time to do a microscopic amount of actual work!
art: Sketched some stuff! It wasn’t very good. Need to do this more often.
fox flux: Finally, after a great many attempts, I drew a pixel art bush I’m fairly happy with. And yet, I can already see ways to improve it! But hey I’m learning stuff and that’s really cool. I’ve been working on a much larger pixel art forest background, too, which is proving a little harder to figure out.
You can now enable your users to access Microsoft Office 365 with credentials that you manage in AWS Directory Service for Microsoft Active Directory, also known as AWS Microsoft AD. You can accomplish this by deploying Microsoft Azure Active Directory (AD) Connect and Active Directory Federation Services for Windows Server 2016 (AD FS 2016) with AWS Microsoft AD. AWS Microsoft AD makes it possible and easy for you to build a Windows environment in the AWS Cloud, synchronize your AWS Microsoft AD users into Microsoft Azure AD, and use Office 365, all without needing to create and manage AD domain controllers. Now you can also benefit from the broad set of AWS Cloud services for compute, storage, database, and Internet of Things (IoT) while continuing to use Office 365 business productivity apps—all with a single AD domain.
Office 365 provides different options to support user authentication with identities that come from AD. One common way to do this is to use Azure AD Connect and AD FS together with your AD directory. In this model, you use Azure AD Connect to synchronize user names from AD into Azure AD so that Office 365 can use those identities. To complete this solution, you use AD FS to enable Office 365 to authenticate the identities against your AD directory. Good news: AWS Microsoft AD now supports this model!
In this blog post, we show how to use Azure AD Connect and AD FS with AWS Microsoft AD so that your employees can access Office 365 by using their AD credentials.
Join an Amazon EC2 for Windows Server instance to your AWS Microsoft AD domain; this is the instance you will use as your ADSync server. We show you how to install Azure AD Connect on this instance later.
Using Active Directory Users and Computers on your Management instance, create a standard user named ADFSSVC in your AWS Microsoft AD directory. AD FS uses this user account later.
Note: When performing Steps 3 and 6 in this “Prerequisites” section, you must use RDP and sign in with the AWS Microsoft AD admin account, using the password you specified when you created your AWS Microsoft AD directory.
The following diagram illustrates the environment you must have in place to implement the solution in this blog post (the numbers in the diagram correspond to Steps 1–8 earlier in this section). We build on this configuration to install and configure Azure AD Connect and AD FS with Azure AD and Office 365.
Note: In this blog post, we use separate Microsoft Windows Server instances on which to run AD FS and Azure AD Connect. You can choose to combine these on a single server, as long as you use Windows Server 2016. Though it is technically possible to use an on-premises server as the AD FS and Azure AD Connect host, such a configuration is counter to the idea of a Windows environment completely in the cloud. It also requires configuration of firewall ports and AWS security groups, which is beyond the scope of this blog post.
Configuration background
When you create an AWS Microsoft AD directory, AWS exclusively retains the enterprise administrator account of the forest and domain administrator account for the root domain to deliver the directory as a managed service. When you set up your directory, AWS creates an organizational unit (OU) in the directory and delegates administrative privileges for the OU to your admin account. Within this OU, you administer users, groups, computers, Group Policy objects, other devices, and additional OUs as needed. You perform these actions using standard AD administration tools from a computer that is joined to an AWS Microsoft AD domain. Typically, the administration computer is an EC2 instance that you access using RDP, by logging in with your admin account credentials. From your admin account, you can also delegate permissions to other users or groups you create within your OU.
To use Office 365 with AD identities, you use Azure AD Connect to synchronize the AD identities into Azure AD. There are two commonly supported ways to use Azure AD Connect to support Office 365 use. In one model, you synchronize user names only, and you use AD FS to federate authentication from Office 365 to your AD. In the second model, you synchronize user names and passwords from your AD directory to Azure AD, and you do not have to use AD FS. The model supported by AWS Microsoft AD is the first model: synchronize user names only and use AD FS to authenticate from Office 365 to your AWS Microsoft AD. The AD FS model also enables authentication with SaaS applications that support federated authentication (this topic is beyond the scope of this blog post).
Note: Azure AD Connect now has a pass-through authentication model. Because it was in preview at the time of writing this blog post, that authentication model is beyond the scope of this post.
In a default AD FS installation, AD FS uses two containers that require special AD permissions that your AWS Microsoft AD administrative account does not have. To address this, you will create two nested containers in your OU for AD FS to use. When you install AD FS, you tell AD FS where to find the containers through a Windows PowerShell parameter.
As described previously, we will now show you how to use Azure AD Connect and AD FS with AWS Microsoft AD with Azure AD and Office 365 in five steps, as illustrated in the following diagram.
Add two containers to AWS Microsoft AD for use by AD FS.
Install AD FS.
Integrate AD FS with Azure AD.
Synchronize users from AWS Microsoft AD to Azure AD with Azure AD Connect.
Sign in to Office 365 by using your Microsoft AD identities.
Step 1: Add two containers to AWS Microsoft AD for use by AD FS
The following steps show how to create the AD containers required by AD FS in your AWS Microsoft AD directory.
From the Management instance:
Generate a random globally unique identifier (GUID) using the following Windows PowerShell command.
(New-Guid).Guid
Make a note of the GUID output because it will be required later on. In this case, the GUID is 67734c62-0805-4274-b72b-f7171110cd56.
Create a container named ADFS in your OU. The OU is located in the domain root and it has the same name as the NetBIOS name you specified when you created your AWS Microsoft AD directory. In this example, our OU name is AWS, and our domain is DC=awsexample,DC=com. You create the container by running the following Windows PowerShell command. You must replace the names that are in bold text with the names from your AWS Microsoft AD directory.
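The command itself is not preserved here; a sketch of a command that creates such a container, using our example names, is:
New-ADObject -Name "ADFS" -Type "container" -Path "OU=AWS,DC=awsexample,DC=com"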
Create another AD container in your new ADFS container, and use the previously generated GUID as the name. Do this by running the following Windows PowerShell command. Be sure to replace the names in bold text with the names from your AWS Microsoft AD directory and your GUID. In this example, we replace GUID with 67734c62-0805-4274-b72b-f7171110cd56. The other bold items shown match the names in our example AWS Microsoft AD directory.
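Again as a sketch with our example names and GUID:
New-ADObject -Name "67734c62-0805-4274-b72b-f7171110cd56" -Type "container" -Path "CN=ADFS,OU=AWS,DC=awsexample,DC=com"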
To verify that you successfully created the ADFS and GUID containers, open Active Directory Users and Computers and navigate to the containers you created. Your root domain, OU name, and GUID name should match your AWS Microsoft AD configuration.
Note: If you do not see the ADFS and GUID containers, turn on Advanced Features by choosing View in the Active Directory Users and Computers tool, and then choosing Advanced Features.
Step 2: Install AD FS
In this section, we show how to install AD FS by using Windows PowerShell commands. First, though, select a federation service name for your AD FS server. You can create your federation service name by adding a short name (for example, sts) followed by your domain name (for example, awsexample.com). In this example, we use sts.awsexample.com as the federation service name.
Using your AWS Microsoft AD admin account, open an RDP session to your ADFS instance, run Windows PowerShell as a local administrator, and complete the following steps:
Install the Windows feature, AD FS, by running the following Windows PowerShell command. This command only adds the components needed to install your ADFS server later.
Install-WindowsFeature ADFS-Federation
Now that you have installed AD FS, you must obtain a certificate for use when you configure your ADFS server. The AD FS certificate plays an important role in securing communication between the ADFS server and clients, and in ensuring that tokens issued by the ADFS server are secure. AWS recommends that you use a certificate from a trusted Certificate Authority (CA).
In our example, we use the SSL certificate, sts.awsexample.com. It is important to note that the common name and subject alternative name (SAN) must include the federation service name we plan to use for the AD FS server. In our example, the name is sts.awsexample.com.
Open the Microsoft Management Console (mmc.exe). Choose File, choose Add/Remove Snap-in, and then choose Add.
In the Add Standalone Snap-in window, choose Certificates and then choose Add.
For the Certificates snap-in, choose Computer account and then choose Next.
Choose Finish, and then choose OK to load the Certificates snap-in.
Expand Certificates (Local Computer).
Right-click Personal, choose All Tasks, and then choose Import.
On the Certificate Import Wizard, choose Next.
Choose Browse to locate and select your certificate that has been given by your CA. Choose Next.
Ensure Certificate store is set to Personal, and choose Next.
Choose Finish and OK to complete the installation of the certificate on the AD FS server.
Next, you need to retrieve the Thumbprint value of the newly installed certificate and save it for use when you configure your ADFS server. Complete the remaining steps:
In the Certificates console window, expand Personal, and choose Certificates.
Right-click the certificate, and then choose Open.
Choose the Details tab to locate the Thumbprint value.
Note: In this case, we will copy our certificate Thumbprint, d096652327cfa18487723ff61040c85c7f57f701, and save it in Windows Notepad.
Open an RDP session to your ADFS server by using the admin account for your AWS Microsoft AD directory. Install AD FS by running the following Windows PowerShell command. You must replace the bold strings in the command with the GUID you created in Step 1 and the names from your AWS Microsoft AD directory.
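The command is not preserved here. Because it must reference the GUID container from Step 1, it is plausibly the definition of the AD FS configuration (DKM) container location that the Install-AdfsFarm command consumes later, along these lines:
$adminConfig = @{"DKMContainerDn"="CN=67734c62-0805-4274-b72b-f7171110cd56,CN=ADFS,OU=AWS,DC=awsexample,DC=com"}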
Enter the AD FS standard user account credentials for the ADFSSVC user and save them in the script variable $svcCred by running the following Windows PowerShell command.
$svcCred = (get-credential)
Enter the Microsoft AD administrator credentials of the Admin user and save them in the script variable $localAdminCred by running the following Windows PowerShell command.
$localAdminCred = (get-credential)
Install the AD FS server by running the following Windows PowerShell command. You must replace the bold items with the Thumbprint ID from your certificate and with the federation service name you chose earlier. For our example, the federation service name is sts.awsexample.com, and we copy our certificate Thumbprint, d096652327cfa18487723ff61040c85c7f57f701, from where we saved it in Windows Notepad.
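A sketch of that command, assuming the GUID container from Step 1 lives at CN=ADFS,OU=awsexample,DC=awsexample,DC=com (substitute your own thumbprint, federation service name, and distinguished names):
# Install the AD FS farm, storing its certificate-sharing (DKM) data in the GUID container from Step 1
Install-AdfsFarm -CertificateThumbprint "d096652327cfa18487723ff61040c85c7f57f701" -FederationServiceName "sts.awsexample.com" -ServiceAccountCredential $svcCred -Credential $localAdminCred -OverwriteConfiguration -AdminConfiguration @{"DKMContainerDn"="CN=67734c62-0805-4274-b72b-f7171110cd56,CN=ADFS,OU=awsexample,DC=awsexample,DC=com"}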
Note: Be sure to remove any empty spaces in the certificate Thumbprint value.
Create a DNS A record for use with AD FS. This record resolves the federation service name to the public IP address you assign to your ADFS instance. You must create the DNS A record at the DNS hosting provider that hosts your domain. In the following example, sts.awsexample.com is the federation service name and 54.x.x.x is the public IP address of our AD FS instance.
Hostname: sts.awsexample.com
Record Type: A
IP Address: 54.x.x.x
Enable the AD FS sign-in page by running the following Windows PowerShell command.
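A sketch of that command (on AD FS for Windows Server 2016, the sign-in page is disabled by default):
# Enable the IdP-initiated sign-in page
Set-AdfsProperties -EnableIdpInitiatedSignonPage $true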
To verify that the AD FS sign-in page works, open a browser on the AD FS instance, and sign in on the AD FS sign-in page (https://<my federation service name>/adfs/ls/IdpInitiatedSignOn.aspx) by using your AWS Microsoft AD admin account. In our example, the federation service name (<my federation service name> in the sign-in page URL) is sts.awsexample.com.
Step 3: Integrate AD FS with Azure AD
The following steps show you how to connect AD FS with Office 365 by connecting to Azure AD with Windows PowerShell and federating the custom domain. From the ADFS instance, make sure you run Windows PowerShell as a local administrator and complete the following steps:
Connect to Azure AD using Windows PowerShell. Federate the custom domain you added and verified in Azure AD by running the following two Windows PowerShell commands. You must update the items in bold text with the names from your AWS Microsoft AD directory. For our example, our AD FS instance’s Fully Qualified Domain Name (FQDN) is adfsserver.awsexample.com, and our domain name is awsexample.com.
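A sketch of those commands, assuming the MSOnline (Azure AD for Windows PowerShell) module is installed on the instance:
# Sign in to Azure AD with your Office 365 global administrator credentials
Connect-MsolService
# Convert the verified custom domain to federated authentication; run this from the AD FS server
Convert-MsolDomainToFederated -DomainName awsexample.com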
Step 4: Synchronize users from AWS Microsoft AD to Azure AD with Azure AD Connect
The following steps show you how to install and customize Azure AD Connect to synchronize your AWS Microsoft AD identities to Azure AD for use with Office 365. Open an RDP session to your ADSync instance by using your AWS Microsoft AD admin user account:
On the Welcome page of the Azure AD Connect Wizard, accept the license terms and privacy notice, and then choose Continue.
On the Express Settings page, choose Customize.
On the Install required components page, choose Install.
On the User sign-in page, choose Do not configure and then choose Next.
On the Connect to Azure AD page, enter your Office 365 global administrator account credentials and then choose Next.
On the Connect your directories page, choose Active Directory as the Directory Type, and then choose your Microsoft AD Forest as your Forest. Choose Add Directory.
At the prompt, enter your AWS Microsoft AD admin account credentials, and then choose OK.
Now that you have added the AWS Microsoft AD directory, choose Next.
On the Azure AD sign-in configuration page, choose Next.
Note: AWS recommends the userPrincipalName (UPN) attribute for use by AWS Microsoft AD users when they sign in to Azure AD and Office 365. The UPN attribute format combines the user’s login name and the UPN-suffix of an AWS Microsoft AD user. The UPN suffix is the domain name of your AWS Microsoft AD domain and the same domain name you added and verified with Azure AD.
In the following example from the Active Directory Users and Computers tool, the user’s UPN is awsuser@awsexample.com, which is a combination of the user’s login name, awsuser, with the UPN-suffix, @awsexample.com.
On the Domain and OU filtering page, choose Sync selected domains and OUs, choose the Users OU under your NetBIOS OU, and then choose Next.
On the Uniquely identifying your users page, choose Next.
On the Filter users and devices page, choose Next.
On the Optional features page, choose Next.
On the Ready to configure page, choose Start the synchronization process when configuration completes, and then choose Install.
The Azure AD Connect installation has now completed. Choose Exit.
Note: By default, the Azure AD Connect sync scheduler runs every 30 minutes to synchronize your AWS Microsoft AD identities to Azure AD. You can tune the scheduler by opening a Windows PowerShell session as an administrator and running the appropriate Windows PowerShell commands. For more information, go to Azure AD Connect Sync Scheduler.
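For example, this sketch inspects and shortens the interval, assuming the ADSync module that ships with Azure AD Connect:
# Show the current sync schedule
Get-ADSyncScheduler
# Shorten the sync interval to 15 minutes
Set-ADSyncScheduler -CustomizedSyncCycleInterval 00:15:00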
Tip: Do you need to synchronize a change immediately? You can manually start a sync cycle outside the scheduled sync cycle from the Azure AD Connect sync instance. Open a Windows PowerShell session as an administrator and run the following Windows PowerShell commands.
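# Trigger a delta sync cycle outside the regular schedule
Start-ADSyncSyncCycle -PolicyType Delta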
Step 5: Sign in to Office 365 by using your AWS Microsoft AD identities
The following steps show you how to sign in to Office 365 using AD FS as the authentication method with your AWS Microsoft AD user account. In this example, we assign a license to the AWS Microsoft AD user account, awsuser@awsexample.com, in the Office 365 admin center. We then sign in to Office 365 by using the AWS Microsoft AD user account UPN, awsuser@awsexample.com.
Using a computer on the internet, open a browser and complete the following steps:
Sign in with the AWS Microsoft AD user account at https://portal.office.com. When entering the UPN of the AWS Microsoft AD user account, you will be redirected to your ADFS server sign-in page to complete user authentication.
On the AD FS sign-in page, enter your UPN and the password of the AWS Microsoft AD user account.
You have successfully signed in to Office 365 using your AWS Microsoft AD user account!
Summary
In this blog post, we showed how to use Azure AD Connect and AD FS with AWS Microsoft AD so that your employees can access Office 365 using their AD credentials. Now that you have Azure AD Connect and AD FS in place, you also might want to explore how to build upon this infrastructure to add sign-in for other Software as a Service (SaaS) applications that are compatible with AD FS. For example, this blog post explains how you can provide your users single sign-on access to Amazon AppStream by using AD FS.
Want to provide users with single sign-on access to AppStream 2.0 using existing enterprise credentials? Active Directory Federation Services (AD FS) 3.0 can be used to provide single sign-on for Amazon AppStream 2.0 using SAML 2.0.
You can use your existing Active Directory or any SAML 2.0–compliant identity service to set up single sign-on access of AppStream 2.0 applications for your users. Identity federation using SAML 2.0 is currently available in all AppStream 2.0 regions.
This post explains how to configure federated identities for AppStream 2.0 using AD FS 3.0.
Walkthrough
After setting up SAML 2.0 federation for AppStream 2.0, users can browse to a specially crafted (AD FS RelayState) URL and be taken directly to their AppStream 2.0 applications.
When users sign in with this URL, they are authenticated against Active Directory. After they are authenticated, the browser receives a SAML assertion as an authentication response from AD FS, which is then posted by the browser to the AWS sign-in SAML endpoint. Temporary security credentials are issued after the assertion and the embedded attributes are validated. The temporary credentials are then used to create the sign-in URL. The user is redirected to the AppStream 2.0 streaming session. The following diagram shows the process.
The user browses to https://applications.exampleco.com. The sign-on page requests authentication for the user.
The federation service requests authentication from the organization’s identity store.
The identity store authenticates the user and returns the authentication response to the federation service.
On successful authentication, the federation service posts the SAML assertion to the user’s browser.
The user’s browser posts the SAML assertion to the AWS Sign-In SAML endpoint (https://signin.aws.amazon.com/saml). AWS Sign-In receives the SAML request, processes the request, authenticates the user, and forwards the authentication token to the AppStream 2.0 service.
Using the authentication token from AWS, AppStream 2.0 authorizes the user and presents applications to the browser.
In this post, I use domain.local as the name of the Active Directory domain. Here are the steps in this walkthrough:
Configure AppStream 2.0 identity federation.
Configure the relying party trust.
Create claim rules.
Enable RelayState and forms authentication.
Create the AppStream 2.0 RelayState URL and access the stack.
Test the configuration.
Prerequisites
This walkthrough assumes that you have the following prerequisites:
An instance joined to a domain with the “Active Directory Federation Services” role installed and post-deployment configuration completed
Familiarity with AppStream 2.0 resources
Configure AppStream 2.0 identity federation
First, create an AppStream 2.0 stack, as you reference the stack in upcoming steps. Name the stack ExampleStack. For this walkthrough, it doesn’t matter which underlying fleet you associate with the stack. You can create a fleet using one of the example Amazon-AppStream2-Sample-Image images available, or associate an existing fleet to the stack.
Get the AD FS metadata file
The first thing you need is the metadata file from your AD FS server. The metadata file is a signed document that is used later in this guide to establish the relying party trust. Don’t edit or reformat this file.
To download and save this file, navigate to the following location, replacing <FQDN_ADFS_SERVER> with your AD FS server’s fully qualified domain name.
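https://<FQDN_ADFS_SERVER>/FederationMetadata/2007-06/FederationMetadata.xml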
In the IAM console, choose Identity providers, Create provider.
On the Configure Provider page, for Provider Type, choose SAML. For Provider Name, type ADFS01 or a similar name. Choose Choose File to upload the metadata document that you downloaded previously. Choose Next Step.
Verify the provider information and choose Create.
You need the Amazon Resource Name (ARN) of the identity provider (IdP) to configure claims rules later in this walkthrough. To get this, select the IdP that you just created. On the summary page, copy the value for Provider ARN. The ARN is in the following format:
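arn:aws:iam::012345678910:saml-provider/ADFS01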
Next, configure a policy with permissions to the AppStream 2.0 stack. This is the level of permissions that federated users have within AWS.
In the IAM console, choose Policies, Create Policy, Create Your Own Policy.
For Policy Name, enter a descriptive name. For Description, enter the level of permissions. For Policy Document, customize the Region-Code, AccountID (without hyphens), and case-sensitive Stack-Name values; the Resource element of the policy takes the form arn:aws:appstream:Region-Code:AccountID:stack/Stack-Name.
For Region-Code, use one of the following values, based on the region in which you are using AppStream 2.0:
us-east-1
us-west-2
eu-west-1
ap-northeast-1
Choose Create Policy. You should see a notification confirming that the policy has been created.
Create an IAM role
Here, you create a role that relates to an Active Directory group assigned to your AppStream 2.0 federated users. For this configuration, Active Directory groups and AWS roles are case-sensitive. Here you create an IAM Role named “ExampleStack” and an Active Directory group named in the format AWS-AccountNumber-RoleName, for example AWS-012345678910-ExampleStack.
In the IAM console, choose Roles, Create new role.
On the Select Role type page, choose Role for identity provider access. Choose Select next to Grant Web Single Sign-On (WebSSO) access to SAML providers.
On the Establish Trust page, make sure that the SAML provider that you just created (such as ADFS01) is selected. For Attribute and Value, keep the default values.
On the Verify Role Trust page, the Federated value matches the ARN of the IdP that you created earlier. The SAML:aud value equals https://signin.aws.amazon.com/saml, as shown below. This is prepopulated and does not require any change. Choose Next Step.
On the Attach policy page, attach the policy that you created earlier granting federated users access only to the AppStream 2.0 stack. In this walkthrough, the policy was named AppStream2_ExampleStack.
After selecting the correct policy, choose Next Step.
On the Set role name and review page, name the role ExampleStack. You can customize this naming convention, as I explain later when I create the claim rules.
You can describe the role as desired. Ensure that the trusted entities match the AD FS IdP ARN, and that the policy attached is the policy created earlier granting access only to this stack.
Choose Create Role.
Important: If you grant more than the stack permissions to federated users, you can give them access to other areas of the console as well. AWS strongly recommends that you attach policies to a role that grants access only to the resources to be shared with federated users.
For example, if you attach the AdministratorAccess policy instead of AppStream2_ExampleStack, any AppStream 2.0 federated user in the ExampleStack Active Directory group has AdministratorAccess in your AWS account. Even though AD FS routes users to the stack, users can still navigate to other areas of the console, using deep links that go directly to specific console locations.
Next, create the Active Directory group in the format AWS-AccountNumber-RoleName using the “ExampleStack” role name that you just created. You reference this Active Directory group in the AD FS claim rules later using regex. For Group scope, choose Global. For Group type, choose Security
Note: To follow this walkthrough exactly, name your Active Directory group in the format “AWS-AccountNumber-ExampleStack” replacing AccountNumber with your AWS AccountID (without hyphens). For example:
AWS-012345678910-ExampleStack
Configure the relying party trust
In this section, you configure AD FS 3.0 to communicate with the configurations made in AWS.
Open the AD FS console on your AD FS 3.0 server.
Open the context (right-click) menu for AD FS and choose Add Relying Party Trust…
On the Welcome page, choose Start. On the Select Data Source page, keep Import data about the relying party published online or on a local network checked. For Federation metadata address (host name or URL), type the following link to the SAML metadata that describes AWS as a relying party, and then choose Next:
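https://signin.aws.amazon.com/static/saml-metadata.xml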
On the Specify Display Name page, for Display name, type “AppStream 2.0 – ExampleStack” or similar value. For Notes, provide a description. Choose Next.
On the Configure Multi-factor Authentication Now? page, choose I do not want to configure multi-factor authentication settings for this relying party trust at this time. Choose Next.
Because you are controlling access to the stack by using an Active Directory group and an IAM role with an attached policy, on the Choose Issuance Authorization Rules page, check Permit all users to access this relying party. Choose Next.
On the Ready to Add Trust page, you shouldn’t need to make any changes. Choose Next.
On the Finish page, clear Open the edit Claim Rules dialog for this relying party trust when the wizard closes. You open this later.
Next, you ensure that the https://signin.aws.amazon.com/saml URL is listed on the Identifiers tab within the properties of the trust. To do this, open the context (right-click) menu for the relying party trust that you just created and choose Properties.
On the Monitoring tab, clear Monitor relying party and choose Apply. On the Identifiers tab, for Relying party identifier, add https://signin.aws.amazon.com/saml and choose OK.
Create claim rules
In this section, you create four AD FS claim rules, which identify accounts, set LDAP attributes, get the Active Directory groups, and match them to the role created earlier.
In the AD FS console, expand Trust Relationships, choose Relying Party Trusts, and then select the relying party trust that you just created (in this case, the display name is AppStream 2.0 – ExampleStack). Open the context (right-click) menu for the relying party trust and choose Edit Claim Rules. Choose Add Rule.
Rule 1: Name ID
This claim rule tells AD FS the type of expected incoming claim and how to send the claim to AWS. AD FS receives the UPN and tags it as the Name ID when it’s forwarded to AWS. This rule interacts with the third rule, which fetches the user groups.
Claim rule template: Transform an Incoming Claim
Configure Claim Rule values:
Claim Rule Name: Name ID
Incoming Claim Type: UPN
Outgoing Claim Type: Name ID
Outgoing name ID format: Persistent Identifier
Pass through all claim values: selected
Rule 2: RoleSessionName
This rule sets a unique identifier for the user. In this case, it uses the E-Mail-Addresses value.
Claim rule template: Send LDAP Attributes as Claims
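Configure Claim Rule values (a sketch, using the values from the standard AWS federation guides):
Claim Rule Name: RoleSessionName
Attribute store: Active Directory
LDAP Attribute: E-Mail-Addresses
Outgoing Claim Type: https://aws.amazon.com/SAML/Attributes/RoleSessionName
Rule 3: Get Active Directory Groups
This rule retrieves the user’s Active Directory group memberships and stores them in the temporary claim type http://temp/variable, which the next rule matches and transforms. A common form of this rule, as used in the standard AWS federation guides:
Claim rule template: Send Claims Using a Custom Rule
Configure Claim Rule values:
Claim Rule Name: Get Active Directory Groups
Custom Rule:
c:[Type == "http://schemas.microsoft.com/ws/2008/06/identity/claims/windowsaccountname", Issuer == "AD AUTHORITY"]
=> add(store = "Active Directory", types = ("http://temp/variable"), query = ";tokenGroups;{0}", param = c.Value);
Rule 4: Roles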
This rule converts the value of the Active Directory group starting with AWS-AccountNumber prefix to the roles known by AWS. For this rule, you need the AWS IdP ARN that you noted earlier. If your IdP in AWS was named ADFS01 and the AccountID was 012345678910, the ARN would look like the following:
arn:aws:iam::012345678910:saml-provider/ADFS01
Claim rule template: Send Claims Using a Custom Rule
Configure Claim Rule values:
Claim Rule Name: Roles
Custom Rule:
c:[Type == "http://temp/variable", Value =~ "(?i)^AWS-"]
=> issue(Type = "https://aws.amazon.com/SAML/Attributes/Role", Value = RegExReplace(c.Value, "AWS-012345678910-", "arn:aws:iam::012345678910:saml-provider/ADFS01,arn:aws:iam::012345678910:role/"));
Change arn:aws:iam::012345678910:saml-provider/ADFS01 to the ARN of your AWS IdP
Change 012345678910 to the ID (without hyphens) of the AWS account.
In this walkthrough, the “(?i)^AWS-” pattern matches the Active Directory groups that start with the AWS- prefix; the rule then strips AWS-012345678910-, leaving ExampleStack as the remainder of the Active Directory group name to match the ExampleStack IAM role. To customize the role naming convention, for example to name the IAM role ADFS-ExampleStack, add ADFS- to the role ARN at the end of the rule: arn:aws:iam::012345678910:role/ADFS-.
You should now have four claim rules created:
NameID
RoleSessionName
Get Active Directory Groups
Role
Enable RelayState and forms authentication
By default, AD FS 3.0 doesn’t have RelayState enabled. AppStream 2.0 uses RelayState to direct users to your AppStream 2.0 stack.
On your AD FS server, open the following file with elevated (administrator) permissions: %systemroot%\ADFS\Microsoft.IdentityServer.Servicehost.exe.config
In the Microsoft.IdentityServer.Servicehost.exe.config file, find the section <microsoft.identityServer.web>. Within this section, add the following line:
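<useRelayStateForIdpInitiatedSignOn enabled="true" />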
In the AD FS console, verify that forms authentication is enabled. Choose Authentication Policies. Under Primary Authentication, for Global Settings, choose Edit.
For Extranet, choose Forms Authentication. For Intranet, do the same and choose OK.
On the AD FS server, from an elevated (administrator) command prompt, run the following commands sequentially to stop, then start the AD FS service to register the changes:
net stop adfssrv
net start adfssrv
Create the AppStream 2.0 RelayState URL and access the stack
Now that RelayState is enabled, you can generate the URL.
I have created an Excel spreadsheet for RelayState URL generation, available as RelayGenerator.xlsx. This spreadsheet only requires the fully qualified domain name for your AD FS server, account ID (without hyphens), stack name (case-sensitive), and the AppStream 2.0 region. After all the inputs are entered, the spreadsheet generates a URL in the blue box, as shown in the screenshot below. Copy the entire contents of the blue box to retrieve the generated RelayState URL for AD FS.
Alternatively, if you do not have Excel, there are third-party tools for RelayState URL generation. However, they do require some customization to work with AppStream 2.0. Example customization steps for one such tool are provided below.
CodePlex has an AD FS RelayState generator, which downloads an HTML file locally that you can use to create the RelayState URL. The generator says it’s for AD FS 2.0; however, it also works for AD FS 3.0. You can generate the RelayState URL manually but if the syntax or case sensitivity is incorrect even slightly, it won’t work. I recommend using the tool to ensure a valid URL.
When you open the URL generator, clear out the default text fields. You see a tool that looks like the following:
To generate the values, you need three pieces of information:
IDP URL String
Relying Party Identifier
Relay State / Target App
IDP URL String
The IDP URL string is the URL you use to reach your AD FS sign-on page. For example, with a hypothetical host name for the domain.local walkthrough: https://adfs.domain.local/adfs/ls/idpinitiatedsignon.aspx
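Relying Party Identifier
This is the identifier listed on the Identifiers tab of the relying party trust: https://signin.aws.amazon.com/saml.
Relay State / Target App
This is the AppStream 2.0 target, in the format https://relay-state-region-endpoint?stack=stackname&accountId=aws-account-id-without-hyphens. For us-east-1, the relay state endpoint is appstream2.us-east-1.aws.amazon.com/saml.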
Ultimately, the URL looks like the following example, which is for us-east-1, with a stack name of ExampleStack, and an account ID of 012345678910. The stack name is case-sensitive.
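A sketch of its shape, assuming an AD FS host of adfs.domain.local (the percent-encoding shown is the double encoding the generator tool produces for you):
https://adfs.domain.local/adfs/ls/idpinitiatedsignon.aspx?RelayState=RPID%3Dhttps%253A%252F%252Fsignin.aws.amazon.com%252Fsaml%26RelayState%3Dhttps%253A%252F%252Fappstream2.us-east-1.aws.amazon.com%252Fsaml%253Fstack%253DExampleStack%2526accountId%253D012345678910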
The generated RelayState URL can now be saved and used by users to log in directly from anywhere that can reach the AD FS server, using their existing domain credentials. After they are authenticated, users are directed seamlessly to the AppStream 2.0 stack.
Test the configuration
Create a new AD user in domain.local named Test User, with the user name TUser and an email address. An email address is required based on the claim rules.
Next, add TUser to the AWS-012345678910-ExampleStack AD group that you created for the ExampleStack stack.
Next, navigate to the RelayState URL and log in with domain\TUser.
After you log in, you are directed to the streaming session for the ExampleStack stack. As an administrator, you can disassociate and associate different fleets of applications to this stack, without impacting federation, and deliver different applications to this group of federated users.
Because the policy attached to the role only allows access to this AppStream 2.0 stack, if a federated user were to try to access another section of the console, such as Amazon EC2, they would discover that they are not authorized to see (describe) any resources or perform any actions, as shown in the screenshot below. This is why it’s important to grant access only to the AppStream 2.0 stack.
Configurations for AD FS 4.0
If you are using AD FS 4.0, there are a few differences from the procedures discussed earlier.
Do not customize the Microsoft.IdentityServer.Servicehost.exe.config file as described in the Enable RelayState and forms authentication section of the AD FS 3.0 procedure.
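Instead, enable RelayState through the AD FS properties. A sketch, assuming the Windows Server 2016 AD FS cmdlets:
# Enable RelayState for IdP-initiated sign-on (AD FS 4.0)
Set-AdfsProperties -EnableRelayStateForIdpInitiatedSignOn $true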
Enable the IdP-initiated sign-on page that is used when generating the RelayState URL. To do this, open an elevated PowerShell terminal and run the following command:
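# Enable the IdP-initiated sign-on page (disabled by default in AD FS 4.0)
Set-AdfsProperties -EnableIdpInitiatedSignonPage $true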
To register these changes with AD FS, restart the AD FS service from an elevated PowerShell terminal (or command prompt):
net stop adfssrv
net start adfssrv
After these changes are made, AD FS 4.0 should now work for AppStream 2.0 identity federation.
Troubleshooting
If you are still encountering errors with your setup, below are common error messages you may see and the configuration areas that I recommend you check.
Invalid policy
Unable to authorize the session. (Error Code: INVALID_AUTH_POLICY);Status Code:401
This error message can occur when the IAM policy does not permit access to the AppStream 2.0 stack. However, it can also occur when the stack name is not entered into the policy or RelayState URL using case-sensitive characters. For example, if your stack name is “ExampleStack” in AppStream 2.0 and the policy has “examplestack” or if the Relay State URL has “examplestack” or any capitalization pattern other than the exact stack name, you see this error message.
Invalid relay state
Error: Bad Request.(Error Code: INVALID_RELAY_STATE);Status Code:400
If you receive this error message, there is likely another issue in the Relay State URL. It could be related to case sensitivity (other than the stack name). Check each component against the expected format: https://relay-state-region-endpoint?stack=stackname&accountId=aws-account-id-without-hyphens.
Unable to authorize the session. Cross account access is not allowed. (Error Code: CROSS_ACCOUNT_ACCESS_NOT_ALLOWED);Status Code:401
If you see this error message, check to make sure that the AccountId number is correct in the Relay State URL.
Summary
In this post, you walked through enabling AD FS 3.0 for AppStream 2.0 identity federation. You should now be able to configure AD FS 3.0 or 4.0 for AppStream 2.0 identity federation. If you have questions or suggestions, please comment below.
Nothing too special about this week; it went a little slow, but that’s been nice after the mad panic I was in at the end of July.
cc: I’m getting the hang of Unity and forming an uneasy truce with C#. Mostly did refactoring of some existing actor code, trying to move all the reading of controls to a single place so the rest of it can be reused for non-players.
fox flux: I put some work into a new forest background, which is already just… hilariously better than the one from the original game. Complex textures like leaves are one of my serious weak points, but this is forcing me to do it anyway and I’m slowly learning.
blog: I finished that post on Pokémon datamining, which ended up extraordinarily long and slightly late.
veekun: Dug into some missing stuff regarding items.
art: Spent a day or two doodling.
Still behind by one blog post (oops), and slacked on veekun a bit, but I’ve still got momentum.
You can now provision and manage resources for Amazon Kinesis Analytics applications using AWS CloudFormation. Kinesis Analytics is the easiest way to process streaming data in real time with standard SQL, without having to learn new programming languages or processing frameworks. Kinesis Analytics enables you to query streaming data or build entire streaming applications using SQL. Using the service, you gain actionable insights and can respond to your business and customer needs promptly.
Customers can create CloudFormation templates that easily create or update Kinesis Analytics applications. Typically, a template is used as a way to manage code across different environments, or to prototype a new streaming data solution quickly.
We have created two sample templates using past AWS Big Data Blog posts that referenced Kinesis Analytics.
Implement Serverless Log Analytics Using Amazon Kinesis Analytics – This post shows how to analyze Apache Logs using Kinesis Analytics and publish aggregated data to Amazon CloudWatch. Logs are ingested using the Kinesis agent and analyzed in near real time using Kinesis Analytics by joining static data from Amazon S3. In turn, they are published using AWS Lambda. CloudFormation template: aws-blog-serverless-log-analytics.
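If you script your deployments, you can launch a stack from one of these templates with a single call. A sketch using the AWS Tools for PowerShell (the template URL is a placeholder for wherever you stage the template):
# Create a CloudFormation stack from the sample template
New-CFNStack -StackName "kinesis-analytics-log-demo" -TemplateURL "https://s3.amazonaws.com/your-bucket/aws-blog-serverless-log-analytics.template" -Capability CAPABILITY_IAM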
Nowadays, streaming data is seen and used everywhere—from social networks, to mobile and web applications, IoT devices, instrumentation in data centers, and many other sources. As the speed and volume of this type of data increases, the need to perform data analysis in real time with machine learning algorithms and extract a deeper understanding from the data becomes ever more important. For example, you might want a continuous monitoring system to detect sentiment changes in a social media feed so that you can react to the sentiment in near real time.
In this post, we use Amazon Kinesis Streams to collect and store streaming data. We then use Amazon Kinesis Analytics to process and analyze the streaming data continuously. Specifically, we use the Kinesis Analytics built-in RANDOM_CUT_FOREST function, a machine learning algorithm, to detect anomalies in the streaming data. Finally, we use Amazon Kinesis Firehose to export the anomalies data to Amazon Elasticsearch Service (Amazon ES). We then build a simple dashboard in the open source tool Kibana to visualize the result.
Solution overview
The following diagram depicts a high-level overview of this solution.
Amazon Kinesis Streams
You can use Amazon Kinesis Streams to build your own streaming application. This application can process and analyze streaming data by continuously capturing and storing terabytes of data per hour from hundreds of thousands of sources.
Amazon Kinesis Analytics
Kinesis Analytics provides an easy and familiar standard SQL language to analyze streaming data in real time. One of its most powerful features is that there are no new languages, processing frameworks, or complex machine learning algorithms that you need to learn.
Amazon Kinesis Firehose
Kinesis Firehose is the easiest way to load streaming data into AWS. It can capture, transform, and load streaming data into Amazon S3, Amazon Redshift, and Amazon Elasticsearch Service.
Amazon Elasticsearch Service
Amazon ES is a fully managed service that makes it easy to deploy, operate, and scale Elasticsearch for log analytics, full text search, application monitoring, and more.
Solution summary
The following is a quick walkthrough of the solution that’s presented in the diagram:
IoT sensors send streaming data into Kinesis Streams. In this post, you use a Python script to simulate an IoT temperature sensor device that sends the streaming data.
By using the built-in RANDOM_CUT_FOREST function in Kinesis Analytics, you can detect anomalies in real time with the sensor data that is stored in Kinesis Streams. RANDOM_CUT_FOREST is also an appropriate algorithm for many other kinds of anomaly-detection use cases—for example, the media sentiment example mentioned earlier in this post.
The processed anomaly data is then loaded into the Kinesis Firehose delivery stream.
By using the built-in integration that Kinesis Firehose has with Amazon ES, you can easily export the processed anomaly data into the service and visualize it with Kibana.
Implementation steps
The following sections walk through the implementation steps in detail.
Creating the streams and the Amazon ES domain
Open the Amazon Kinesis Streams console.
Create a new Kinesis stream. Give it a name that indicates it’s for raw incoming stream data—for example, RawStreamData. For Number of shards, type 1.
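If you prefer to script this step, a sketch using the AWS Tools for PowerShell (the stream name matches the example above):
# Create a Kinesis stream with a single shard
New-KINStream -StreamName "RawStreamData" -ShardCount 1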
The Python code provided below simulates a streaming application, such as an IoT device, and generates random data and anomalies into a Kinesis stream. The code generates two temperature ranges, where the first range is the hypothetical sensor’s normal operating temperature range (10–20), and the second is the anomaly temperature range (100–120). Make sure to change the stream name on lines 16 and 20 and the Region on line 6 to match your configuration. Alternatively, you can download the Amazon Kinesis Data Generator from this repository and use it to generate the data.
import json
import datetime
import random
import testdata
from boto import kinesis
kinesis = kinesis.connect_to_region("us-east-1")
def getData(iotName, lowVal, highVal):
    data = {}
    data["iotName"] = iotName
    data["iotValue"] = random.randint(lowVal, highVal)
    return data
while 1:
    rnd = random.random()
    if (rnd < 0.01):
        data = json.dumps(getData("DemoSensor", 100, 120))
        kinesis.put_record("RawStreamData", data, "DemoSensor")
        print '***************************** anomaly ************************* ' + data
    else:
        data = json.dumps(getData("DemoSensor", 10, 20))
        kinesis.put_record("RawStreamData", data, "DemoSensor")
        print data
Open the Amazon Elasticsearch Service console and create a new domain.
Give the domain a unique name. In the Configure cluster screen, use the default settings.
In the Set up access policy screen, in the Set the domain access policy list, choose Allow access to the domain from specific IP(s).
Enter the public IP address of your computer. Note: If you’re working behind a proxy or firewall, see the “Use a proxy to simplify request signing” section in this AWS Database blog post to learn how to work with a proxy. For additional information about securing access to your Amazon ES domain, see How to Control Access to Your Amazon Elasticsearch Domain in the AWS Security Blog.
After the Amazon ES domain is up and running, you can set up and configure Kinesis Firehose to export results to Amazon ES:
Open the Amazon Kinesis Firehose console and choose Create Delivery Stream.
In the Destination dropdown list, choose Amazon Elasticsearch Service.
Type a stream name, and choose the Amazon ES domain that you created earlier.
Provide an index name and ES type. In the S3 bucket dropdown list, choose Create New S3 bucket. Choose Next.
In the configuration, change the Elasticsearch Buffer size to 1 MB and the Buffer interval to 60s. Use the default settings for all other fields. This shortens the time for the data to reach the ES cluster.
Under IAM Role, choose Create/Update existing IAM role. The best practice is to create a new role every time; otherwise, the console keeps adding policy documents to the same role. Eventually the size of the attached policies causes IAM to reject the role, and it fails in a non-obvious way: the console simply stops functioning.
Choose Next to move to the Review page.
Review the configuration, and then choose Create Delivery Stream.
Run the Python file for 1–2 minutes, and then press Ctrl+C to stop the execution. This loads some data into the stream for you to visualize in the next step.
Analyzing the data
Now it’s time to analyze the IoT streaming data using Amazon Kinesis Analytics.
Open the Amazon Kinesis Analytics console and create a new application. Give the application a name, and then choose Create Application.
On the next screen, choose Connect to a source. Choose the raw incoming data stream that you created earlier. (Note the stream name Source_SQL_STREAM_001 because you will need it later.)
Use the default settings for everything else. When the schema discovery process is complete, it displays a success message with the formatted stream sample in a table as shown in the following screenshot. Review the data, and then choose Save and continue.
Next, choose Go to SQL editor. When prompted, choose Yes, start application.
Copy the following SQL code and paste it into the SQL editor window.
CREATE OR REPLACE STREAM "TEMP_STREAM" (
"iotName" varchar (40),
"iotValue" integer,
"ANOMALY_SCORE" DOUBLE);
-- Creates an output stream and defines a schema
CREATE OR REPLACE STREAM "DESTINATION_SQL_STREAM" (
"iotName" varchar(40),
"iotValue" integer,
"ANOMALY_SCORE" DOUBLE,
"created" TimeStamp);
-- Compute an anomaly score for each record in the source stream
-- using Random Cut Forest
CREATE OR REPLACE PUMP "STREAM_PUMP_1" AS INSERT INTO "TEMP_STREAM"
SELECT STREAM "iotName", "iotValue", ANOMALY_SCORE FROM
TABLE(RANDOM_CUT_FOREST(
CURSOR(SELECT STREAM * FROM "SOURCE_SQL_STREAM_001")
)
);
-- Sort records by descending anomaly score, insert into output stream
CREATE OR REPLACE PUMP "OUTPUT_PUMP" AS INSERT INTO "DESTINATION_SQL_STREAM"
SELECT STREAM "iotName", "iotValue", ANOMALY_SCORE, ROWTIME FROM "TEMP_STREAM"
ORDER BY FLOOR("TEMP_STREAM".ROWTIME TO SECOND), ANOMALY_SCORE DESC;
Choose Save and run SQL. As the application runs, it displays results as stream data arrives. If you don’t see any data coming in, run the Python script again to generate some fresh data. When there is data, it appears in a grid as shown in the following screenshot.
Note that you are selecting data from the source stream name Source_SQL_STREAM_001 that you created previously. Also note the ANOMALY_SCORE column. This is the value that the RANDOM_CUT_FOREST function calculates based on the temperature ranges provided by the Python script. Higher (anomaly) temperature ranges have a higher score.
Looking at the SQL code, note that the first two blocks create two new streams to store temporary data and the final result. The third block (the STREAM_PUMP_1 pump) analyzes the raw source data using the RANDOM_CUT_FOREST function; it calculates an anomaly score (ANOMALY_SCORE) and inserts it into the TEMP_STREAM stream. The final block loads the result stored in TEMP_STREAM into DESTINATION_SQL_STREAM.
Choose Exit (done editing) next to the Save and run SQL button to return to the application configuration page.
Load processed data into the Kinesis Firehose delivery stream
Now, you can export the result from DESTINATION_SQL_STREAM into the Amazon Kinesis Firehose stream that you created previously.
On the application configuration page, choose Connect to a destination.
Choose the stream name that you created earlier, and use the default settings for everything else. Then choose Save and Continue.
On the application configuration page, choose Exit to Kinesis Analytics applications to return to the Amazon Kinesis Analytics console.
Run the Python script again for 4–5 minutes to generate enough data to flow through Amazon Kinesis Streams, Kinesis Analytics, Kinesis Firehose, and finally into the Amazon ES domain.
Open the Kinesis Firehose console, choose the stream, and then choose the Monitoring tab.
As the processed data flows into Kinesis Firehose and Amazon ES, the metrics appear on the Delivery Stream metrics page. Keep in mind that the metrics page takes a few minutes to refresh with the latest data.
Open the Amazon Elasticsearch Service dashboard in the AWS Management Console. The count in the Searchable documents column increases as shown in the following screenshot. In addition, the domain shows a cluster health of Yellow. This is because, by default, it needs two instances to deploy redundant copies of the index. To fix this, you can deploy two instances instead of one.
Visualize the data using Kibana
Now it’s time to launch Kibana and visualize the data.
Use the ES domain link to go to the cluster detail page, and then choose the Kibana link as shown in the following screenshot. If you’re working behind a proxy or firewall, see the “Use a proxy to simplify request signing” section in this blog post to learn how to work with a proxy.
In the Kibana dashboard, choose the Discover tab to perform a query.
You can also visualize the data using the different types of charts offered by Kibana. For example, by going to the Visualize tab, you can quickly create a split bar chart that aggregates by ANOMALY_SCORE per minute.
Conclusion
In this post, you learned how to use Amazon Kinesis to collect, process, and analyze real-time streaming data, and then export the results to Amazon ES for analysis and visualization with Kibana. If you have comments about this post, add them to the “Comments” section below. If you have questions or issues with implementing this solution, please open a new thread on the Amazon Kinesis or Amazon ES discussion forums.
Tristan Li is a Solutions Architect with Amazon Web Services. He works with enterprise customers in the US, helping them adopt cloud technology to build scalable and secure solutions on AWS.
Do you want to teach your little kid to love books, when all she or he wants is games?
There is now a way to have both!
Sure, there are a lot of gamebooks, but they are targeted at teenagers. Let me tell you about one that was written for children between three and nine years old.
It is the tale of Gremmy – the little gremlin who goes on a big adventure. Who will climb The Big Mountain, or maybe travel down The Deep River. Who will venture into The Enchanted Forest, unless you would rather go with it inside The Dark Cave. Who will meet magical creatures and face ingenious choices…
It is a tale you can read to your kids. Lead them through a kingdom of magic and wonder, introduce them to its inhabitants, and have them make their choices and see the funny and witty results. Nurture their curiosity and imagination, while also teaching them wise and important things.
The author – Nikola Raykov – is the youngest writer ever to win the most prestigious award for children’s literature in Bulgaria. The book has sold more copies in Bulgarian than is typical for a book by Stephen King or Paulo Coelho! It has since been published in Russian, Italian, and Latvian as well. And now you can have the English translation.
Most gamebooks have few illustrations, typically black-and-white ones. GameTale is full of excellent full-color ones, as a book for children must be. And it provides not only entertainment, but also value.
Don’t you believe it? Take a look yourself – the entire book is freely available on the author’s website, even before it is printed – to read and play, to download and enjoy, like all of its translations and the Bulgarian original. Yes, all of those sales happened while the book was available to everybody. The readers’ ability to see what they are buying has been its best advertisement.
Here is what the writer says:
“I believe it would be cruel if children weren’t able to enjoy my books because their parents could not afford them, and children’s authors should not be cruel. They should be gentle, caring and loving. The values we write about should not be just words on paper. We should be the living and breathing examples of those values, because what we write HAS to be true. Every good author will tell you that you cannot lie to your readers (or little listeners). They will catch you in a second. When you read a book, you can actually feel if the author is being honest about his or her inner self.”
“I DO believe that people are inherently good. If you have poured your heart into something, if you have tried your best, people will feel that and give you their unconditional support. There is no need to hide your work: people are not thieves! If you share, they will care, they will follow you, they will nag you about when your next book comes out, and yes, they will gladly support you because they will know that their children’s favorite author actually believes in the values he’s writing about. The same things they believe in – friendship, love and freedom!”
Nikola has started a campaign on Kickstarter. Its goal is to fund the printing of 1,000 copies of the book in English. And for your donations, you get things your kid will love!
Years ago, when I read this book, I felt like a kid. And now I envy you a little for the joy that you will get from it. Do give it a try. There is nothing to lose, and a lot to win!
How do you go about learning about yourself? Has your view of yourself changed recently? How did you handle it?
Whoof. That’s incredibly abstract and open-ended — there’s a lot I could say, but most of it is hard to turn into words.
The first example to come to mind — and the most conspicuous, at least from where I’m sitting — has been the transition from technical to creative since quitting my tech job. I think I touched on this a year ago, but it’s become all the more pronounced since then.
I quit in part because I wanted more time to work on my own projects. Two years ago, those projects included such things as: giving the Python ecosystem a better imaging library, designing an alternative to regular expressions, building a Very Correct IRC bot framework, and a few more things along similar lines. The goals were all to solve problems — not hugely important ones, but mildly inconvenient ones that I thought I could bring something novel to. Problem-solving for its own sake.
Now that I had all the time in the world to work on these things, I… didn’t. It turned out they were almost as much of a slog as my job had been!
The problem, I think, was that there was no point.
This was really weird to realize and come to terms with. I do like solving problems for its own sake; it’s interesting and educational. And most of the programming folks I know and surround myself with have that same drive and use it to create interesting tools like Twisted. So besides taking for granted that this was the kind of stuff I wanted to do, it seemed like the kind of stuff I should want to do.
But even if I create a really interesting tool, what do I have? I don’t have a thing; I have a tool that can be used to build things. If I want a thing, I have to either now build it myself — starting from nearly zero despite all the work on the tool, because it can only do so much in isolation — or convince a bunch of other people to use my tool to build things. Then they’d be depending on my tool, which means I have to maintain and support it, which is even more time and effort poured into this non-thing.
Despite frequently being drawn to think about solving abstract tooling problems, it seems I truly want to make things. This is probably why I have a lot of abandoned projects boldly described as “let’s solve X problem forever!” — I go to scratch the itch, I do just enough work that it doesn’t itch any more, and then I lose interest.
I spent a few months quietly flailing over this minor existential crisis. I’d spent years daydreaming about making tools; what did I have if not that drive? I was having to force myself to work on what I thought were my passion projects.
Meanwhile, I’d vaguely intended to do some game development, but for some reason dragged my feet forever and then took my sweet time dipping my toes in the water. I did work on a text adventure, Runed Awakening, on and off… but it was a fractal of creative decisions and I had a hard time making all of them. It might’ve been too ambitious, despite feeling small, and that might’ve discouraged me from pursuing other kinds of games earlier.
A big part of it might have been the same reason I took so long to even give art a serious try. I thought of myself as a technical person, and art is a thing for creative people, so I’m simply disqualified, right? Maybe the same thing applies to games.
Lord knows I had enough trouble when I tried. I’d orbited the Doom community for years but never released a single finished level. I did finally give it a shot again, now that I had the time. Six months into my funemployment, I wrote a three-part guide on making Doom levels. Three months after that, I finally released one of my own.
I suppose that opened the floodgates; a couple weeks later, glip and I decided to try making something for the PICO-8, and then we did that (almost exactly a year ago!). Then kept doing it.
It’s been incredibly rewarding — far more so than any “pure” tooling problem I’ve ever approached. More so than even something like veekun, which is a useful thing. People have thoughts and opinions on games. Games give people feelings, which they then tell you about. Most of the commentary on a reference website is that something is missing or incorrect.
I like doing creative work. There was never a singular moment when this dawned on me; it was a slow process over the course of a year or more. I probably should’ve had an inkling when I started drawing, half a year before I quit; even my early (and very rough) daily comics made people laugh, and I liked that a lot. Even the most well-crafted software doesn’t tend to bring joy to people, but amateur art can.
I still like doing technical work, but I prefer when it’s a means to a creative end. And, just as important, I prefer when it has a clear and constrained scope. “Make a library/tool for X” is a nebulous problem that could go in a great many directions; “make a bot that tweets Perlin noise” has a pretty definitive finish line. It was interesting to write a little physics engine, but I would’ve hated doing it if it weren’t for a game I were making and didn’t have the clear scope of “do what I need for this game”.
It feels like creative work is something I’ve been wanting to do for a long time. If this were a made-for-TV movie, I would’ve discovered this impulse one day and immediately revealed myself as a natural-born artistic genius of immense unrealized talent.
That didn’t happen. Instead I’ve found that even something as mundane as having ideas is a skill, and while it’s one I enjoy, I’ve barely ever exercised it at all. I have plenty of ideas with technical work, but I run into brick walls all the time with creative stuff.
How do I theme this area? Well, I don’t know. How do I think of something? I don’t know that either. It’s a strange paradox to have an urge to create things but not quite know what those things are.
It’s such a new and completely different kind of problem. There’s no right answer, or even an answer I can check for “correctness”. I can do anything. With no landmarks to start from, it’s easy to feel completely lost and just draw blanks.
I’ve essentially recalibrated the texture of stuff I work on, and I have to find some completely new ways to approach problems. I haven’t found them yet. I don’t think they’re anything that can be told or taught. But I’m starting to get there, and part of it is just accepting that I can’t treat these like problems with clear best solutions and clear algorithms to find those solutions.
A particularly glaring irony is that I’ve had a really tough problem designing abstract spaces, even though that’s exactly the kind of architecture I praise in Doom. It’s much trickier than it looks — a good abstract design is reminiscent of something without quite being that something.
I suppose it’s similar to a struggle I’ve had with art. I’m drawn to a cartoony style, and cartooning is also a mild form of abstraction, of whittling away details to leave only what’s most important. I’m reminded in particular of the forest background in fox flux — I was completely lost on how to make something reminiscent of a tree line. I knew enough to know that drawing trees would’ve made the background far too busy, but trees are naturally busy, so how do you represent that?
The answer glip gave me was to make big chunky leaf shapes around the edges and where light levels change. Merely overlapping those shapes implies depth well enough to convey the overall shape of the tree. The result works very well and looks very simple — yet it took a lot of effort just to get to the idea.
It reminds me of mathematical research, in a way? You know the general outcome you want, and you know the tools at your disposal, and it’s up to you to make some creative leaps. I don’t think there’s a way to directly learn how to approach that kind of problem; all you can do is look at what others have done and let it fuel your imagination.
I think I’m getting a little distracted here, but this is stuff that’s been rattling around lately.
If there’s a more personal meaning to the tree story, it’s that this is a thing I can do. I can learn it, and it makes sense to me, despite being a huge nerd.
Two and a half years ago, I never would’ve thought I’d ever make an entire game from scratch and do all the art for it. It was completely unfathomable. Maybe we can do a lot of things we don’t expect we’re capable of, if only we give them a serious shot.
And ask for help, of course. I have a hell of a time doing that. I did a painting recently that factored in mountains of glip’s advice, and on some level I feel like I didn’t quite do it myself, even though every stroke was made by my hand. Hell, I don’t even look at references nearly as much as I should. It feels like cheating, somehow? I know that’s ridiculous, but my natural impulse is to put my head down and figure it out myself. Maybe I’ve been doing that for too long with programming. Trust me, it doesn’t work quite so well in a brand new field.
I’m getting distracted again!
To answer your actual questions: how do I go about learning about myself? I don’t! It happens completely by accident. I’ll consciously examine my surface-level thoughts or behaviors or whatever, sure, but the serious fundamental revelations have all caught me completely by surprise — sometimes slowly, sometimes suddenly.
Most of them also came from listening to the people who observe me from the outside: I only started drawing in the first place because of some ridiculous deal I made with glip. At the time I thought they just wanted everyone to draw because art is their thing, but now I’m starting to suspect they’d caught on after eight years of watching me lament that I couldn’t draw.
I don’t know how I handle such discoveries, either. What is handling? I imagine someone discovering something and trying to come to grips with it, but I don’t know that I have quite that experience — my grappling usually comes earlier, when I’m still trying to figure the thing out despite not knowing that there’s a thing to find out. Once I know it, it’s on the table; I can’t un-know it or reject it meaningfully. All I can do is figure out what to do with it, and I approach that the same way I approach every other problem: by flailing at it and hoping for the best.
This isn’t quite 2000 words. Sorry. I’ve run out of things to say about me. This paragraph is very conspicuous filler. Banana. Atmosphere. Vocation.
AWS Directory Service for Microsoft Active Directory (Enterprise Edition), also known as AWS Microsoft AD, now supports Microsoft Remote Desktop Licensing Manager (RD Licensing). By using AWS Microsoft AD as the directory for your Remote Desktop Services solution, you reduce the time it takes to deploy remote desktop solutions on Amazon EC2 for Windows Server instances, and you enable your users to use remote desktops with the credentials they already know. In this blog post, I explain how to deploy RD Licensing Manager on AWS Microsoft AD to enable your users to sign in to remote desktops by using credentials stored in an AWS Microsoft AD or an on-premises Active Directory (AD) domain.
Enable your AWS Microsoft AD users to open remote desktop sessions
To use RD Licensing, you must authorize RD Licensing servers in the same Active Directory domain as the Windows Remote Desktop Session Hosts (RD Session Hosts) by adding them to the Terminal Service Licensing Server security group in AD. This new release grants your AWS Microsoft AD administrative account permissions to do this. As a result, you can now deploy RD Session Hosts in the AWS Cloud without the extra time and effort to set up and configure your own AD domain on Amazon EC2 for Windows Server.
The following diagram illustrates the steps to set up remote desktops with RD Licensing with users in AWS Microsoft AD and shows what happens when users connect to remote desktops.
In detail, here is how the process works, as it is illustrated in the preceding diagram:
Create EC2 for Windows Server instances to use as your RD Licensing servers (RDLS1 in the preceding diagram). Add the instances to the same domain to which you will join your Windows Remote Desktop Session Hosts (RD Session Hosts).
Configure your EC2 for Windows Server instances as RD Licensing servers and add them to the Terminal Service Licensing Servers security group in AWS Microsoft AD. You can connect to the instances from the AWS Management Console to configure RD Licensing. You also can use Active Directory Users and Computers to add the RD Licensing servers to the security group, thereby authorizing the instances for RD Licensing.
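For example, this sketch authorizes the RDLS1 instance from a domain-joined machine with the Active Directory module installed; the group’s default sAMAccountName in AD is Terminal Server License Servers, which may differ from how your directory displays it:
# Add the RD Licensing server's computer account to the licensing security group
Add-ADGroupMember -Identity "Terminal Server License Servers" -Members (Get-ADComputer "RDLS1")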
Create other hosts for use as RD Session Hosts (RDSH1 in the diagram). Add the hosts to the same domain as your RD Licensing servers.
A user (in this case jsmith) attempts to open an RDS session.
The RD Session Host requests an RDS CAL from the RD Licensing Server.
The RD Licensing Server returns an RDS CAL to the RD Session Host.
Because the user exists in AWS Microsoft AD, authentication happens against AWS Microsoft AD. The order of authentication relative to session creation depends on whether you configure your RD Session Host for Network Level Authentication.
Enable your users to open remote desktop sessions with their on-premises credentials
If you have an on-premises AD domain with users, your users can open remote desktop sessions with their on-premises credentials if you create a forest trust from AWS Microsoft AD to your Active Directory. The trust enables using on-premises credentials without the need for complex directory synchronization or replication. The following diagram illustrates how to configure a system using the same steps as in the previous section, except that you must create a one-way trust to your on-premises domain in Step 1a. With the trust in place, AWS Microsoft AD refers the RD Session Host to the on-premises domain for authentication.
Summary
In this post, I have explained how to authorize RD Licensing in AWS Microsoft AD to support EC2-based remote desktop sessions for AWS managed users and on-premises AD managed users. To learn more about how to use AWS Microsoft AD, see the AWS Directory Service documentation. For general information and pricing, see the AWS Directory Service home page.
If you have comments about this blog post, submit a comment in the “Comments” section below. If you have implementation or troubleshooting questions, please start a new thread on the Directory Service forum.
In case you missed any AWS Security Blog posts published so far in 2017, they are summarized and linked to below. The posts are shown in reverse chronological order (most recent first), and the subject matter ranges from protecting dynamic web applications against DDoS attacks to monitoring AWS account configuration changes and API calls to Amazon EC2 security groups.
March
March 22: How to Help Protect Dynamic Web Applications Against DDoS Attacks by Using Amazon CloudFront and Amazon Route 53. Using a content delivery network (CDN) such as Amazon CloudFront to cache and serve static text and images or downloadable objects such as media files and documents is a common strategy to improve webpage load times, reduce network bandwidth costs, lessen the load on web servers, and mitigate distributed denial of service (DDoS) attacks. AWS WAF is a web application firewall that can be deployed on CloudFront to help protect your application against DDoS attacks by giving you control over which traffic to allow or block by defining security rules. When users access your application, the Domain Name System (DNS) translates human-readable domain names (for example, www.example.com) to machine-readable IP addresses (for example, 192.0.2.44). A DNS service, such as Amazon Route 53, can effectively connect users’ requests to a CloudFront distribution that proxies requests for dynamic content to the infrastructure hosting your application’s endpoints. In this blog post, I show you how to deploy CloudFront with AWS WAF and Route 53 to help protect dynamic web applications (with dynamic content such as a response to user input) against DDoS attacks. The steps shown in this post are key to implementing the overall approach described in AWS Best Practices for DDoS Resiliency and enable the built-in, managed DDoS protection service, AWS Shield.
March 21: New AWS Encryption SDK for Python Simplifies Multiple Master Key Encryption. The AWS Cryptography team is happy to announce a Python implementation of the AWS Encryption SDK. This new SDK helps manage data keys for you, and it simplifies the process of encrypting data under multiple master keys. As a result, this new SDK allows you to focus on the code that drives your business forward. It also provides a framework you can easily extend to ensure that you have a cryptographic library that is configured to match and enforce your standards. The SDK also includes ready-to-use examples. If you are a Java developer, you can refer to this blog post to see specific Java examples for the SDK. In this blog post, I show you how you can use the AWS Encryption SDK to simplify the process of encrypting data and how to protect your encryption keys in ways that help improve application availability by not tying you to a single region or key management solution.
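For a quick taste of that SDK, here is a minimal sketch using the 2017-era Python API (not code from the post itself); the KMS key ARN is a placeholder:

import aws_encryption_sdk

# Placeholder CMK ARN; any KMS key you can call GenerateDataKey on will work.
key_provider = aws_encryption_sdk.KMSMasterKeyProvider(key_ids=[
    'arn:aws:kms:us-east-1:111122223333:key/example-key-id'])

# Encrypt and decrypt a small payload; data key handling happens inside the SDK.
ciphertext, _header = aws_encryption_sdk.encrypt(
    source=b'hello multiple master keys', key_provider=key_provider)
plaintext, _header = aws_encryption_sdk.decrypt(
    source=ciphertext, key_provider=key_provider)
assert plaintext == b'hello multiple master keys'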
March 21: Updated CJIS Workbook Now Available by Request. The need for guidance when implementing Criminal Justice Information Services (CJIS)–compliant solutions has become of paramount importance as more law enforcement customers and technology partners move to store and process criminal justice data in the cloud. AWS services allow these customers to easily and securely architect a CJIS-compliant solution when handling criminal justice data, creating a durable, cost-effective, and secure IT infrastructure that better supports local, state, and federal law enforcement in carrying out their public safety missions. AWS has created several documents (collectively referred to as the CJIS Workbook) to assist you in aligning with the FBI’s CJIS Security Policy. You can use the workbook as a framework for developing CJIS-compliant architecture in the AWS Cloud. The workbook helps you define and test the controls you operate, and document the dependence on the controls that AWS operates (compute, storage, database, networking, regions, Availability Zones, and edge locations).
March 9: New Cloud Directory API Makes It Easier to Query Data Along Multiple Dimensions. Today, we made available a new Cloud Directory API, ListObjectParentPaths, that enables you to retrieve all available parent paths for any directory object across multiple hierarchies. Use this API when you want to fetch all parent objects for a specific child object. The order of the paths and objects returned is consistent across iterative calls to the API, unless objects are moved or deleted. In case an object has multiple parents, the API allows you to control the number of paths returned by using a paginated call pattern. In this blog post, I use an example directory to demonstrate how this new API enables you to retrieve data across multiple dimensions to implement powerful applications quickly.
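To make the call shape concrete, here is a minimal boto3 sketch (our own illustration; the directory ARN and object selector are placeholders):

import boto3

clouddirectory = boto3.client('clouddirectory')

# Placeholder ARN and selector; paginate with NextToken for larger result sets.
response = clouddirectory.list_object_parent_paths(
    DirectoryArn='arn:aws:clouddirectory:us-east-1:111122223333:directory/d-example',
    ObjectReference={'Selector': '/org/employees/jsmith'},
    MaxResults=10,
)
for path in response['PathToObjectIdentifiersList']:
    print(path['Path'], path['ObjectIdentifiers'])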
March 8: How to Access the AWS Management Console Using AWS Microsoft AD and Your On-Premises Credentials. AWS Directory Service for Microsoft Active Directory, also known as AWS Microsoft AD, is a managed Microsoft Active Directory (AD) hosted in the AWS Cloud. Now, AWS Microsoft AD makes it easy for you to give your users permission to manage AWS resources by using on-premises AD administrative tools. With AWS Microsoft AD, you can grant your on-premises users permissions to resources such as the AWS Management Console instead of adding AWS Identity and Access Management (IAM) user accounts or configuring AD Federation Services (AD FS) with Security Assertion Markup Language (SAML). In this blog post, I show how to use AWS Microsoft AD to enable your on-premises AD users to sign in to the AWS Management Console with their on-premises AD user credentials to access and manage AWS resources through IAM roles.
March 7: How to Protect Your Web Application Against DDoS Attacks by Using Amazon Route 53 and an External Content Delivery Network. Distributed Denial of Service (DDoS) attacks are attempts by a malicious actor to flood a network, system, or application with more traffic, connections, or requests than it is able to handle. To protect your web application against DDoS attacks, you can use AWS Shield, a DDoS protection service that AWS provides automatically to all AWS customers at no additional charge. You can use AWS Shield in conjunction with DDoS-resilient web services such as Amazon CloudFront and Amazon Route 53 to improve your ability to defend against DDoS attacks. Learn more about architecting for DDoS resiliency by reading the AWS Best Practices for DDoS Resiliency whitepaper. You also have the option of using Route 53 with an externally hosted content delivery network (CDN). In this blog post, I show how you can help protect the zone apex (also known as the root domain) of your web application by using Route 53 to perform a secure redirect to prevent discovery of your application origin.
February 23: s2n Is Now Handling 100 Percent of SSL Traffic for Amazon S3. Today, we’ve achieved another important milestone for securing customer data: we have replaced OpenSSL with s2n for all internal and external SSL traffic in Amazon Simple Storage Service (Amazon S3) commercial regions. This was implemented with minimal impact to customers, and multiple means of error checking were used to ensure a smooth transition, including client integration tests, catching potential interoperability conflicts, and identifying memory leaks through fuzz testing.
February 22: Easily Replace or Attach an IAM Role to an Existing EC2 Instance by Using the EC2 Console. AWS Identity and Access Management (IAM) roles enable your applications running on Amazon EC2 to use temporary security credentials. IAM roles for EC2 make it easier for your applications to make API requests securely from an instance because they do not require you to manage AWS security credentials that the applications use. Recently, we enabled you to use temporary security credentials for your applications by attaching an IAM role to an existing EC2 instance by using the AWS CLI and SDK. To learn more, see New! Attach an AWS IAM Role to an Existing Amazon EC2 Instance by Using the AWS CLI. Starting today, you can attach an IAM role to an existing EC2 instance from the EC2 console. You can also use the EC2 console to replace an IAM role attached to an existing instance. In this blog post, I will show how to attach an IAM role to an existing EC2 instance from the EC2 console.
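The post covers the console flow; for reference, the attach operation itself is a single API call. A hedged boto3 sketch with placeholder IDs:

import boto3

ec2 = boto3.client('ec2')

# Placeholder instance-profile name and instance ID.
ec2.associate_iam_instance_profile(
    IamInstanceProfile={'Name': 'MyWebServerProfile'},
    InstanceId='i-0123456789abcdef0',
)
# To swap roles later, look up the association ID and call
# replace_iam_instance_profile_association instead.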
February 22: How to Audit Your AWS Resources for Security Compliance by Using Custom AWS Config Rules. AWS Config Rules enables you to implement security policies as code for your organization and evaluate configuration changes to AWS resources against these policies. You can use Config rules to audit your use of AWS resources for compliance with external compliance frameworks such as CIS AWS Foundations Benchmark and with your internal security policies related to the US Health Insurance Portability and Accountability Act (HIPAA), the Federal Risk and Authorization Management Program (FedRAMP), and other regimes. AWS provides some predefined, managed Config rules. You also can create custom Config rules based on criteria you define within an AWS Lambda function. In this post, I show how to create a custom rule that audits AWS resources for security compliance by enabling VPC Flow Logs for an Amazon Virtual Private Cloud (VPC). The custom rule meets requirement 4.3 of the CIS AWS Foundations Benchmark: “Ensure VPC flow logging is enabled in all VPCs.”
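A custom Config rule is essentially a Lambda function that reports an evaluation back to AWS Config. The following sketch is our own illustration of the pattern (not the post’s exact code); it marks a VPC noncompliant when it has no flow logs:

import json
import boto3

config = boto3.client('config')
ec2 = boto3.client('ec2')

def lambda_handler(event, context):
    # AWS Config passes the evaluated resource inside a JSON-encoded invoking event.
    invoking_event = json.loads(event['invokingEvent'])
    item = invoking_event['configurationItem']
    flow_logs = ec2.describe_flow_logs(
        Filter=[{'Name': 'resource-id', 'Values': [item['resourceId']]}])
    compliance = 'COMPLIANT' if flow_logs['FlowLogs'] else 'NON_COMPLIANT'
    config.put_evaluations(
        Evaluations=[{
            'ComplianceResourceType': 'AWS::EC2::VPC',
            'ComplianceResourceId': item['resourceId'],
            'ComplianceType': compliance,
            'OrderingTimestamp': item['configurationItemCaptureTime'],
        }],
        ResultToken=event['resultToken'],
    )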
February 13: How to Enable Multi-Factor Authentication for AWS Services by Using AWS Microsoft AD and On-Premises Credentials. You can now enable multi-factor authentication (MFA) for users of AWS services such as Amazon WorkSpaces and Amazon QuickSight who sign in with their on-premises credentials, by using your AWS Directory Service for Microsoft Active Directory (Enterprise Edition) directory, also known as AWS Microsoft AD. MFA adds an extra layer of protection to a user name and password (the first “factor”) by requiring users to enter an authentication code (the second factor), which has been provided by your virtual or hardware MFA solution. These factors together provide additional security by preventing access to AWS services unless users supply a valid MFA code.
February 13: How to Create an Organizational Chart with Separate Hierarchies by Using Amazon Cloud Directory. Amazon Cloud Directory enables you to create directories for a variety of use cases, such as organizational charts, course catalogs, and device registries. Cloud Directory offers you the flexibility to create directories with hierarchies that span multiple dimensions. For example, you can create an organizational chart that you can navigate through separate hierarchies for reporting structure, location, and cost center. In this blog post, I show how to use Cloud Directory APIs to create an organizational chart with two separate hierarchies in a single directory. I also show how to navigate the hierarchies and retrieve data. I use the Java SDK for all the sample code in this post, but you can use other language SDKs or the AWS CLI.
February 10: How to Easily Log On to AWS Services by Using Your On-Premises Active Directory. AWS Directory Service for Microsoft Active Directory (Enterprise Edition), also known as Microsoft AD, now enables your users to log on with just their on-premises Active Directory (AD) user name—no domain name is required. This new domainless logon feature makes it easier to set up connections to your on-premises AD for use with applications such as Amazon WorkSpaces and Amazon QuickSight, and it keeps the user logon experience free from network naming. This new capability, based on interforest trusts, is now available when using Microsoft AD with Amazon WorkSpaces and Amazon QuickSight Enterprise Edition. In this blog post, I explain how Microsoft AD domainless logon works with AD interforest trusts, and I show an example of setting up Amazon WorkSpaces to use this capability.
February 9: New! Attach an AWS IAM Role to an Existing Amazon EC2 Instance by Using the AWS CLI. AWS Identity and Access Management (IAM) roles enable your applications running on Amazon EC2 to use temporary security credentials that AWS creates, distributes, and rotates automatically. Using temporary credentials is an IAM best practice because you do not need to maintain long-term keys on your instance. Using IAM roles for EC2 also eliminates the need to use long-term AWS access keys that you have to manage manually or programmatically. Starting today, you can enable your applications to use temporary security credentials provided by AWS by attaching an IAM role to an existing EC2 instance. You can also replace the IAM role attached to an existing EC2 instance. In this blog post, I show how you can attach an IAM role to an existing EC2 instance by using the AWS CLI.
January 30: How to Protect Data at Rest with Amazon EC2 Instance Store Encryption. Encrypting data at rest is vital for regulatory compliance to ensure that sensitive data saved on disks is not readable by any user or application without a valid key. Some compliance regulations such as PCI DSS and HIPAA require that data at rest be encrypted throughout the data lifecycle. To this end, AWS provides data-at-rest options and key management to support the encryption process. For example, you can encrypt Amazon EBS volumes and configure Amazon S3 buckets for server-side encryption (SSE) using AES-256 encryption. Additionally, Amazon RDS supports Transparent Data Encryption (TDE). Instance storage provides temporary block-level storage for Amazon EC2 instances. This storage is located on disks attached physically to a host computer. Instance storage is ideal for temporary storage of information that frequently changes, such as buffers, caches, and scratch data. By default, files stored on these disks are not encrypted. In this blog post, I show a method for encrypting data on Linux EC2 instance stores by using Linux built-in libraries. This method encrypts files transparently, which protects confidential data. As a result, applications that process the data are unaware of the disk-level encryption.
January 27: How to Detect and Automatically Remediate Unintended Permissions in Amazon S3 Object ACLs with CloudWatch Events. Amazon S3 access control lists (ACLs) enable you to specify permissions that grant access to S3 buckets and objects. When S3 receives a request for an object, it verifies whether the requester has the necessary access permissions in the associated ACL. For example, you could set up an ACL for an object so that only the users in your account can access it, or you could make an object public so that it can be accessed by anyone. If the number of objects and users in your AWS account is large, ensuring that you have attached correctly configured ACLs to your objects can be a challenge. For example, what if a user were to call the PutObjectAcl API call on an object that is supposed to be private and make it public? Or, what if a user were to call PutObject with the optional ACL parameter set to public-read, therefore uploading a confidential file as publicly readable? In this blog post, I show a solution that uses Amazon CloudWatch Events to detect PutObject and PutObjectAcl API calls in near-real time and helps ensure that the objects remain private by making automatic PutObjectAcl calls, when necessary.
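The remediation half of that pattern fits in a small Lambda function. A sketch under assumed CloudTrail event structure (the bucket and key come from the event’s request parameters):

import boto3

s3 = boto3.client('s3')

PUBLIC_GROUPS = ('/global/AllUsers', '/global/AuthenticatedUsers')

def lambda_handler(event, context):
    # Triggered by a CloudWatch Events rule matching PutObject and PutObjectAcl calls.
    params = event['detail']['requestParameters']
    bucket, key = params['bucketName'], params['key']
    acl = s3.get_object_acl(Bucket=bucket, Key=key)
    for grant in acl['Grants']:
        uri = grant.get('Grantee', {}).get('URI', '')
        if uri.endswith(PUBLIC_GROUPS):
            # The object was made public; force it back to private.
            s3.put_object_acl(Bucket=bucket, Key=key, ACL='private')
            break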
January 24: New SOC 2 Report Available: Confidentiality. As with everything at Amazon, the success of our security and compliance program is primarily measured by one thing: our customers’ success. Our customers drive our portfolio of compliance reports, attestations, and certifications that support their efforts in running a secure and compliant cloud environment. As a result of our engagement with key customers across the globe, we are happy to announce the publication of our new SOC 2 Confidentiality report. This report is available now through AWS Artifact in the AWS Management Console.
January 18: Compliance in the Cloud for New Financial Services Cybersecurity Regulations. Financial regulatory agencies are focused more than ever on ensuring responsible innovation. Consequently, if you want to achieve compliance with financial services regulations, you must be increasingly agile and employ dynamic security capabilities. AWS enables you to achieve this by providing you with the tools you need to scale your security and compliance capabilities on AWS. The following breakdown of the most recent cybersecurity regulations, NY DFS Rule 23 NYCRR 500, demonstrates how AWS continues to focus on your regulatory needs in the financial services sector.
January 9: New Amazon GameDev Blog Post: Protect Multiplayer Game Servers from DDoS Attacks by Using Amazon GameLift. In online gaming, distributed denial of service (DDoS) attacks target a game’s network layer, flooding servers with requests until performance degrades considerably. These attacks can limit a game’s availability to players and limit the player experience for those who can connect. Today’s new Amazon GameDev Blog post uses a typical game server architecture to highlight DDoS attack vulnerabilities and discusses how to stay protected by using built-in AWS Cloud security, AWS security best practices, and the security features of Amazon GameLift. Read the post to learn more.
January 6: The Top 10 Most Downloaded AWS Security and Compliance Documents in 2016. The following list includes the 10 most downloaded AWS security and compliance documents in 2016. Using this list, you can learn about what other people found most interesting about security and compliance last year.
January 6: FedRAMP Compliance Update: AWS GovCloud (US) Region Receives a JAB-Issued FedRAMP High Baseline P-ATO for Three New Services. Three new services in the AWS GovCloud (US) region have received a Provisional Authority to Operate (P-ATO) from the Joint Authorization Board (JAB) under the Federal Risk and Authorization Management Program (FedRAMP). JAB issued the authorization at the High baseline, which enables US government agencies and their service providers to use these services to process the government’s most sensitive unclassified data, including Personally Identifiable Information (PII), Protected Health Information (PHI), Controlled Unclassified Information (CUI), criminal justice information (CJI), and financial data.
January 4: The Top 20 Most Viewed AWS IAM Documentation Pages in 2016. The following 20 pages were the most viewed AWS Identity and Access Management (IAM) documentation pages in 2016. I have included a brief description with each link to give you a clearer idea of what each page covers. Use this list to see what other people have been viewing and perhaps to pique your own interest about a topic you’ve been meaning to research.
January 3: The Most Viewed AWS Security Blog Posts in 2016. The following 10 posts were the most viewed AWS Security Blog posts that we published during 2016. You can use this list as a guide to catch up on your blog reading or even read a post again that you found particularly useful.
January 3: How to Monitor AWS Account Configuration Changes and API Calls to Amazon EC2 Security Groups. You can use AWS security controls to detect and mitigate risks to your AWS resources. The purpose of each security control is defined by its control objective. For example, the control objective of an Amazon VPC security group is to permit only designated traffic to enter or leave a network interface. Let’s say you have an Internet-facing e-commerce website, and your security administrator has determined that only HTTP (TCP port 80) and HTTPS (TCP port 443) traffic should be allowed access to the public subnet. As a result, your administrator configures a security group to meet this control objective. What if, though, someone were to inadvertently change this security group’s rules and enable FTP or other protocols to access the public subnet from any location on the Internet? That expanded access could weaken the security posture of your assets. Consequently, your administrator might need to monitor the integrity of your company’s security controls so that the controls maintain their desired effectiveness. In this blog post, I explore two methods for detecting unintended changes to VPC security groups. The two methods address not only control objectives but also control failures.
If you have questions about or issues with implementing the solutions in any of these posts, please start a new thread on the forum identified near the end of each post.
With Zelda: Breath of the Wild out on the Nintendo Switch, I made a home automation system based off the Zelda series using the ocarina from The Legend of Zelda: Ocarina of Time.
Listen!
Released in 1998, The Legend of Zelda: Ocarina of Time is still an iconic entry in the retro gaming history books.
Very few games have stuck with me in the same way Ocarina has, and I think it’s fair to say that, with the continued success of the Zelda franchise, I’m not the only one who has a special place in their heart for Link, particularly in this musical outing.
Thanks to Cynosure Gaming‘s Ocarina of Time review for the image.
Allen, or Sufficiently Advanced, as his YouTube subscribers know him, has used a Raspberry Pi to detect and recognise key tunes from the game, with each tune being linked (geddit?) to a specific task. By playing Zelda’s Lullaby (E, G, D, E, G, D), for instance, Allen can lock or unlock the door to his house. Other tunes have different functions: Epona’s Song unlocks the car (for Ocarina noobs, Epona is Link’s horse sidekick throughout most of the game), and Minuet of Forest waters the plants.
So how does it work?
It’s a fairly simple setup based around note recognition. When certain notes are played in a specific sequence, the Raspberry Pi detects the tune via a microphone within the Amazon Echo-inspired body of the build, and triggers the action related to the specific task. The small speaker you can see in the video plays a confirmation tune, again taken from the video game, to show that the task has been completed.
As for the tasks themselves, Allen has built a small controller for each action, whether it be a piece of wood that presses down on his car key, a servomotor that adjusts the ambient temperature, or a water pump to hydrate his plants. Each controller has its own small ESP8266 wireless connectivity module that links back to the wireless-enabled Raspberry Pi, cutting down on the need for a ton of wires about the home.
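For the curious, the note-matching idea can be sketched in a few lines of Python: take the dominant frequency of each audio chunk with an FFT, map it to the nearest note, and compare the recent history against known songs. This is our own illustrative guess at the approach, not Allen’s actual code; the pitch table and song-to-action mapping are made up.

import numpy as np

SAMPLE_RATE = 44100
NOTES = {'D': 587.33, 'E': 659.25, 'G': 783.99}   # assumed ocarina pitches in Hz
SONGS = {('E', 'G', 'D', 'E', 'G', 'D'): 'toggle_door_lock'}   # Zelda's Lullaby

def dominant_note(samples):
    # Window the chunk, take the FFT, and find the loudest frequency bin.
    spectrum = np.abs(np.fft.rfft(samples * np.hanning(len(samples))))
    freq = np.fft.rfftfreq(len(samples), d=1.0 / SAMPLE_RATE)[np.argmax(spectrum)]
    return min(NOTES, key=lambda n: abs(NOTES[n] - freq))

played = []

def on_audio_chunk(samples):
    played.append(dominant_note(samples))
    for song, action in SONGS.items():
        if tuple(played[-len(song):]) == song:
            print('trigger:', action)   # here: signal the ESP8266 controller
            played.clear()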
And yes, before anybody says it, we’re sure that Allen is aware that using tone recognition is not the safest means of locking and unlocking your home. This is just for fun.
Do-it-yourself home automation
While we don’t necessarily expect everyone to brush up on their ocarina skills and build their own Zelda-inspired home automation system, the idea of using something other than voice or text commands to control home appliances is a fun one.
You could use facial recognition at the door to start the kettle boiling, or the detection of certain gasses to – ahem! – spray an air freshener.
We love to see what you all get up to with the Raspberry Pi. Have you built your own home automation system controlled by something other than your voice? Share it in the comments below.
AWS Directory Service for Microsoft Active Directory, also known as AWS Microsoft AD, is a managed Microsoft Active Directory (AD) hosted in the AWS Cloud. Now, AWS Microsoft AD makes it easy for you to give your users permission to manage AWS resources by using on-premises AD administrative tools. With AWS Microsoft AD, you can grant your on-premises users permissions to resources such as the AWS Management Console instead of adding AWS Identity and Access Management (IAM) user accounts or configuring AD Federation Services (AD FS) with Security Assertion Markup Language (SAML).
In this blog post, I show how to use AWS Microsoft AD to enable your on-premises AD users to sign in to the AWS Management Console with their on-premises AD user credentials to access and manage AWS resources through IAM roles.
Background
AWS customers use on-premises AD to administer user accounts, manage group memberships, and control access to on-premises resources. If you are like many AWS Microsoft AD customers, you also might want to enable your users to sign in to the AWS Management Console using on-premises AD credentials to manage AWS resources such as Amazon EC2, Amazon RDS, and Amazon S3.
Enabling such sign-in permissions has four key benefits:
Your on-premises AD group administrators can now manage access to AWS resources with standard AD administration tools instead of IAM.
Your users need to remember only one identity to sign in to AD and the AWS Management Console.
Because users sign in with their on-premises AD credentials, access to the AWS Management Console benefits from your AD-enforced password policies.
When you remove a user from AD, AWS Microsoft AD and IAM automatically revoke their access to AWS resources.
IAM roles provide a convenient way to define permissions to manage AWS resources. By using an AD trust between AWS Microsoft AD and your on-premises AD, you can assign your on-premises AD users and groups to IAM roles. This gives the assigned users and groups the IAM roles’ permissions to manage AWS resources. By assigning on-premises AD groups to IAM roles, you can now manage AWS access through standard AD administrative tools such as AD Users and Computers (ADUC).
After you assign your on-premises users or groups to IAM roles, your users can sign in to the AWS Management Console with their on-premises AD credentials. From there, they can select from a list of their assigned IAM roles. After they select a role, they can perform the management functions that you assigned to the IAM role.
In the rest of this post, I show you how to accomplish this in four steps:
Create an access URL.
Enable AWS Management Console access.
Assign on-premises users and groups to IAM roles.
Connect to the AWS Management Console.
Prerequisites
The instructions in this blog post require you to have the following components running:
An AWS Microsoft AD directory in your Amazon VPC.
A forest trust relationship between your AWS Microsoft AD directory and your on-premises AD.
Note: You can assign IAM roles to user identities stored in AWS Microsoft AD. For this post, I focus on assigning IAM roles to user identities stored in your on-premises AD. This requires a forest trust relationship between your on-premises Active Directory and your AWS Microsoft AD directory.
Solution overview
For the purposes of this post, I am the administrator who manages both AD and IAM roles in my company. My company wants to enable all employees to use on-premises credentials to sign in to the AWS Management Console to access and manage their AWS resources. My company uses EC2, RDS, and S3. To manage administrative permissions to these resources, I created a role for each service that gives full access to the service. I named these roles EC2FullAccess, RDSFullAccess, and S3FullAccess.
My company has two teams with different responsibilities, and we manage users in AD security groups. Mary is a member of the DevOps security group and is responsible for creating and managing our RDS databases, running data collection applications on EC2, and archiving information in S3. John and Richard are members of the BIMgrs security group and use EC2 to run analytics programs against the database. Though John and Richard need access to the database and archived information, they do not need to operate those systems. They do need permission to administer their own EC2 instances.
To grant appropriate access to the AWS resources, I need to assign the BIMgrs security group in AD to the EC2FullAccess role in IAM, and I need to assign the DevOps group to all three roles (EC2FullAccess, RDSFullAccess, and S3FullAccess). Also, I want to make sure all our employees have adequate time to complete administrative actions after signing in to the AWS Management Console, so I increase the console session timeout from 60 minutes to 240 minutes (4 hours).
The following diagram illustrates the relationships between my company’s AD users and groups and my company’s AWS roles and services. The left side of the diagram represents my on-premises AD that contains users and groups. The right side represents the AWS Cloud that contains the AWS Management Console, AWS resources, IAM roles, and our AWS Microsoft AD directory connected to our on-premises AD via a forest trust relationship.
Let’s get started with the steps for this scenario. For this post, I have already created an AWS Microsoft AD directory and established a two-way forest trust from AWS Microsoft AD to my on-premises AD. To manage access to AWS resources, I have also created the following IAM roles:
EC2FullAccess: Provides full access to EC2 and has the AmazonEC2FullAccess AWS managed policy attached.
RDSFullAccess: Provides full access to RDS via the AWS Management Console and has the AmazonRDSFullAccess managed policy attached.
S3FullAccess: Provides full access to S3 via the AWS Management Console and has the AmazonS3FullAccess managed policy attached.
To learn more about how to create IAM roles and attach managed policies, see Attaching Managed Policies.
Note: You must include a Directory Service trust policy on all roles that require access by users who sign in to the AWS Management Console using Microsoft AD. To learn more, see Editing the Trust Relationship for an Existing Role.
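If you script your role setup, the trust policy is the piece that is easy to miss. Here is a minimal boto3 sketch; the role name and managed policy are from this example, and the service principal is the one Directory Service requires:

import json
import boto3

iam = boto3.client('iam')

# Trust policy that lets AWS Directory Service assume the role for console users.
trust_policy = {
    'Version': '2012-10-17',
    'Statement': [{
        'Effect': 'Allow',
        'Principal': {'Service': 'ds.amazonaws.com'},
        'Action': 'sts:AssumeRole',
    }],
}
iam.create_role(RoleName='EC2FullAccess',
                AssumeRolePolicyDocument=json.dumps(trust_policy))
iam.attach_role_policy(RoleName='EC2FullAccess',
                       PolicyArn='arn:aws:iam::aws:policy/AmazonEC2FullAccess')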
Step 1 – Create an access URL
The first step to enabling access to the AWS Management Console is to create a unique Access URL for your AWS Microsoft AD directory. An Access URL is a globally unique URL. AWS applications, such as the AWS Management Console, use the URL to connect to the AWS sign-in page that is linked to your AWS Microsoft AD directory. The Access URL does not provide any other access to your directory. To learn more about Access URLs, see Creating an Access URL.
On the Directory Details page, choose the Apps & Services tab, type a unique access alias in the Access URL box, and then choose Create Access URL to create an Access URL for your directory.
Your directory Access URL should be in the following format: <access-alias>.awsapps.com. In this example, I am using https://example-corp.awsapps.com.
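You can also create the alias programmatically. A minimal boto3 sketch with a placeholder directory ID:

import boto3

ds = boto3.client('ds')

# The alias must be globally unique; it becomes <alias>.awsapps.com.
ds.create_alias(DirectoryId='d-1234567890', Alias='example-corp')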
Step 2 – Enable AWS Management Console access
To allow users to sign in to AWS Management Console with their on-premises credentials, you must enable AWS Management Console access for your AWS Microsoft AD directory:
From the Directory Service console, choose your AWS Microsoft AD Directory ID. Choose the AWS Management Console link in the AWS apps & services section.
In the Enable AWS Management Console dialog box, choose Enable Access to enable console access for your directory.
This enables AWS Management Console access for your AWS Microsoft AD directory and provides you a URL that you can use to connect to the console. The URL is generated by appending “/console” to the end of the access URL that you created in Step 1: <access-alias>.awsapps.com/console. In this example, the AWS Management Console URL is https://example-corp.awsapps.com/console.
Step 3 – Assign on-premises users and groups to IAM roles
Before your users can use your Access URL to sign in to the AWS Management Console, you need to assign on-premises users or groups to IAM roles. This critical step enables you to control which AWS resources your on-premises users and groups can access from the AWS Management Console.
In my on-premises Active Directory, Mary is already a member of the DevOps group, and John and Richard are members of the BIMgrs group. I already set up the trust from AWS Microsoft AD to my on-premises AD, and I already created the EC2FullAccess, RDSFullAccess, and S3FullAccess roles that I will use.
I am now ready to assign on-premises groups to IAM roles. I do this by assigning the DevOps group to the EC2FullAccess, RDSFullAccess, and S3FullAccess IAM roles, and the BIMgrs group to the EC2FullAccess IAM role. Follow these steps to assign on-premises groups to IAM roles:
Open the Directory Service details page of your AWS Microsoft AD directory and choose the AWS Management Console link on the Apps & services tab. Choose Continue to navigate to the Add Users and Groups to Roles page.
On the Add Users and Groups to Roles page, I see the three IAM roles that I have already configured (shown in the following screenshot). If you do not have any IAM roles with a Directory Service trust policy enabled, you can create new roles or enable Directory Service for existing roles.
I will now assign the on-premises DevOps and BIMgrs groups to the EC2FullAccess role. To do so, I choose the EC2FullAccess IAM role link to navigate to the Role Detail page. Next, I choose the Add button to assign users or groups to the role, as shown in the following screenshot.
In the Add Users and Groups to Role pop-up window, I select the on-premises Active Directory forest that contains the users and groups to assign. In this example, that forest is amazondomains.com. Note: If you do not use a trust to an on-premises AD and you create users and groups in your AWS Microsoft AD directory, you can choose the default, “this forest,” to search for users in Microsoft AD.
To assign an Active Directory group, choose the Group filter above the Search for field. Type the name of the Active Directory group in the search box and choose the search button (the magnifying glass). You can see that I was able to search for the DevOps group from my on-premises Active Directory.
In this case, I added the on-premises groups, DevOps and BIMgrs, to the EC2FullAccess role. When finished, choose the Add button to assign users and groups to the IAM role. You have now successfully granted DevOps and BIMgrs on-premises AD groups full access to EC2. Users in these AD groups can now sign in to AWS Management Console using their on-premises credentials and manage EC2 instances.
From the Add Users and Groups to Roles page, I repeat the process to assign the remaining groups to the IAM roles. In the following screenshot, you can see that I have assigned the DevOps group to three roles and the BIMgrs group to only one role.
With my AD security groups assigned to my IAM roles, I can now add and delete on-premises users to the security groups to grant or revoke permissions to the IAM roles. Users in these security groups have access to all of their assigned roles.
You can optionally set the login session length for your AWS Microsoft AD directory. The default length is 1 hour, but you can increase it up to 12 hours. In my example, I set the console session time to 240 minutes (4 hours).
Step 4 – Connect to the AWS Management Console
I am now ready for my users to sign in to the AWS Management Console with their on-premises credentials. I emailed my users the console URL I created in Step 2: https://example-corp.awsapps.com/console. Now my users can go to the URL to sign in to the AWS Management Console.
When Mary, who is a member of the DevOps group, goes to the console URL, she sees a sign-in page to connect to the AWS Management Console. In the Username box, she can enter her sign-in name in three different ways:
Specify her on-premises NetBIOS login name (corp\mary).
Specify her fully qualified domain name (FQDN) login name (amazondomains.com\mary).
Specify only her login name (mary); this works when the login name is unique across the AWS Microsoft AD forest and all trusted forests.
Because the DevOps group is associated with three IAM roles, and because Mary is in the DevOps group, she can choose the role she wants from the list presented after she successfully logs in. The following screenshot shows this step.
AWS Microsoft AD makes it easier for you to connect to the AWS Management Console by using your on-premises credentials. It also enables you to reuse your on-premises AD security policies such as password expiration, password history, and account lockout policies while still controlling access to AWS resources.
AWS Directory Service for Microsoft Active Directory (Enterprise Edition), also known as Microsoft AD, now enables your users to log on with just their on-premises Active Directory (AD) user name—no domain name is required. This new domainless logon feature makes it easier to set up connections to your on-premises AD for use with applications such as Amazon WorkSpaces and Amazon QuickSight, and it keeps the user logon experience free from network naming. This new capability, based on interforest trusts, is now available when using Microsoft AD with Amazon WorkSpaces and Amazon QuickSight Enterprise Edition.
In this blog post, I explain how Microsoft AD domainless logon works with AD interforest trusts, and I show an example of setting up Amazon WorkSpaces to use this capability.
To follow along, you must have already implemented an on-premises AD infrastructure. You will also need to have an AWS account with an Amazon Virtual Private Cloud (Amazon VPC). I start with some basic concepts to explain domainless logon. If you have prior knowledge of AD domain names, NetBIOS names, logon names, and AD trusts, you can skip the following “Concepts” section and move ahead to the “Interforest Trust with Domainless Logon” section.
Concepts: AD domain names, NetBIOS names, logon names, and AD trusts
AD directories are distributed hierarchical databases that run on one or more domain controllers. AD directories comprise a forest that contains one or more domains. Each forest has a root domain and a global catalog that runs on at least one domain controller. Optionally, a forest may contain child domains as a way to organize and delegate administration of objects. The domains contain user accounts each with a logon name. Domains also contain objects such as groups, computers, and policies; however, these are outside the scope of this blog post. When child domains exist in a forest, root domains are frequently unused for user accounts. The global catalog contains a list of all user accounts for all domains within the forest, similar to a searchable phonebook listing of all domain accounts. The following diagram illustrates the basic structure and naming of a forest for the company example.com.
Domain names
AD domains are Domain Name Service (DNS) names, and domain names are used to locate user accounts and other objects in the directory. A forest has one root domain, and its name consists of a prefix name and a suffix name. Often administrators configure their forest suffix to be the registered DNS name for their organization (for example, example.com) and the prefix is a name associated with their forest root domain (for example, us). Child domain names consist of a prefix followed by the root domain name. For example, let’s say you have a root domain us.example.com, and you created a child domain for your sales organization with a prefix of sales. The FQDN is the domain prefix of the child domain combined with the root domain prefix and the organization suffix, all separated by periods (“.”). In this example, the FQDN for the sales domain is sales.us.example.com.
NetBIOS names
NetBIOS is a legacy application programming interface (API) that worked over network protocols. NetBIOS names were used to locate services in the network and, for compatibility with legacy applications, AD associates a NetBIOS name with each domain in the directory. Today, NetBIOS names continue to be used as simplified names to find user accounts and services that are managed within AD and must be unique within the forest and any trusted forests (see “Interforest trusts” section that follows). NetBIOS names must be 15 or fewer characters long.
For this post, I have chosen the following strategy to ensure that my NetBIOS names are unique across all domains and all forests. For my root domain, I concatenate the root domain prefix with the forest suffix, without the .com and without the periods. In this case, usexample is the NetBIOS name for my root domain us.example.com. For my child domains, I concatenate the child domain prefix with the root domain prefix without periods. This results in salesus as the NetBIOS name for the child domain sales.us.example.com. For my example, I can use the NetBIOS name salesus instead of the FQDN sales.us.example.com when searching for users in the sales domain.
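As a toy illustration (my own sketch, not anything AD enforces), the strategy reduces to concatenating the first two domain labels:

def netbios_name(fqdn):
    # Root domain: prefix + forest suffix (us.example.com -> usexample).
    # Child domain: child prefix + root prefix (sales.us.example.com -> salesus).
    labels = fqdn.lower().removesuffix('.com').split('.')
    name = ''.join(labels[:2])
    assert len(name) <= 15, 'NetBIOS names must be 15 or fewer characters'
    return name

assert netbios_name('us.example.com') == 'usexample'
assert netbios_name('sales.us.example.com') == 'salesus'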
Logon names
Logon names are used to log on to Active Directory and must be 20 or fewer characters long (for example, jsmith or dadams). Logon names must be unique within a domain, but they do not have to be unique between different domains in the same forest. For example, there can be only one dadams in the sales.us.example.com (salesus) domain, but there could also be a dadams in the hr.us.example.com (hrus) domain. When possible, it is a best practice for logon names to be unique across all forests and domains in your AD infrastructure. By doing so, you can typically use the AD logon name as a person’s email name (the local-part of an email address), and your forest suffix as the email domain (for example, [email protected]). This way, end users only have one name to remember for email and logging on to AD. Failure to use unique logon names results in some people having different logon and email names.
For example, let’s say there is a Daryl Adams in hrus with a logon name of dadams and a Dale Adams in salesus with a logon name of dadams. The company is using example.com as its email domain. Because email requires addresses to be unique, you can only have one [email protected] email address. Therefore, you would have to give one of these two people (let’s say Dale Adams) a different email address such as [email protected]. Now Dale has to remember to logon to the network as dadams (the AD logon name) but have an email name of daleadams. If unique user names were assigned instead, Dale could have a logon name of daleadams and an email name of daleadams.
Logging on to AD
To allow AD to find user accounts in the forest during logon, users must include their logon name and the FQDN or the NetBIOS name for the domain where their account is located. Frequently, the computers used by people are joined to the same domain as the user’s account. The Windows desktop logon screen chooses the computer’s domain as the default domain for logon, so users typically only need to type their logon name and password. However, if the computer is joined to a different domain than the user, the user’s FQDN or NetBIOS name is also required.
For example, suppose jsmith has an account in sales.us.example.com, and the domain has the NetBIOS name salesus. Suppose jsmith tries to log on using a shared computer that is in the computers.us.example.com domain with a NetBIOS name of uscomputers. The computer defaults the logon domain to uscomputers, but jsmith does not exist in the uscomputers domain. Therefore, jsmith must type her logon name and her FQDN or NetBIOS name in the user name field of the Windows logon screen. Windows supports multiple syntaxes to do this, including NetBIOS\username (salesus\jsmith) and FQDN\username (sales.us.example.com\jsmith).
Interforest trusts
Most organizations have a single AD forest in which to manage user accounts, computers, printers, services, and other objects. Within a single forest, AD uses a transitive trust between all of its domains. A transitive trust means that within a trust, domains trust users, computers, and services that exist in other domains in the same forest. For example, a printer in printers.us.example.com trusts sales.us.example.com\jsmith. As long as jsmith is given permissions to do so, jsmith can use the printer in printers.us.example.com.
An organization at times might need two or more forests. When multiple forests are used, it is often desirable to allow a user in one forest to access a resource, such as a web application, in a different forest. However, trusts do not work between forests unless the administrators of the two forests agree to set up a trust.
For example, suppose a company that has a root domain of us.example.com has another forest in the EU with a root domain of eu.example.com. The company wants to let users from both forests share the same printers to accommodate employees who travel between locations. By creating an interforest trust between the two forests, this can be accomplished. In the following diagram, I illustrate that us.example.com trusts users from eu.example.com, and the forest eu.example.com trusts users from us.example.com through a two-way forest trust.
In rare cases, an organization may require three or more forests. Unlike domain trusts within a single forest, interforest trusts are not transitive. That means, for example, that if the forest us.example.com trusts eu.example.com, and eu.example.com trusts jp.example.com, us.example.com does not automatically trust jp.example.com. For us.example.com to trust jp.example.com, an explicit, separate trust must be created between these two forests.
When setting up trusts, there is a notion of trust direction. The direction of the trust determines which forest is trusting and which forest is trusted. In a one-way trust, one forest is the trusting forest, and the other is the trusted forest. The direction of the trust is from the trusting forest to the trusted forest. A two-way trust is simply two one-way trusts going in opposite directions; in this case, both forests are both trusting and trusted.
Microsoft Windows and AD use an authentication technology called Kerberos. After a user logs on to AD, Kerberos gives the user’s Windows account a Kerberos ticket that can be used to access services. Within a forest, the ticket can be presented to services such as web applications to prove who the user is, without the user providing a logon name and password again. Without a trust, the Kerberos ticket from one forest will not be honored in a different forest. In a trust, the trusting forest agrees to trust users who have logged on to the trusted forest, by trusting the Kerberos ticket from the trusted forest. With a trust, the user account associated with the Kerberos ticket can access services in the trusting forest if the user account has been granted permissions to use the resource in the trusting forest.
Interforest Trust with Domainless Logon
For many users, remembering domain names or NetBIOS names has been a source of numerous technical support calls. With the new updates to Microsoft AD, AWS applications such as Amazon WorkSpaces can be updated to support domainless logon through interforest trusts between Microsoft AD and your on-premises AD. Domainless logon eliminates the need for people to enter a domain name or a NetBIOS name to log on if their logon name is unique across all forests and all domains.
As described in the “Concepts” section earlier in this post, AD authentication requires a logon name to be presented with an FQDN or NetBIOS name. If AD does not receive an FQDN or NetBIOS name, it cannot find the user account in the forest. Windows can partially hide domain details from users if the Windows computer is joined to the same domain in which the user’s account is located. For example, if jsmith in salesus uses a computer that is joined to the sales.us.example.com domain, jsmith does not have to remember her domain name or NetBIOS name. Instead, Windows uses the domain of the computer as the default domain to try when jsmith enters only her logon name. However, if jsmith is using a shared computer that is joined to the computers.us.example.com domain, jsmith must log on by specifying her domain of sales.us.example.com or her NetBIOS name salesus.
With domainless logon, Microsoft AD takes advantage of global catalogs, and because most user names are unique across an entire organization, the need for an FQDN or NetBIOS name for most users to log on is eliminated.
Let’s look at how domainless logon works.
AWS applications that use Directory Service use a similar AWS logon page and identical logon process. Unlike a Windows computer joined to a domain, the AWS logon page is associated with a Directory Service directory, but it is not associated with any particular domain. When Microsoft AD is used, the User name field of the logon page accepts an FQDN\logon name, NetBIOS\logon name, or just a logon name. For example, the logon screen will accept sales.us.example.com\jsmith, salesus\jsmith, or jsmith.
In the following example, the company example.com has a forest in the US, a forest in the EU, and one in AWS using Microsoft AD. To make NetBIOS names unique, I use my naming strategy described earlier in the section “NetBIOS names.” For the US root domain, the FQDN is us.example.com and the NetBIOS name is usexample. For the EU, the FQDN is eu.example.com and the NetBIOS name is euexample. For AWS, the FQDN is aws.example.com and the NetBIOS name is awsexample. Continuing with my naming strategy, my unique child domains have the NetBIOS names salesus, hrus, saleseu, and hreu. Each of the forests has a global catalog that lists all users from all domains within the forest. The following graphic illustrates the forest configuration.
As shown in the preceding diagram, the global catalog for the US forest contains a jsmith in sales and dadams in hr. For the EU, there is a dadams in sales and a tpella in hr, and the AWS forest has a bharvey. The users shown in green type (jsmith, tpella, and bharvey) have unique names across all forests in the trust and qualify for domainless logon. The two dadams shown in red do not qualify for domainless logon because the user name is not unique across all trusted forests.
As shown in the following diagram, when a user types in only a logon name (such as jsmith or dadams) without an FQDN or NetBIOS name, domainless logon simultaneously searches for a matching logon name in the global catalogs of the Microsoft AD forest (aws.example.com) and all trusted forests (us.example.com and eu.example.com). For jsmith, the domainless logon finds a single user account that matches the logon name in sales.us.example.com and adds the domain to the logon name before authenticating. If no accounts match the logon name, authentication fails before attempting to authenticate. If dadams in the EU attempts to use only his logon name, domainless logon finds two dadams users, one in hr.us.example.com and one in sales.eu.example.com. This ambiguity prevents domainless logon. To log on, dadams must provide his FQDN or NetBIOS name (in other words, sales.eu.example.com\dadams or saleseu\dadams).
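Conceptually, the logon page is doing a uniqueness check across the global catalogs of all trusted forests. This toy sketch (an illustration, not the actual implementation) models the example above:

# One searchable catalog of logon names per trusted forest, as in the diagram.
GLOBAL_CATALOGS = {
    'aws.example.com': {'bharvey'},
    'us.example.com': {'jsmith', 'dadams'},   # jsmith in sales, dadams in hr
    'eu.example.com': {'dadams', 'tpella'},   # dadams in sales, tpella in hr
}

def resolve_forest(logon_name):
    matches = [forest for forest, names in GLOBAL_CATALOGS.items()
               if logon_name in names]
    if len(matches) == 1:
        return matches[0]   # unique: add the domain and authenticate there
    return None             # zero or ambiguous: require FQDN\name or NetBIOS\name

assert resolve_forest('jsmith') == 'us.example.com'
assert resolve_forest('dadams') is None   # ambiguous across the US and EU forests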
Upon successful logon, the logon page caches in a cookie the logon name and domain that were used. In subsequent logons, the end user does not have to type anything except their password. Also, because the domain is cached, the global catalogs do not need to be searched on subsequent logons. This minimizes global catalog searching, maximizes logon performance, and eliminates the need for users to remember domains (in most cases).
To maximize security associated with domainless logon, all authentication failures result in an identical failure notification that tells the user to check their domain name, user name, and password before trying again. This prevents hackers from using error codes or failure messages to glean information about logon names and domains in your AD directory.
If you follow best practices so that all user names are unique across all domains and all forests, domainless logon eliminates the requirement for your users to remember their FQDN or NetBIOS name to log on. This simplifies the logon experience for end users and can reduce the technical support effort you currently spend helping end users log on.
Solution overview
In this example of domainless logon, I show how Amazon WorkSpaces can use your existing on-premises AD user accounts through Microsoft AD. This example requires:
An AWS account with an Amazon VPC.
An AWS Microsoft AD directory in your Amazon VPC.
An existing AD deployment in your on-premises network.
A secured network connection from your on-premises network to your Amazon VPC.
A two-way AD trust between your Microsoft AD and your on-premises AD.
I configure Amazon WorkSpaces to use a Microsoft AD directory that exists in the same Amazon VPC. The Microsoft AD directory is configured to have a two-way trust to the on-premises AD. Amazon WorkSpaces uses Microsoft AD and the two-way trust to find users in your on-premises AD and create Amazon WorkSpaces instances. After the instances are created, I send end users an invitation to use their Amazon WorkSpaces. The invitation includes a link for them to complete their configuration and a link to download an Amazon WorkSpaces client to their device. When the user logs in to their Amazon WorkSpaces account, the user specifies the login name and password for their on-premises AD user account. Through the two-way trust between Microsoft AD and the on-premises AD, the user is authenticated and gains access to their Amazon WorkSpaces desktop.
Getting started
Now that we have covered how the pieces fit together and you understand how FQDN, NetBIOS, and logon names are used, let’s walk through the steps to use Microsoft AD with domainless logon to your on-premises AD for Amazon WorkSpaces.
Step 1 – Set up your Microsoft AD in your Amazon VPC
If you already have a Microsoft AD directory running, skip to Step 2. If you do not have a Microsoft AD directory to use with Amazon WorkSpaces, you can create the directory in the Directory Service console and attach to it from the Amazon WorkSpaces console, or you can create the directory within the Amazon WorkSpaces console.
To create the directory from Amazon WorkSpaces (as shown in the following screenshot):
Under Security & Identity, choose Directory Service.
Choose Get Started Now.
Choose Create Microsoft AD.
In this example, I use example.com as my organization name. The Directory DNS is the FQDN for the root domain, and it is aws.example.com in this example. For my NetBIOS name, I follow the naming model I showed earlier and use awsexample. Note that the Organization Name shown in the following screenshot is required only when creating a directory from Amazon WorkSpaces; it is not required when you create a Microsoft AD directory from the AWS Directory Service workflow.
For more details about Microsoft AD creation, review the steps in AWS Directory Service for Microsoft Active Directory (Enterprise Edition). After you enter the required parameters, it may take up to 40 minutes for the directory to become active, so you might want to exit the console and come back later.
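If you prefer to script directory creation, here is a minimal boto3 sketch using the names from this example; the password, VPC ID, and subnet IDs are placeholders (the two subnets must be in different Availability Zones):

import boto3

ds = boto3.client('ds')

response = ds.create_microsoft_ad(
    Name='aws.example.com',       # Directory DNS (FQDN of the root domain)
    ShortName='awsexample',       # NetBIOS name, per the naming strategy above
    Password='Placeholder123!',   # password for the directory Admin account
    VpcSettings={
        'VpcId': 'vpc-0123456789abcdef0',
        'SubnetIds': ['subnet-aaaa1111', 'subnet-bbbb2222'],
    },
)
print(response['DirectoryId'])   # poll describe_directories until Stage is Active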
Step 2 – Create a trust relationship to your on-premises AD
Locate the Microsoft AD directory to use with Amazon WorkSpaces and choose its Directory ID link (as highlighted in the following screenshot).
Choose the Trust relationships tab for the directory and follow the steps in Create a Trust Relationship (Microsoft AD) to create the trust relationships between your Microsoft AD and your on-premises domains.
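The same trust can be created programmatically. A hedged boto3 sketch with placeholder IDs; the trust password must match the one configured on the on-premises side:

import boto3

ds = boto3.client('ds')

response = ds.create_trust(
    DirectoryId='d-1234567890',          # your AWS Microsoft AD directory
    RemoteDomainName='us.example.com',   # your on-premises forest root
    TrustPassword='Placeholder123!',
    TrustDirection='Two-Way',
    TrustType='Forest',
    ConditionalForwarderIpAddrs=['10.0.0.10'],   # on-premises DNS server(s)
)
print(response['TrustId'])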
Step 3 – Create Amazon WorkSpaces for your on-premises users
Locate and select the Microsoft AD directory that you set up in Steps 1 and 2.
If the Registered status for the directory says No, open the Actions menu and choose Register.
Wait until the Registered status changes to Yes. The status change should take only a few seconds.
Choose the WorkSpaces menu item in the left pane.
Choose Launch WorkSpaces.
Select the Microsoft AD directory that you set up in Steps 1 and 2 and choose Next Step.
In the Select Users from Directory section, type a partial or full logon name, email address, or user name for an on-premises user for whom you want to create an Amazon WorkSpace and choose Search. The returned list of users should be the users from your on-premises AD forest.
In the returned results, scroll through the list and select the users for whom to create an Amazon WorkSpace and choose Add Selected. You may repeat the search and select processes until up to 20 users appear in the Amazon WorkSpaces list at the bottom of the screen. When finished, choose Next Step.
Select a bundle to be used for the Amazon WorkSpaces you are creating and choose Next Step.
Choose the Running Mode, Encryption settings, and configure any Tags. Choose Next Step.
Review the configuration of the Amazon WorkSpaces and choose Launch WorkSpaces. It may take up to 20 minutes for the Amazon WorkSpaces to be available.
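Launching can also be scripted. A minimal boto3 sketch with placeholder directory and bundle IDs; the user name is resolved through the trust, exactly as in the console search above:

import boto3

workspaces = boto3.client('workspaces')

response = workspaces.create_workspaces(Workspaces=[{
    'DirectoryId': 'd-1234567890',
    'UserName': 'jsmith',            # an on-premises user found through the trust
    'BundleId': 'wsb-0123456789',    # the bundle selected above
}])
for failed in response['FailedRequests']:
    print(failed['ErrorCode'], failed['ErrorMessage'])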
Step 4 – Invite the users to log in to their Amazon WorkSpaces
From the AWS Management Console, choose WorkSpaces from the Desktop & App Streaming section.
Choose the WorkSpaces menu item in the left pane.
Select the Amazon WorkSpaces you created in Step 3. Then choose the Actions menu and choose Invite User. A login email is sent to the users.
Copy the text from the Invite screen, then paste the text into an email to the user.
Step 5 – Users log in to their Amazon WorkSpace
The users receive their Amazon WorkSpaces invitations in email and follow the instructions to launch the Amazon WorkSpaces login screen.
Each user enters their user name and password.
After a successful login, future Amazon WorkSpaces logins from the same computer will present what the user last typed on the login screen. The user only needs to provide their password to complete the login. If the user provided only a login name at the last successful login, the domain for the user account is silently added to the subsequent login attempt.
They all largely say the same things: avoid known vulnerabilities, don’t have insecure defaults, make your systems patchable, and so on.
My guess is that everyone knows that IoT regulation is coming, and is either trying to impose self-regulation to forestall government action or establish principles to influence government action. It’ll be interesting to see how the next few years unfold.
If there are any IoT security or privacy guideline documents that I’m missing, please tell me in the comments.
Our customers have traditionally used directories (typically Active Directory Lightweight Directory Services or LDAP-based) to manage hierarchically organized data. Device registries, course catalogs, network configurations, and user directories are often represented as hierarchies, sometimes with multiple types of relationships between objects in the same collection. For example, a user directory could have one hierarchy based on physical location (country, state, city, building, floor, and office), a second one based on projects and billing codes, and a third based on the management chain. However, traditional directory technologies do not support the use of multiple relationships in a single directory; you’d have to create and maintain additional directories if you needed to do this.
Scale is another important challenge. The fundamental operations on a hierarchy involve locating the parent or the child object of a given object. Given that hierarchies can be used to represent large, nested collections of information, these fundamental operations must be as efficient as possible, regardless of how many objects there are or how deeply they are nested. Traditional directories can be difficult to scale, and the pain only grows if you are using two or more in order to represent multiple hierarchies.
New Amazon Cloud Directory
Today we are launching Cloud Directory. This service is purpose-built for storing large amounts of strongly typed hierarchical data as described above. With the ability to scale to hundreds of millions of objects while remaining cost-effective, Cloud Directory is a great fit for all sorts of cloud and mobile applications.
Cloud Directory is a building block that already powers other AWS services including Amazon Cognito and AWS Organizations. Because it plays such a crucial role within AWS, it was designed with scalability, high availability, and security in mind (data is encrypted at rest and while in transit).
Amazon Cloud Directory is a managed service; you don’t need to think about installing or patching software, managing servers, or scaling any storage or compute infrastructure. You simply define the schemas, create a directory, and then populate your directory by making calls to the Cloud Directory API. This API is designed for speed and for scale, with efficient, batch-based read and write functions.
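To give a feel for the batch functions, here is a hypothetical boto3 sketch that creates two objects in a single batch_write round trip; the ARNs, the Person facet, and the attribute names are assumptions for illustration, not part of a predefined schema:

```python
import boto3

cd = boto3.client("clouddirectory", region_name="us-east-1")

directory_arn = "arn:aws:clouddirectory:us-east-1:111122223333:directory/EXAMPLE"
schema_arn = directory_arn + "/schema/users/1"  # applied schema ARN (placeholder)

def create_person(link_name, display_name):
    """Build one BatchCreateObject operation for batch_write."""
    return {
        "CreateObject": {
            "SchemaFacet": [{"SchemaArn": schema_arn, "FacetName": "Person"}],
            "ObjectAttributeList": [
                {
                    "Key": {"SchemaArn": schema_arn, "FacetName": "Person",
                            "Name": "display_name"},
                    "Value": {"StringValue": display_name},
                }
            ],
            "ParentReference": {"Selector": "/"},  # under the directory root
            "LinkName": link_name,
        }
    }

# Two object creations in one round trip.
cd.batch_write(
    DirectoryArn=directory_arn,
    Operations=[create_person("jane", "Jane Doe"),
                create_person("john", "John Doe")],
)
```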
The long-lasting nature of a directory, combined with the scale and the diversity of use cases that it must support over its lifetime, brings another challenge to light. Experience has shown that static schemas lack the flexibility to adapt to the changes that arise with scale and with new use cases. In order to address this challenge and to make the directory future-proof, Cloud Directory is built around a model that explicitly makes room for change. You simply extend your existing schemas by adding new facets. This is a safe operation that leaves existing data intact so that existing applications will continue to work as expected. Combining schemas and facets allows you to represent multiple hierarchies within the same directory. For example, your first hierarchy could mirror your org chart. Later, you could add an additional facet to track some additional properties for each employee, perhaps a second phone number or a social network handle. After that, you could create a geographically oriented hierarchy within the same data: countries, states, buildings, floors, offices, and employees.
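As a sketch of what that extension looks like in practice, an application could attach the new facet to an existing employee object and set the extra attributes. The ARNs, the SocialContact facet, the attribute names, and the object path are all hypothetical:

```python
import boto3

cd = boto3.client("clouddirectory", region_name="us-east-1")

directory_arn = "arn:aws:clouddirectory:us-east-1:111122223333:directory/EXAMPLE"
schema_arn = directory_arn + "/schema/employees/1"  # applied schema (placeholder)

# Attach a newly added facet to an existing employee object and set the
# extra attributes; facet and attribute names are hypothetical.
cd.add_facet_to_object(
    DirectoryArn=directory_arn,
    SchemaFacet={"SchemaArn": schema_arn, "FacetName": "SocialContact"},
    ObjectAttributeList=[
        {
            "Key": {"SchemaArn": schema_arn, "FacetName": "SocialContact",
                    "Name": "second_phone"},
            "Value": {"StringValue": "+1-555-0100"},
        },
        {
            "Key": {"SchemaArn": schema_arn, "FacetName": "SocialContact",
                    "Name": "social_handle"},
            "Value": {"StringValue": "@janedoe"},
        },
    ],
    ObjectReference={"Selector": "/employees/jane-doe"},  # path selector
)
```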
As I mentioned, other parts of AWS already use Amazon Cloud Directory. Cognito User Pools use Cloud Directory to offer application-specific user directories with support for user sign-up, sign-in, and multi-factor authentication. With Cognito User Pools, you can easily and securely add sign-up and sign-in functionality to your mobile and web apps with a fully managed service that scales to support hundreds of millions of users. Similarly, AWS Organizations uses Cloud Directory to support the creation of groups of related AWS accounts, and makes good use of multiple hierarchies to enforce a wide range of policies.
Before we dive in, let’s take a quick look at some important Amazon Cloud Directory concepts:
Directories are named, and must have at least one schema. Directories store objects, relationships between objects, schemas, and policies.
Facets model the data by defining required and allowable attributes. Each facet provides an independent scope for attribute names; this allows multiple applications that share a directory to safely and independently extend a given schema without fear of collision or confusion.
Schemas define the “shape” of data stored in a directory by making reference to one or more facets. Each directory can have one or more schemas. Schemas exist in one of three phases (Development, Published, or Applied). Development schemas can be modified; Published schemas are immutable. Amazon Cloud Directory includes a collection of predefined schemas for people, organizations, and devices. The combination of schemas and facets leaves the door open to significant additions to the initial data model and subject area over time, while ensuring that existing applications will still work as expected.
Attributes are the actual stored data. Each attribute is named and typed; data types include Boolean, binary (blob), date/time, number, and string. Attributes can be mandatory or optional, and immutable or editable. The definition of an attribute can specify a rule that is used to validate the length and/or content of an attribute before it is stored or updated. Binary and string values can be length-checked against minimum and maximum lengths. A rule can indicate that a string must have a value chosen from a list, or that a number must fall within a given range (see the sketch after this list).
Objects are stored in directories, have attributes, and are defined by a schema. Each object can have multiple children and multiple parents, as specified by the schema. You can use the multiple-parent feature to create multiple, independent hierarchies within a single directory (sometimes known as a forest of trees).
Policies can be specified at any level of the hierarchy, and are inherited by child objects. Cloud Directory does not interpret or assign any meaning to policies, leaving this up to the application. Policies can be used to specify and control access permissions, user rights, device characteristics, and so forth.
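Here is the attribute-rule sketch promised above: a hypothetical facet whose string attribute is length-checked and whose numeric attribute is range-checked. The development-schema ARN, the facet name, and the attribute names are all placeholders, and the rule parameter keys reflect my reading of the API:

```python
import boto3

cd = boto3.client("clouddirectory", region_name="us-east-1")

dev_schema_arn = "arn:aws:clouddirectory:us-east-1:111122223333:schema/development/offices"

cd.create_facet(
    SchemaArn=dev_schema_arn,
    Name="Office",
    ObjectType="NODE",
    Attributes=[
        {
            # String attribute constrained to 2-8 characters.
            "Name": "office_code",
            "AttributeDefinition": {
                "Type": "STRING",
                "Rules": {
                    "length": {"Type": "STRING_LENGTH",
                               "Parameters": {"min": "2", "max": "8"}},
                },
            },
            "RequiredBehavior": "REQUIRED_ALWAYS",
        },
        {
            # Numeric attribute constrained to the range 1-40.
            "Name": "floor",
            "AttributeDefinition": {
                "Type": "NUMBER",
                "Rules": {
                    "range": {"Type": "NUMBER_COMPARISON",
                              "Parameters": {"min": "1", "max": "40"}},
                },
            },
            "RequiredBehavior": "NOT_REQUIRED",
        },
    ],
)
```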
Creating a Directory
Let’s create a directory! I start by opening up the AWS Directory Service Console and clicking on the first Create directory button:
I enter a name for my directory (users), choose the person schema (which happens to have two facets; more about this in a moment), and click on Next:
The predefined (AWS) schema will be copied to my directory. I give it a name and a version, and click on Publish:
Then I review the configuration and click on Launch:
The directory is created, and I can then write code to add objects to it.
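Here is a minimal sketch of that code, again with boto3; the ARNs are placeholders, and the Person facet and name attribute are assumptions for illustration rather than the actual contents of the predefined person schema:

```python
import boto3

cd = boto3.client("clouddirectory", region_name="us-east-1")

# Placeholder ARNs; CreateDirectory returns the directory ARN, and the
# applied schema ARN is scoped under it.
directory_arn = "arn:aws:clouddirectory:us-east-1:111122223333:directory/EXAMPLE"
applied_schema_arn = directory_arn + "/schema/users/1"

# Create one object under the directory root with the (assumed) Person facet.
cd.create_object(
    DirectoryArn=directory_arn,
    SchemaFacets=[{"SchemaArn": applied_schema_arn, "FacetName": "Person"}],
    ObjectAttributeList=[
        {
            "Key": {"SchemaArn": applied_schema_arn,
                    "FacetName": "Person", "Name": "name"},
            "Value": {"StringValue": "Jane Doe"},
        }
    ],
    ParentReference={"Selector": "/"},  # attach under the directory root
    LinkName="jane-doe",
)
```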
Pricing and Availability
Cloud Directory is available now in the US East (Northern Virginia), US East (Ohio), US West (Oregon), EU (Ireland), Asia Pacific (Sydney), and Asia Pacific (Singapore) Regions, and you can start using it today.
Pricing is based on three factors: the amount of data that you store, the number of reads, and the number of writes. Rates vary by Region; see the Cloud Directory pricing page for the current numbers.
Looking ahead (and while priorities can change due to customer feedback), we are working on cross-region replication, AWS Lambda integration, and the ability to create new directories via AWS CloudFormation.