Today we are announcing additional automation features inside of AWS Systems Manager. If you haven’t used Systems Manager yet, it’s a service that provides a unified user interface so you can view operational data from multiple AWS services and allows you to automate operational tasks across your AWS resources.
With this new release, it just got even more powerful. We have added capabilities to AWS Systems Manager that enable you to build, run, and share automations with others on your team or inside your organization, making managing your infrastructure more repeatable and less error-prone.
Inside the AWS Systems Manager console, on the navigation menu, there is an item called Automation. If I click this menu item, I will see the Execute automation button.
When I click on this, I am asked which document I want to run. AWS provides a library of documents that I could choose from; however, today I am going to build my own, so I will click on the Create document button.
This takes me to a new screen that allows me to create a document (sometimes referred to as an automation playbook) that, amongst other things, executes Python or PowerShell scripts.
The console gives me two options for editing a document: A YAML editor or the “Builder” tool that provides a guided, step-by-step user interface with the ability to include documentation for each workflow step.
So, let’s take a look by building and running a simple automation. When I create a document using the Builder tool, the first thing required is a document name.
Next, I need to provide a description. As you can see below, I’m able to use Markdown to format the description. The description is an excellent opportunity to explain what your document does. This is valuable, since most users will want to share these documents with others on their team and build a library of documents to solve everyday problems.
Optionally, I am asked to provide parameters for my document. These parameters can be used in all of the scripts that you will create later. In my example, I have created three parameters: imageId, tagValue, and instanceType. When I come to execute this document, I will have the opportunity to provide values for these parameters that will override any defaults that I set.
When someone executes my document, the scripts that run will interact with AWS services. For most of its actions, a document runs with the permissions of the user executing it, with the option of providing an Assume role. However, for documents that use the Run a script action, the role is required whenever the script calls an AWS API.
You can set the Assume role globally in the Builder tool; however, I like to add a parameter called assumeRole to my document, which gives anyone executing it the ability to provide a different one.
You then wire this parameter up to the global Assume role by using the {{assumeRole}} syntax in the Assume role property textbox. (I have called my parameter assumeRole, but you could call it whatever you like; just make sure that the name you give the parameter is what you put inside the double curly braces, e.g. {{yourParamName}}.)
Once my document is set up, I need to create its first step. A document can contain one or more steps, and you can create sophisticated workflows with branching, for example based on a parameter or the failure of a step. In this example, though, I am going to create three steps that execute one after another. Again, each step needs a name and a description, and this description can also include Markdown. You also need to select an Action type; for this example I will choose Run a script.
With the Run a script action type, I can run a Python or PowerShell script without needing any infrastructure to run it on. It’s important to realize that this script will not be running on one of your EC2 instances; the scripts run in a managed compute environment. On the preferences page, you can configure an Amazon CloudWatch Logs log group of your choice to receive the script output.
In this demo, I write some Python that creates an EC2 instance. You will notice that this script is using the AWS SDK for Python. I create an instance based upon an image_id, tag_value, and instance_type that are passed in as parameters to the script.
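The launch script itself isn’t reproduced here, but a minimal sketch of what such a handler could look like follows. The handler name (launch_instance) and the exact run_instances arguments are my assumptions; the parameter names match the ones wired up in the next paragraph, and the tag key is the LaunchedBySsmAutomation tag mentioned later in this post.

def launch_instance(events, context):
    # Imports happen inside the handler because SSM runs only this function.
    import boto3
    ec2 = boto3.client('ec2')

    image_id = events['imageId']
    tag_value = events['tagValue']
    instance_type = events['instanceType']

    res = ec2.run_instances(
        ImageId=image_id,
        InstanceType=instance_type,
        MinCount=1,
        MaxCount=1,
        TagSpecifications=[{
            'ResourceType': 'instance',
            'Tags': [{'Key': 'LaunchedBySsmAutomation', 'Value': tag_value}],
        }],
    )

    # Returning the instance ID makes it available to later steps as part of this step's payload.
    return {'InstanceId': res['Instances'][0]['InstanceId']}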
To pass parameters into the script, in the Additional Inputs section, I select InputPayload as the input type. I then use a particular YAML format in the Input Value text box to wire up the global parameters to the parameters that I am going to use in the script. You will notice that again I have used the double curly brace syntax to reference the global parameters, e.g. {{imageId}}.
In the Outputs section, I also wire up an output parameter that can be used by subsequent steps.
Next, I will add a second step to my document. This time I will poll the instance to see whether its status has switched to ok. The exciting thing about this code is that the InstanceId is passed into the script from the previous step; this is an example of how execution steps can be chained together to use the outputs of earlier steps.
def poll_instance(events, context):
    import boto3
    import time

    ec2 = boto3.client('ec2')
    instance_id = events['InstanceId']

    print('[INFO] Waiting for instance to enter Status: Ok', instance_id)

    instance_status = "null"
    while True:
        res = ec2.describe_instance_status(InstanceIds=[instance_id])
        if len(res['InstanceStatuses']) == 0:
            print("Instance Status Info is not available yet")
            time.sleep(5)
            continue
        instance_status = res['InstanceStatuses'][0]['InstanceStatus']['Status']
        print('[INFO] Polling get status of the instance', instance_status)
        if instance_status == 'ok':
            break
        time.sleep(10)

    return {'Status': instance_status, 'InstanceId': instance_id}
To pass the parameters into the second step, notice that I use the double curly brace syntax to reference the output of a previous step. The value in the Input value textbox, {{launchEc2Instance.payload}}, is the name of the step (launchEc2Instance) followed by the name of its output parameter (payload).
Lastly, I will add a final step. This step will run a PowerShell script and use the AWS Tools for PowerShell. I’ve added this step purely to show that you can use PowerShell as an alternative to Python.
You will note that, on the first line, I have to install the AWSPowerShell.NetCore module with the -Force switch before I can start interacting with AWS services.
All this step does is take the InstanceId output from the LaunchEc2Instance step and use it to return the InstanceType of the EC2 instance.
It’s important to note that I have to pass the parameters from the LaunchEc2Instance step to this step by configuring the Additional inputs in the same way I did earlier.
Now that our document is created, we can execute it. I go to the Actions & Change section of the menu and select Automation; from this screen, I click on the Execute automation button. I then get to choose the document I want to execute. Since this is a document I created, I can find it on the Owned by me tab.
If I click the LaunchInstance document that I created earlier, I get a document details screen that shows the description I added. This nicely formatted description allows me to generate documentation for my document and enables others to understand what it is trying to achieve.
When I click Next, I am asked to provide any Input parameters for my document. I add the imageId and ARN for the role that I want to use when executing this automation. It’s important to remember that this role will need to have permissions to call any of the services that are requested by the scripts. In my example, that means it needs to be able to create EC2 instances.
Once the document executes, I am taken to a screen that shows the steps of the document and gives me details about how long each step took and whether each step succeeded or failed. I can also drill down into each step and examine the logs. As you can see, all three steps of my document completed successfully, and if I go to the Amazon Elastic Compute Cloud (EC2) console, I will now have an EC2 instance that I created with the tag LaunchedBySsmAutomation.
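Everything shown in the console can also be driven programmatically. As a hedged sketch (the document name comes from my example above, and the AMI ID and role ARN are placeholders), starting the automation from the AWS SDK for Python looks something like this:

import boto3

ssm = boto3.client('ssm')

# Start the automation, overriding the document's default parameter values.
response = ssm.start_automation_execution(
    DocumentName='LaunchInstance',
    Parameters={
        'imageId': ['ami-0abcdef1234567890'],
        'tagValue': ['LaunchedBySsmAutomation'],
        'instanceType': ['t2.micro'],
        'assumeRole': ['arn:aws:iam::111122223333:role/MyAutomationRole'],
    },
)
print(response['AutomationExecutionId'])

You can then poll get_automation_execution with the returned ID to track each step, just as the console does.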
These new features can be found today in all regions inside the AWS Systems Manager console so you can start using them straight away.
Thanks to Greg Eppel, Sr. Solutions Architect, Microsoft Platform, for this great blog that describes how to create a custom CodeBuild build environment for the .NET Framework.

AWS CodeBuild is a fully managed build service that compiles source code, runs tests, and produces software packages that are ready to deploy. CodeBuild provides curated build environments for programming languages and runtimes such as Android, Go, Java, Node.js, PHP, Python, Ruby, and Docker. CodeBuild now supports builds for the Microsoft Windows Server platform, including a prepackaged build environment for .NET Core on Windows. If your application uses the .NET Framework, you will need to use a custom Docker image to create a custom build environment that includes the Microsoft proprietary Framework Class Libraries. For information about why this step is required, see our FAQs. In this post, I’ll show you how to create a custom build environment for .NET Framework applications and walk you through the steps to configure CodeBuild to use this environment.
Build environments are Docker images that include a complete file system with everything required to build and test your project. To use a custom build environment in a CodeBuild project, you build a container image for your platform that contains your build tools, push it to a Docker container registry such as Amazon Elastic Container Registry (Amazon ECR), and reference it in the project configuration. When it builds your application, CodeBuild retrieves the Docker image from the container registry specified in the project configuration and uses the environment to compile your source code, run your tests, and package your application.
Step 1: Launch EC2 Windows Server 2016 with Containers
In the Amazon EC2 console, in your region, launch an Amazon EC2 instance from a Microsoft Windows Server 2016 Base with Containers AMI.
Increase disk space on the boot volume to at least 50 GB to account for the larger size of containers required to install and run Visual Studio Build Tools.
Run the following command in that directory. This process can take a while, depending on the size of the EC2 instance you launched. In my tests, a t2.2xlarge takes less than 30 minutes to build the image and produces an approximately 15 GB image.
docker build -t buildtools2017:latest -m 2GB .
Run the following command to test the container and start a command shell with all the developer environment variables:
docker run -it buildtools2017
Create a repository in the Amazon ECR console. For the repository name, type buildtools2017. Choose Next step and then complete the remaining steps.
Execute the following command to generate authentication details for our registry to the local Docker engine. Make sure you have permissions to the Amazon ECR registry before you execute the command.
aws ecr get-login
In the same command prompt window, copy and paste the following commands:
In the CodeCommit console, create a repository named DotNetFrameworkSampleApp. On the Configure email notifications page, choose Skip.
Clone a .NET Framework Docker sample application from GitHub. The repository includes a sample ASP.NET Framework application that we’ll use to demonstrate our custom build environment. On the EC2 instance, open a command prompt and execute the following commands:
Navigate to the CodeCommit repository and confirm that the files you just pushed are there.
Step 4: Configure build spec
To build your .NET Framework application with CodeBuild, you use a build spec: a collection of build commands and related settings, in YAML format, that AWS CodeBuild uses to run a build. You can include a build spec as part of the source code, or you can define a build spec when you create a build project. In this example, I include the build spec as part of the source code.
In the root of your source directory, create a YAML file named buildspec.yml.
At this point, we have a Docker image with Visual Studio Build Tools installed and stored in the Amazon ECR registry. We also have a sample ASP.NET Framework application in a CodeCommit repository. Now we are going to set up CodeBuild to build the ASP.NET Framework application.
In the Amazon ECR console, choose the repository that was pushed earlier with the docker push command. On the Permissions tab, choose Add.
For Source Provider, choose AWS CodeCommit, and then choose the DotNetFrameworkSampleApp repository.
For Environment Image, choose Specify a Docker image.
For Environment type, choose Windows.
For Custom image type, choose Amazon ECR.
For Amazon ECR repository, choose the Docker image with the Visual Studio Build Tools installed, buildtools2017. Your configuration should look like the image below:
Choose Continue and then Save and Build to create your CodeBuild project and start your first build. You can monitor the status of the build in the console. You can also configure notifications that will notify subscribers whenever builds succeed, fail, go from one phase to another, or any combination of these events.
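If you prefer to create the project programmatically instead of through the console, a hedged boto3 sketch of the same configuration follows; the account ID, Region, role name, and repository locations are placeholders for your own values.

import boto3

codebuild = boto3.client('codebuild')

# Create a CodeBuild project that uses the custom Windows image pushed to Amazon ECR.
codebuild.create_project(
    name='DotNetFrameworkSampleApp',
    source={
        'type': 'CODECOMMIT',
        'location': 'https://git-codecommit.us-west-2.amazonaws.com/v1/repos/DotNetFrameworkSampleApp',
    },
    artifacts={'type': 'NO_ARTIFACTS'},
    environment={
        'type': 'WINDOWS_CONTAINER',
        'image': '111122223333.dkr.ecr.us-west-2.amazonaws.com/buildtools2017:latest',
        'computeType': 'BUILD_GENERAL1_MEDIUM',
    },
    serviceRole='arn:aws:iam::111122223333:role/CodeBuildServiceRole',
)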
Summary
CodeBuild supports a number of platforms and languages out of the box. By using custom build environments, it can be extended to other runtimes. In this post, I showed you how to build a .NET Framework environment on a Windows container and demonstrated how to use it to build .NET Framework applications in CodeBuild.
We’re excited to see how customers extend and use CodeBuild to enable continuous integration and continuous delivery for their Windows applications. Feel free to share what you’ve learned extending CodeBuild for your own projects. Just leave questions or suggestions in the comments.
I’m Charles Fort, a developer at Woot who specializes in deployments and developer experience. Woot is the original daily deals website. It was founded in 2004 and acquired by Amazon in 2010 – https://www.woot.com
We just moved our web front-end deployments from Troop, a deployment agent we developed ourselves, to AWS CodeDeploy and AWS CodePipeline. This migration involved launching a new customer-facing EC2 web server fleet with the CodeDeploy agent installed and creating and managing our CodeDeploy and CodePipeline resources with AWS CloudFormation. Immediately after we completed the migration, we observed a ~50% reduction in HTTP 500 errors during deployment.
In this blog post, I’m going to show you:
Why we chose AWS deployment tools.
An architectural diagram of Woot’s systems.
An overview of the migration project.
Our migration results.
The old and busted
We wanted to replace our in-house deployment system with something we could build automation on top of, and something that we didn’t have to own or maintain. We already own and maintain a build system, which is bad enough. We didn’t want to run additional infrastructure for our deployment pipeline.
In short, we wanted a cloud service. Because all of our infrastructure is in AWS, CodeDeploy was a natural fit to replace our deployment agent. CodePipeline acts as the automation orchestrator, our wizard in the cloud telling CodeDeploy what to deploy and when.
Architecture overview
Here’s a look at the architecture of a Woot web front end:
Woot architecture overview
Project overview
Our project involved migrating five web front ends, which together handle an average of 12 million requests per day, to CodeDeploy and CodePipeline while keeping the site live for our customers.
Here are the steps we took:
Wrote some new deployment scripts.
Launched a new fleet of EC2 web servers with CodeDeploy support.
Created a deployment pipeline for our CloudFormation-defined CodeDeploy and CodePipeline configuration.
Introduced our new fleet to live traffic. Hello, customers!
Deployment scripts
Our old deployment system didn’t stop or start our web server. Instead, it tried to swap out the build artifacts from under the server while the server was running. That’s definitely a bad idea.
We wrote some deployment scripts in PowerShell that are run by CodeDeploy to stop and start our IIS web servers. These scripts work in conjunction with the Elastic Load Balancing (ELB) support in CodeDeploy because we certainly didn’t want to stop the web server while it’s serving customer traffic.
New fleet
Because our fleet is running on Amazon EC2, we built an Amazon Machine Image (AMI) for our web fleet with the CodeDeploy agent already installed. From a fleet perspective, this was most of what we had to do. With the agent installed, CodeDeploy can use our deployment scripts to deploy our web projects.
AWS CloudFormation deployment pipeline
Because we needed a deployment pipeline and several pieces of CodeDeploy configuration (a CodeDeploy application and at least one CodeDeploy deployment group) for each web project we want to deploy, we decided to use AWS CloudFormation to version this configuration. Our build system (TeamCity) can read from our version control system and write to Amazon S3. We made a simple build in TeamCity to push an AWS CloudFormation template to S3, which triggers a pipeline that deploys to AWS CloudFormation. This creates the CodePipeline and CodeDeploy resources. Now we can do code reviews on our infrastructure changes. More eyes is more safety! We can also trace infrastructure changes over time, just like we can with code changes.
Live traffic time!
Our web fleets run behind Classic Load Balancers. By choosing CodeDeploy, we can use its sweet new ELB features. For example, through an elastic load balancer, CodeDeploy can prevent internet traffic from being routed to an instance while it is being deployed to. After the deployment to that instance is complete, it then makes the instance available for traffic.
We launched new hosts with the CodeDeploy agent and deployed to them without ELB support turned on. Then we slowly, manually introduced them into our fleet while monitoring stats. After we had the new machines in, we slowly removed the old ones from the load balancer until our fleet was fully supported by CodeDeploy. Doing the migration this way resulted in 0 downtime for our sites.
One fun detail: When we had 2/3 of the new fleet in our load balancer, we triggered a CodeDeploy deployment to the fleet, but this time with ELB support turned on. This caused CodeDeploy to place the rest of the machines into the load balancer (coexisting with the old fleet), and there were slightly fewer buttons to press.
AWS CloudFormation example
This is a simplified example of the AWS CloudFormation template we use to manage the AWS configuration for one of our web projects. It is deployed in a deployment pipeline, much like the web projects themselves.
Parameters:
  CodePipelineBucket:
    Type: String
  CodePipelineRole:
    Type: String
  CodeDeployRole:
    Type: String
  CodeDeployBucket:
    Type: String
Resources:
  ### Woot.Example deployment configuration ###
  ExampleDeploymentConfig:
    Type: 'AWS::CodeDeploy::DeploymentConfig'
    Properties:
      MinimumHealthyHosts:
        Type: FLEET_PERCENT
        Value: '66' # Let's keep 2/3 of the fleet healthy at any point
  #Woot.Example CodeDeploy application
  WootExampleApplication:
    Type: "AWS::CodeDeploy::Application"
    Properties:
      ApplicationName: "Woot.Example"
  #Woot.Example CodeDeploy deployment groups
  WootExampleDeploymentGroup:
    DependsOn: "WootExampleApplication"
    Type: "AWS::CodeDeploy::DeploymentGroup"
    Properties:
      ApplicationName: "Woot.Example"
      DeploymentConfigName: !Ref "ExampleDeploymentConfig" # use the deployment configuration we created
      DeploymentGroupName: "Woot.Example.Main"
      AutoRollbackConfiguration:
        Enabled: true
        Events:
          - DEPLOYMENT_FAILURE # this makes the deployment rollback when the deployment hits the failure threshold
          - DEPLOYMENT_STOP_ON_REQUEST # this makes the deployment rollback if you hit the stop button
      LoadBalancerInfo:
        ElbInfoList:
          - Name: "WootExampleInternal" # this is the ELB the hosts live in, they will be added and removed from here
      DeploymentStyle:
        DeploymentOption: "WITH_TRAFFIC_CONTROL" # this tells CodeDeploy to actually add/remove the hosts from the ELB
      Ec2TagFilters:
        -
          Key: "Name"
          Value: "exampleweb*" # deploy to all machines named like exampleweb001.yourdomain.com, etc
          Type: "KEY_AND_VALUE"
      ServiceRoleArn: !Sub "arn:aws:iam::${AWS::AccountId}:role/${CodeDeployRole}" # this is the IAM role CodeDeploy in your account should use
  #Woot.Example CodePipeline
  WootExampleDeploymentPipeline:
    DependsOn: "WootExampleDeploymentGroup"
    Type: "AWS::CodePipeline::Pipeline"
    Properties:
      RoleArn: !Sub "arn:aws:iam::${AWS::AccountId}:role/${CodePipelineRole}" # this is the IAM role CodePipeline in your account should use
      Name: "Woot.Example" # name of the pipeline
      ArtifactStore:
        Type: S3
        Location: !Ref "CodePipelineBucket" # the bucket CodePipeline uses in your account to shuffle artifacts around
      Stages:
        -
          Name: Source # one S3 source stage
          Actions:
            -
              Name: SourceAction
              ActionTypeId:
                Category: Source
                Owner: AWS
                Version: 1
                Provider: S3
              OutputArtifacts:
                -
                  Name: SourceOutput
              Configuration:
                S3Bucket: !Ref "CodeDeployBucket" # the S3 bucket your builds go into (needs to be versioned)
                S3ObjectKey: "Woot.Example/Woot.Example-Release.zip" # build artifact path for this application
              RunOrder: 1
        -
          Name: Deploy # one deploy stage that triggers the CodeDeploy deployment
          Actions:
            -
              Name: DeployAction
              ActionTypeId:
                Category: Deploy
                Owner: AWS
                Version: 1
                Provider: CodeDeploy
              InputArtifacts:
                -
                  Name: SourceOutput
              Configuration:
                ApplicationName: "Woot.Example"
                DeploymentGroupName: "Woot.Example.Main"
              RunOrder: 2
Appspec.yml example
This is the appspec.yml file we use for our main web front end (Woot.Web.Retail). The appspec.yml file tells CodeDeploy where to put files and when to run our deployment scripts.
Because our server-launching infrastructure doesn’t use CodeDeploy for initial placement of the build artifacts, CodeDeploy won’t overwrite the files. (The service has no knowledge of them.) This is both good and bad: good because CodeDeploy won’t overwrite files it didn’t write, and bad because it means we have to have a deployment script like clearWebsiteDeployment.ps1 to clear out the previous deployment’s files. We also use scripts to stop and start the IIS site around each deployment.
stopWebsite.ps1:
# restart script as 64bit powershell if it's 32 bit
if ($PSHOME -like "*SysWOW64*") {
  & (Join-Path ($PSHOME -replace "SysWOW64", "SysNative") powershell.exe) -File `
    (Join-Path $PSScriptRoot $MyInvocation.MyCommand) @args
  Exit $LastExitCode
}
Import-Module WebAdministration
Stop-Website -name $env:APPLICATION_NAME
#sleep for 10 seconds to give IIS a chance to stop
Start-Sleep -s 10
$website = Get-Website -name "*$env:APPLICATION_NAME*"
if ($website.state -ne 'stopped') {
  throw "The website cannot be stopped"
}
startWebsite.ps1:
# restart script as 64bit powershell if it's 32 bit
if ($PSHOME -like "*SysWOW64*") {
  & (Join-Path ($PSHOME -replace "SysWOW64", "SysNative") powershell.exe) -File `
    (Join-Path $PSScriptRoot $MyInvocation.MyCommand) @args
  Exit $LastExitCode
}
Import-Module WebAdministration
Start-Website -name $env:APPLICATION_NAME
#sleep for 10 seconds to give IIS a chance to start
Start-Sleep -s 10
$website = Get-Website -name "*$env:APPLICATION_NAME*"
if ($website.state -ne 'started') {
  throw "The website cannot be started"
}
You can see we used a CodeDeploy environment variable ($env:APPLICATION_NAME) in our scripts. The name of the CodeDeploy application is also the name of the IIS website. This way, we can use the same deployment scripts for multiple websites.
The new hotness
Now that we’re running CodeDeploy in production we are extremely pleased with the results. Our old deployment agent, Troop, did not give us much control over the way releases went out. Now we can check on a deployment at a per-instance level, and the opportunities for automation are impressive.
After our migration, we saw a 50% reduction in HTTP 500 errors served to customers during deployments. We looked at the one-hour time slices during a deployment and compared the average count before and after our migration to CodeDeploy. These numbers show our old deployment system was hella busted (really broken).
This graph shows summed deployment errors over time and compares CodeDeploy to our legacy in-house deployment agent (Troop).
CodeDeploy vs Troop
We plan to implement a full cross-account release process on AWS deployment tools. We will have a single AWS account with a pipeline that controls CodeDeploy in our various environments, triggering tests and promoting to the next environment as they pass. Building something like that with our own tooling would take a lot of work. Thanks to CodeDeploy, CodePipeline, and AWS CloudFormation for making our lives easier.
I’m still catching up with the last couple of AWS re:Invent launches!
Today I would like to tell you about inter-region VPC peering. You have been able to create peering connections between Virtual Private Clouds (VPCs) in the same AWS Region since early 2014 (read New VPC Peering for the Amazon Virtual Private Cloud to learn more). Once established, EC2 instances in the peered VPCs can communicate with each other across the peering connection using their private IP addresses, just as if they were on the same network.
At re:Invent we extended the peering model so that it works across AWS Regions. Like the existing model, it also works within the same AWS account or across a pair of accounts. All of the use cases that I listed in my earlier post still apply; you can centralize shared resources in an organization-wide VPC and then peer it with multiple, per-department VPCs. You can also share resources between members of a consortium, conglomerate, or joint venture.
Inter-region VPC peering also allows you to take advantage of the high degree of isolation that exists between AWS Regions while building highly functional applications that span Regions. For example, you can choose geographic locations for your compute and storage resources that will help you to comply with regulatory requirements and other constraints.
Peering Details This feature is currently enabled in the US East (Northern Virginia), US East (Ohio), US West (Oregon), and EU (Ireland) Regions and for IPv4 traffic. You can connect any two VPCs in these Regions, as long as they have distinct, non-overlapping CIDR blocks. This ensures that all of the private IP addresses are unique and allows all of the resources in the pair of VPCs to address each other without the need for any form of network address translation.
Data that passes between VPCs in distinct regions flows across the AWS global network in encrypted form. The data is encrypted in AEAD fashion using a modern algorithm and AWS-supplied keys that are managed and rotated automatically. The same key is used to encrypt traffic for all peering connections; this makes all traffic, regardless of customer, look the same. This anonymity provides additional protection in situations where your inter-VPC traffic is intermittent.
Setting up Inter-Region Peering Here’s how I set up peering between two of my VPCs. I’ll start with a VPC in US East (Northern Virginia) and request peering with a VPC in US East (Ohio). I start by noting the ID (vpc-acd8ccc5) of the VPC in Ohio:
Then I switch to the US East (Northern Virginia) Region, click on Create Peering Connection, and choose to peer with the VPC in Ohio. I enter its ID and click on Create Peering Connection to proceed:
This creates a peering request:
I switch to the other Region and accept the pending request:
Now I need to arrange to route IPv4 traffic between the two VPCs by creating route table entries in each one. I can edit the main route table or one associated with a particular VPC subnet. Here’s how I arrange to route traffic from Virginia to Ohio:
The private DNS names for EC2 instances (ip-10-90-211-18.ec2.internal and the like) will not resolve across a peering connection. If you need to refer to EC2 instances and other AWS resources in other VPCs, consider creating a Private Hosted Zone using Amazon Route 53:
Unlike VPC peering within a single region, you cannot reference security groups across inter-region VPC peering. Also, jumbo frames cannot be sent between regions.
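If you would rather set up peering programmatically, here is a hedged boto3 sketch of the same flow. The Ohio VPC ID is the one noted above; the other VPC ID, the route table IDs, and the CIDR blocks are placeholders for your own values.

import boto3

virginia = boto3.client('ec2', region_name='us-east-1')
ohio = boto3.client('ec2', region_name='us-east-2')

# Request peering from the Virginia VPC to the Ohio VPC.
peering = virginia.create_vpc_peering_connection(
    VpcId='vpc-11111111',          # requester VPC in Virginia (placeholder)
    PeerVpcId='vpc-acd8ccc5',      # accepter VPC in Ohio
    PeerRegion='us-east-2',
)
pcx_id = peering['VpcPeeringConnection']['VpcPeeringConnectionId']

# Accept the pending request in the Ohio Region (you may need to wait briefly for it to appear).
ohio.accept_vpc_peering_connection(VpcPeeringConnectionId=pcx_id)

# Route each VPC's traffic for the other VPC's CIDR block over the peering connection.
virginia.create_route(RouteTableId='rtb-11111111',
                      DestinationCidrBlock='10.90.0.0/16',
                      VpcPeeringConnectionId=pcx_id)
ohio.create_route(RouteTableId='rtb-22222222',
                  DestinationCidrBlock='10.80.0.0/16',
                  VpcPeeringConnectionId=pcx_id)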
We launched some important new EC2 instance types and features at AWS re:Invent. I’ve already told you about the M5, H1, T2 Unlimited and Bare Metal instances, and about Spot features such as Hibernation and the New Pricing Model. Randall told you about the Amazon Time Sync Service. Today I would like to tell you about two of the features that we launched: Spread placement groups and Launch Templates. Both features are available in the EC2 Console and from the EC2 APIs, and can be used in all of the AWS Regions in the “aws” partition.
Launch Templates You can use launch templates to store the instance, network, security, storage, and advanced parameters that you use to launch EC2 instances, and can also include any desired tags. Each template can include any desired subset of the full collection of parameters. You can, for example, define common configuration parameters such as tags or network configurations in a template, and allow the other parameters to be specified as part of the actual launch.
Templates give you the power to set up a consistent launch environment that spans instances launched in On-Demand and Spot form, as well as through EC2 Auto Scaling and as part of a Spot Fleet. You can use them to implement organization-wide standards and to enforce best practices, and you can give your IAM users the ability to launch instances via templates while withholding the ability to do so via the underlying APIs.
Templates are versioned and you can use any desired version when you launch an instance. You can create templates from scratch, base them on the previous version, or copy the parameters from a running instance.
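Launch templates can also be created and used through the API. Here is a hedged boto3 sketch; the template name is arbitrary, and the AMI ID, key pair, security group, and tag values are placeholders for your own values.

import boto3

ec2 = boto3.client('ec2')

# Store the common launch parameters once, as version 1 of the template.
ec2.create_launch_template(
    LaunchTemplateName='WebServerTemplate',
    LaunchTemplateData={
        'ImageId': 'ami-0abcdef1234567890',
        'InstanceType': 't2.micro',
        'KeyName': 'my-key-pair',
        'SecurityGroupIds': ['sg-11111111'],
        'TagSpecifications': [{
            'ResourceType': 'instance',
            'Tags': [{'Key': 'Environment', 'Value': 'Dev'}],
        }],
    },
)

# Launch an instance from the template, overriding just the instance type at launch time.
ec2.run_instances(
    LaunchTemplate={'LaunchTemplateName': 'WebServerTemplate', 'Version': '1'},
    InstanceType='t2.small',
    MinCount=1,
    MaxCount=1,
)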
Here’s how you create a launch template in the Console:
Here’s how to include network interfaces, storage volumes, tags, and security groups:
And here’s how to specify advanced and specialized parameters:
You don’t have to specify values for all of these parameters in your templates; enter the values that are common to multiple instances or launches and specify the rest at launch time.
When you click Create launch template, the template is created and can be used to launch On-Demand instances, create Auto Scaling Groups, and create Spot Fleets:
The Launch Instance button now gives you the option to launch from a template:
Simply choose the template and the version, and finalize all of the launch parameters:
You can also manage your templates and template versions from the Console:
Spread Placement Groups Spread placement groups indicate that you do not want the instances in the group to share the same underlying hardware. Applications that rely on a small number of critical instances can launch them in a spread placement group to reduce the odds that one hardware failure will impact more than one instance. Here are a few things to keep in mind when you use spread placement groups (a short API example follows the list):
Availability Zones – A single spread placement group can span multiple Availability Zones. You can have a maximum of seven running instances per Availability Zone per group.
Unique Hardware – Launch requests can fail if there is insufficient unique hardware available. The situation changes over time as overall usage changes and as we add additional hardware; you can retry failed requests at a later time.
Instance Types – You can launch a wide variety of M4, M5, C3, R3, R4, X1, X1e, D2, H1, I2, I3, HS1, F1, G2, G3, P2, and P3 instance types in spread placement groups.
Reserved Instances – Instances launched into a spread placement group can make use of reserved capacity. However, you cannot currently reserve capacity for a placement group and could receive an ICE (Insufficient Capacity Error) even if you have some RIs available.
Applicability – You cannot use spread placement groups in conjunction with Dedicated Instances or Dedicated Hosts.
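As promised above, here is a hedged boto3 sketch of creating a spread placement group and launching instances into it; the group name, AMI ID, and instance type are placeholders.

import boto3

ec2 = boto3.client('ec2')

# Create the spread placement group.
ec2.create_placement_group(GroupName='critical-spread', Strategy='spread')

# Launch three instances into it; each lands on distinct underlying hardware.
ec2.run_instances(
    ImageId='ami-0abcdef1234567890',
    InstanceType='m5.large',
    MinCount=3,
    MaxCount=3,
    Placement={'GroupName': 'critical-spread'},
)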
The following 20 pages were the most viewed AWS Identity and Access Management (IAM) documentation pages in 2017. I have included a brief description with each link to explain what each page covers. Use this list to see what other AWS customers have been viewing and perhaps to pique your own interest in a topic you’ve been meaning to learn about.
What Is IAM? Learn more about IAM, a web service that helps you securely control access to AWS resources for your users. You use IAM to control who can use your AWS resources (authentication) and how they can use resources (authorization).
Creating an IAM User in Your AWS Account You can create one or more IAM users in your AWS account. You might create an IAM user when someone joins your organization, or when you have a new application that needs to make API calls to AWS.
Managing Access Keys for IAM Users Users need their own access keys to make programmatic calls to AWS from the AWS Command Line Interface (AWS CLI), Tools for Windows PowerShell, the AWS SDKs, or direct HTTP calls using the APIs for individual AWS services. To fill this need, you can create, modify, view, or rotate access keys (access key IDs and secret access keys) for IAM users.
IAM JSON Policy Elements Reference Learn more about the elements that you can use when you create a JSON policy. View additional JSON policy examples and learn about conditions, supported data types, and how they are used in various services.
IAM Best Practices To help secure your AWS resources, follow these best practices for IAM.
Using Multi-Factor Authentication (MFA) in AWS For an additional layer of security when signing in to your AWS account, AWS recommends that you configure MFA to help protect your AWS resources. MFA adds extra security because it requires users to enter a unique authentication code from an approved authentication device when they access AWS websites or services.
The IAM Console and the Sign-in Page Learn about the IAM-enabled AWS Management Console sign-in page and how to sign in as an AWS account root user or as an IAM user. To help your users sign in easily, create a unique sign-in URL for your account.
How Users Sign In to Your Account After you create IAM users and passwords for each, your users can sign in to the AWS Management Console using your account ID or alias, or from a special URL that includes your account ID.
Working with Server Certificates Some AWS services can use server certificates that you manage with IAM or AWS Certificate Manager (ACM). ACM is the preferred tool to provision, manage, and deploy your server certificates. Use IAM as a certificate manager only when you must support HTTPS connections in a region that is not supported by ACM.
IAM Roles A role is an AWS identity with permission policies that determine what the identity can and cannot do in AWS using temporary security credentials that are created dynamically and provided to the user. A role is intended to be assumable by anyone who needs it using these temporary security credentials.
IAM Policies Read an overview of policies, which are entities in AWS that, when attached to an identity or resource, define their permissions. Policies are stored in AWS as JSON documents attached to principals as identity-based policies or to resources as resource-based policies.
Example Policies This collection of policies can help you define permissions for your IAM identities, such as granting access to a specific Amazon DynamoDB table or launching Amazon EC2 instances in a specific subnet.
Using an IAM Role to Grant Permissions to Applications Running on Amazon EC2 Instances Use an IAM role to manage temporary credentials for applications that run on an EC2 instance. When you use a role, you do not have to distribute long-term credentials to an EC2 instance. Instead, the role supplies temporary permissions that applications can use when they make calls to other AWS resources.
Creating Your First IAM Admin User and Group As a best practice, do not use the AWS account root user for any task where it’s not required. Instead, learn how to create an IAM administrator user and group for yourself.
Temporary Security Credentials You can use the AWS Security Token Service (AWS STS) to create and provide trusted users with temporary security credentials that can control access to your AWS resources. Temporary security credentials work almost identically to the long-term access key credentials that your IAM users can use.
The AWS Account Root User When you first create an AWS account, you begin with a single sign-in identity that has complete access to all AWS services and resources in the account. This identity is called the AWS account root user and is accessed by signing in with the email address and password that you used to create the account. To manage your root user, follow the steps on this page.
In the “Comments” section below, let us know if you would like to see anything on these or other IAM documentation pages expanded or updated to make them more useful to you.
I’m getting ready to wrap up my work for the year, cleaning up my inbox and catching up on a few recent AWS launches that happened at and shortly after AWS re:Invent.
Last week we launched Amazon Linux 2. This is a modern version of Linux, designed to meet the security, stability, and productivity needs of enterprise environments while giving you timely access to new tools and features. It also includes all of the things that made the Amazon Linux AMI popular, including AWS integration, cloud-init, a secure default configuration, regular security updates, and AWS Support. From that base, we have added many new features including:
Long-Term Support – You can use Amazon Linux 2 in situations where you want to stick with a single major version of Linux for an extended period of time, perhaps to avoid re-qualifying your applications too frequently. This build (2017.12) is a candidate for LTS status; the final determination will be made based on feedback in the Amazon Linux Discussion Forum. Long-term support for the Amazon Linux 2 LTS build will include security updates, bug fixes, user-space Application Binary Interface (ABI), and user-space Application Programming Interface (API) compatibility for 5 years.
Extras Library – You can now get fast access to fresh, new functionality while keeping your base OS image stable and lightweight. The Amazon Linux Extras Library eliminates the age-old tradeoff between OS stability and access to fresh software. It contains open source databases, languages, and more, each packaged together with any needed dependencies.
Tuned Kernel – You have access to the latest 4.9 LTS kernel, with support for the latest EC2 features and tuned to run efficiently in AWS and other virtualized environments.
Systemd – Amazon Linux 2 includes the systemd init system, designed to provide better boot performance and increased control over individual services and groups of interdependent services. For example, you can indicate that Service B must be started only after Service A is fully started, or that Service C should start on a change in network connection status.
Wide Availability – Amazon Linux 2 is available in all AWS Regions in AMI and Docker image form. Virtual machine images for Hyper-V, KVM, VirtualBox, and VMware are also available. You can build and test your applications on your laptop or in your own data center and then deploy them to AWS.
I’m interested in the Extras Library; I can see which topics (lists of packages) are available by running the amazon-linux-extras list command:
As you can see, the library includes languages, editors, and web tools that receive frequent updates. Each topic contains all of the dependencies that are needed to install the package on Amazon Linux 2. For example, the Rust topic includes the cmake build system for Rust, cargo for Rust package maintenance, and the LLVM-based compiler toolchain for Rust.
SNS Updates Many AWS customers use the Amazon Linux AMIs as a starting point for their own AMIs. If you do this and would like to kick off your build process whenever a new AMI is released, you can subscribe to an SNS topic:
You can be notified by email, invoke an AWS Lambda function, and so forth.
This post was developed and written by Jeremy Cowan, Thomas Fuller, Samuel Karp, and Akram Chetibi.
—
Containers have revolutionized the way that developers build, package, deploy, and run applications. Initially, containers only supported code and tooling for Linux applications. With the release of Docker Engine for Windows Server 2016, Windows developers have started to realize the gains that their Linux counterparts have experienced for the last several years.
This week, we’re adding support for running production workloads in Windows containers using Amazon Elastic Container Service (Amazon ECS). Now, Amazon ECS provides an ECS-Optimized Windows Server Amazon Machine Image (AMI). This AMI is based on Windows Server 2016 Datacenter and includes Docker 17.06.2-ee-5 (Enterprise Edition) along with version 1.16 of the ECS agent, which now runs as a native Windows service. It provides improved instance and container launch time performance.
In this post, I discuss the benefits of this new support, and walk you through getting started running Windows containers with Amazon ECS.
When AWS released the Windows Server 2016 Base with Containers AMI, the ECS agent ran as a process, which made it difficult to monitor and manage. As a service, the agent can be health-checked, managed, and restarted no differently than other Windows services. The AMI also includes pre-cached images for Windows Server Core 2016 and Windows Server Nano Server 2016. Because the images are cached in the AMI, launching new Windows containers is significantly faster. When Docker images include a layer that’s already cached on the instance, Docker re-uses that layer instead of pulling it from the Docker registry.
The ECS agent and an accompanying ECS PowerShell module used to install, configure, and run the agent come pre-installed on the AMI. This guarantees there is a specific platform version available on the container instance at launch. Because the software is included, you don’t have to download it from the internet. This saves startup time.
The Windows-compatible ECS-optimized AMI also reports CPU and memory utilization and reservation metrics to Amazon CloudWatch. Using the CloudWatch integration with ECS, you can create alarms that trigger dynamic scaling events to automatically add or remove capacity to your EC2 instances and ECS tasks.
Getting started
To help you get started running Windows containers on ECS, I’ve forked the ECS reference architecture to build an ECS cluster made up of Windows instances instead of Linux instances. You can pull the latest version of the reference architecture for Windows.
The reference architecture is a layered CloudFormation stack, in that it calls other stacks to create the environment. Within the stack, the ecs-windows-cluster.yaml file contains the instructions for bootstrapping the Windows instances and configuring the ECS cluster. To configure the instances outside of AWS CloudFormation (for example, through the CLI or the console), you can add the following commands to your instance’s user data:
If you don’t specify a cluster name when you initialize the agent, the instance is joined to the default cluster.
Adding -EnableTaskIAMRole when initializing the agent adds support for IAM roles for tasks. Previously, enabling this setting meant running a complex script and setting an environment variable before you could assign roles to your ECS tasks.
When you enable IAM roles for tasks on Windows, it consumes port 80 on the host. If you have tasks that listen on port 80 on the host, I recommend configuring a service for them that uses load balancing. You can use port 80 on the load balancer, and the traffic can be routed to another host port on your container instances. For more information, see Service Load Balancing.
Create a cluster
To create a new ECS cluster, choose Launch stack, or pull the GitHub project to your local machine and run the following command:
aws cloudformation create-stack --template-body file://<path to master-windows.yaml> --stack-name <name>
Upload your container image
Now that you have a cluster running, step through how to build and push an image into a container repository. You use a repository hosted in Amazon Elastic Container Registry (Amazon ECR) for this, but you could also use Docker Hub. To build and push an image to a repository, install Docker on your Windows* workstation. You also create a repository and assign the necessary permissions to the account that pushes your image to Amazon ECR. For detailed instructions, see Pushing an Image.
* If you are building an image that is based on Windows layers, then you must use a Windows environment to build and push your image to the registry.
Write your task definition
Now that your image is built and ready, the next step is to run your Windows containers using a task.
Start by creating a new task definition based on the windows-simple-iis image from Docker Hub.
You can now go back into the Task Definition page and see windows-simple-iis as an available task definition.
There are a few important aspects of the task definition file to note when working with Windows containers. First, the hostPort is configured as 8080, which is necessary because the ECS agent currently uses port 80 to enable IAM roles for tasks, which are required for least-privilege security configurations.
There are also some fairly standard task parameters that are intentionally not included. For example, network mode is not available with Windows at the time of this release, so keep that setting blank to allow Docker to configure WinNAT, the only option available today.
Also, some parameters work differently with Windows than they do with Linux. The CPU limits that you define in the task definition are absolute, whereas on Linux they are weights. For information about other task parameters that are supported or possibly different with Windows, see the documentation.
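As a hedged sketch of what registering such a task definition looks like from the AWS SDK for Python, here is a minimal version; the container name, CPU, and memory values are illustrative, while the image and host port match the example above.

import boto3

ecs = boto3.client('ecs')

# Register a minimal Windows task definition similar to windows-simple-iis.
ecs.register_task_definition(
    family='windows-simple-iis',
    containerDefinitions=[{
        'name': 'windows-simple-iis',
        'image': 'microsoft/iis',
        'cpu': 512,        # CPU units are absolute limits on Windows
        'memory': 768,
        'essential': True,
        'portMappings': [{
            'containerPort': 80,
            'hostPort': 8080,   # host port 80 is reserved for IAM roles for tasks
            'protocol': 'tcp',
        }],
    }],
)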
Run your containers
At this point, you are ready to run containers. There are two options to run containers with ECS:
Task
Service
A task is typically a short-lived process that ECS creates; it can’t be configured for active monitoring or scaling. A service is meant for longer-running containers and can be configured to use a load balancer, minimum/maximum capacity settings, and a number of other knobs and switches to help ensure that your code keeps running. In both cases, you can pick a placement strategy and a specific IAM role for your container.
Select the task definition that you created above and choose Action, Run Task.
Leave the settings on the next page to the default values.
Select the ECS cluster created when you ran the CloudFormation template.
Choose Run Task to start the process of scheduling a Docker container on your ECS cluster.
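The console steps above can also be done from the API. Here is a hedged boto3 sketch; the cluster name is a placeholder, so use the name produced by your CloudFormation stack.

import boto3

ecs = boto3.client('ecs')

# Run one copy of the task definition on the Windows cluster.
response = ecs.run_task(
    cluster='windows-cluster',
    taskDefinition='windows-simple-iis',
    count=1,
)
print(response['tasks'][0]['lastStatus'])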
You can now go to the cluster and watch the status of your task. It may take 5–10 minutes for the task to go from PENDING to RUNNING, mostly because it takes time to download all of the layers necessary to run the microsoft/iis image. After the status is RUNNING, you should see the following results:
You may have noticed that the example task definition is named windows-simple-iis:2. This is because I created a second version of the task definition, which is one of the powerful capabilities of using ECS. You can make the task definitions part of your source code and then version them. You can also roll out new versions and practice blue/green deployment, switching to reduce downtime and improve the velocity of your deployments!
After the task has moved to RUNNING, you can see your website hosted in ECS. Find the public IP or DNS for your ECS host. Remember that you are hosting on port 8080. Make sure that the security group allows ingress from your client IP address to that port and that your VPC has an internet gateway associated with it. You should see a page that looks like the following:
This is a nice start to deploying a simple single instance task, but what if you had a Web API to be scaled out and in based on usage? This is where you could look at defining a service and collecting CloudWatch data to add and remove both instances of the task. You could also use CloudWatch alarms to add more ECS container instances and keep up with the demand. The former is built into the configuration of your service.
Select the task definition and choose Create Service.
Associate a load balancer.
Set up Auto Scaling.
The following screenshot shows an example where you would add an additional task instance when the CPU Utilization CloudWatch metric is over 60% on average over three consecutive measurements. This may not be aggressive enough for your requirements; it’s meant to show you the option to scale tasks the same way you scale ECS instances with an Auto Scaling group. The difference is that these tasks start much faster because all of the base layers are already on the ECS host.
This is just scratching the surface of the flexibility that you get from using containers and Amazon ECS. For more information, see the Amazon ECS Developer Guide and ECS Resources.
Today we’re launching Amazon Time Sync Service, a time synchronization service delivered over Network Time Protocol (NTP), which uses a fleet of redundant satellite-connected and atomic clocks in each region to deliver a highly accurate reference clock. This service is provided at no additional charge and is immediately available in all public AWS regions to all instances running in a VPC.
You can access the service via the link local 169.254.169.123 IP address. This means you don’t need to configure external internet access and the service can be securely accessed from within your private subnets.
Setup
Chrony is a different implementation of NTP than ntpd, and it’s able to synchronize the system clock faster and with better accuracy than ntpd. I’d recommend using Chrony unless you have a legacy reason to use ntpd.
Installing and configuring chrony on Amazon Linux is as simple as:
Alternatively, just modify your existing NTP config by adding the line server 169.254.169.123 prefer iburst.
On Windows you can run the following commands in PowerShell or a command prompt:
net stop w32time
w32tm /config /syncfromflags:manual /manualpeerlist:"169.254.169.123"
w32tm /config /reliable:yes
net start w32time
Leap Seconds
Time is hard. Science, and society, measure time with respect to the International Celestial Reference Frame (ICRF), which is computed using long baseline interferometry of distant quasars, GPS satellite orbits, and laser ranging of the moon (cool!). Irregularities in Earth’s rate of rotation cause UTC to drift from time with respect to the ICRF. To address this clock drift, the International Earth Rotation and Reference Systems Service (IERS) occasionally introduces an extra second into UTC to keep it within 0.9 seconds of real time.
Leap seconds are known to cause application errors, and this can be a concern for many savvy developers and systems administrators. The 169.254.169.123 clock smooths out leap seconds over a period of time (commonly called leap smearing), which makes it easy for your applications to deal with leap seconds.
This timely update should provide immediate benefits to anyone previously relying on an external time synchronization service.
AWS Systems Manager is a new way to manage your cloud and hybrid IT environments. AWS Systems Manager provides a unified user interface that simplifies resource and application management, shortens the time to detect and resolve operational problems, and makes it easy to operate and manage your infrastructure securely at scale. This service is absolutely packed full of features. It defines a new experience around grouping, visualizing, and reacting to problems using features from products like Amazon EC2 Systems Manager (SSM) to enable rich operations across your resources.
As I said above, there are a lot of powerful features in this service and we won’t be able to dive deep on all of them but it’s easy to go to the console and get started with any of the tools.
Resource Groupings
Resource Groups allow you to create logical groupings of most resources that support tagging, such as Amazon Elastic Compute Cloud (EC2) instances, Amazon Simple Storage Service (S3) buckets, Elastic Load Balancing load balancers, Amazon Relational Database Service (RDS) instances, Amazon Virtual Private Clouds (VPCs), Amazon Kinesis streams, Amazon Route 53 zones, and more. Previously, you could use the AWS Console to define resource groupings, but AWS Systems Manager provides this new resource group experience via a new console and API. These groupings are a fundamental building block of Systems Manager in that they are frequently the target of various operations you may want to perform, such as compliance management, software inventories, patching, and other automations.
You start by defining a group based on tag filters. From there you can view all of the resources in a centralized console. You would typically use these groupings to differentiate between applications, application layers, and environments like production or dev, but you can make your own rules about how to use them as well. If you imagine a typical 3-tier web app, you might have a few EC2 instances, an ELB, a few S3 buckets, and an RDS instance. You can define a grouping for that application that captures all of those different resources simultaneously.
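As a hedged sketch of how defining such a group looks from the API, here is a minimal boto3 example; the group name and tag keys/values are placeholders for whatever convention you use.

import boto3, json

rg = boto3.client('resource-groups')

# Group every taggable resource that carries the application and environment tags.
rg.create_group(
    Name='retail-web-prod',
    ResourceQuery={
        'Type': 'TAG_FILTERS_1_0',
        'Query': json.dumps({
            'ResourceTypeFilters': ['AWS::AllSupported'],
            'TagFilters': [
                {'Key': 'Application', 'Values': ['retail-web']},
                {'Key': 'Environment', 'Values': ['production']},
            ],
        }),
    },
)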
Insights
AWS Systems Manager automatically aggregates and displays operational data for each resource group through a dashboard. You no longer need to navigate through multiple AWS consoles to view all of your operational data. You can easily integrate your existing Amazon CloudWatch dashboards, AWS Config rules, AWS CloudTrail trails, AWS Trusted Advisor notifications, and AWS Personal Health Dashboard performance and availability alerts. You can also easily view your software inventories across your fleet. AWS Systems Manager also provides a compliance dashboard allowing you to see the state of various security controls and patching operations across your fleets.
Acting on Insights
Building on the success of EC2 Systems Manager (SSM), AWS Systems Manager takes all of the features of SSM and provides a central place to access them. These are the same experiences you would have through SSM, with a more accessible console and centralized interface. You can use the resource groups you’ve defined in Systems Manager to visualize and act on groups of resources.
Automation
Automation allows you to define common IT tasks as a JSON document that specifies a list of steps. You can also use community-published documents. These documents can be executed through the console, CLIs, SDKs, scheduled maintenance windows, or triggered based on changes in your infrastructure through CloudWatch Events. You can track and log the execution of each step in the documents and prompt for additional approvals. It also allows you to incrementally roll out changes and automatically halt when errors occur. You can start executing an automation directly on a resource group, and it will apply itself to the resources that it understands within the group.
Run Command
Run Command is a superior alternative to enabling SSH on your instances. It provides safe, secure remote management of your instances at scale without logging into your servers, replacing the need for SSH bastions or remote PowerShell. It has granular IAM permissions that allow you to restrict which roles or users can run certain commands.
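As a hedged sketch, running a command across a tagged set of instances from the AWS SDK for Python looks something like the following; the tag key/value and the shell command itself are placeholders.

import boto3

ssm = boto3.client('ssm')

# Run a shell command on every managed Linux instance tagged Environment=production.
response = ssm.send_command(
    Targets=[{'Key': 'tag:Environment', 'Values': ['production']}],
    DocumentName='AWS-RunShellScript',
    Parameters={'commands': ['yum -y update']},
)
print(response['Command']['CommandId'])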
Patch Manager, Maintenance Windows, and State Manager
I’ve written about Patch Manager before and if you manage fleets of Windows and Linux instances it’s a great way to maintain a common baseline of security across your fleet.
Maintenance windows allow you to schedule instance maintenance and other disruptive tasks for a specific time window.
State Manager allows you to control various server configuration details like anti-virus definitions, firewall settings, and more. You can define policies in the console or run existing scripts, PowerShell modules, or even Ansible playbooks directly from S3 or GitHub. You can query State Manager at any time to view the status of your instance configurations.
Things To Know
There’s some interesting terminology here. We haven’t done the best job of naming things in the past so let’s take a moment to clarify. EC2 Systems Manager (sometimes called SSM) is what you used before today. You can still invoke aws ssm commands. However, AWS Systems Manager builds on and enhances many of the tools provided by EC2 Systems Manager and allows those same tools to be applied to more than just EC2. When you see the phrase “Systems Manager” in the future you should think of AWS Systems Manager and not EC2 Systems Manager.
AWS Systems Manager with all of this useful functionality is provided at no additional charge. It is immediately available in all public AWS regions.
The best part about these services is that even with their tight integrations each one is designed to be used in isolation as well. If you only need one component of these services it’s simple to get started with only that component.
There’s a lot more than I could ever document in this post so I encourage you all to jump into the console and documentation to figure out where you can start using AWS Systems Manager.
Companies using .NET applications to access sensitive user information, such as employee salaries, Social Security numbers, and credit card information, need an easy and secure way to manage access for users and applications.
For example, let’s say that your company has a .NET payroll application. You want your Human Resources (HR) team to manage and update the payroll data for all the employees in your company. You also want your employees to be able to see their own payroll information in the application. To meet these requirements in a user-friendly and secure way, you want to manage access to the .NET application by using your existing Microsoft Active Directory identities. This enables you to provide users with single sign-on (SSO) access to the .NET application and to manage permissions using Active Directory groups. You also want the .NET application to authenticate itself to access the database, and to limit access to the data in the database based on the identity of the application user.
In this blog post, I give an overview of how to use AWS Managed Microsoft AD to manage group Managed Service Accounts (gMSAs) and Kerberos constrained delegation (KCD), and I demonstrate how you can configure a gMSA and KCD in six steps for a .NET application:
Create your AWS Managed Microsoft AD.
Create your Amazon RDS for SQL Server database.
Create a gMSA for your .NET application.
Deploy your .NET application.
Configure your .NET application to use the gMSA.
Configure KCD for your .NET application.
Solution overview
The following diagram shows the components of a .NET application that uses Amazon RDS for SQL Server with a gMSA and KCD. The diagram also illustrates authentication and access and is numbered to show the six key steps required to use a gMSA and KCD. To deploy this solution, the AWS Managed Microsoft AD directory must be in the same Amazon Virtual Private Cloud (VPC) as RDS for SQL Server. For this example, my company name is Example Corp., and my directory uses the domain name, example.com.
Deploy the solution
The following six steps (numbered to correlate with the preceding diagram) walk you through configuring and using a gMSA and KCD.
2. Create your Amazon RDS for SQL Server database
Using the RDS console, create your Amazon RDS for SQL Server database instance in the same Amazon VPC where your directory is running, and enable Windows Authentication. To enable Windows Authentication, select your directory in the Microsoft SQL Server Windows Authentication section in the Configure Advanced Settings step of the database creation workflow (see the following screenshot).
In my example, I create my Amazon RDS for SQL Server db-example database, and enable Windows Authentication to allow my db-example database to authenticate against my example.com directory.
3. Create a gMSA for your .NET application
Now that you have deployed your directory, database, and application, you can create a gMSA for your .NET application.
Log on to the instance on which you installed the Active Directory administration tools by using a user that is a member of the Admins security group or the Managed Service Accounts Admins security group in your organizational unit (OU). For my example, I use the Admin user in the example OU.
Identify which .NET application servers (hosts) will run your .NET application. Create a new security group in your OU and add your .NET application servers as members of this new group. This allows a group of application servers to use a single gMSA, instead of creating one gMSA for each server. In my example, I create a group, App_server_grp, in my example OU. I also add Appserver1, which is my .NET application server computer name, as a member of this new group.
Create a gMSA in your directory by running Windows PowerShell from the Start menu. The basic syntax to create the gMSA at the Windows PowerShell command prompt follows.
In my example, the gMSAname is gMSAexample, the DNSHostName is example.com, and the PrincipalsAllowedToRetrieveManagedPassword is the recently created security group, App_server_grp.
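With the Active Directory module for Windows PowerShell, the command with these example values would look roughly like the following (a sketch using the New-ADServiceAccount cmdlet; adjust the names for your environment):

PS C:\> New-ADServiceAccount -Name gMSAexample `
          -DNSHostName example.com `
          -PrincipalsAllowedToRetrieveManagedPassword App_server_grp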
You also can confirm that you created the gMSA by opening the Active Directory Users and Computers utility located in your Administrative Tools folder, expanding the domain (example.com in my case), and expanding the Managed Service Accounts folder.
5. Configure your .NET application to use the gMSA
You can configure your .NET application to use the gMSA to enforce a strong password security policy and ensure password rotation for your service account. This helps improve the security of your .NET application and simplifies its management. Configure your .NET application in two steps:
Grant the gMSA the required permissions to run your .NET application in the respective application folders. This is a critical step because when you change the application pool identity account to use the gMSA, downtime can occur if the gMSA does not have the application’s required permissions. Therefore, make sure you first test the configurations in your development and test environments.
Configure your application pool identity on IIS to use the gMSA as the service account. When you configure a gMSA as the service account, you include the $ at the end of the gMSA name. You do not need to provide a password because AWS Managed Microsoft AD automatically creates and rotates the password. In my example, my service account is gMSAexample$, as shown in the following screenshot.
You have completed all the steps to use gMSA to create and rotate your .NET application service account password! Now, you will configure KCD for your .NET application.
6. Configure KCD for your .NET application
You now are ready to allow your .NET application to have access to other services by using the user identity’s permissions instead of the application service account’s permissions. Note that KCD and gMSA are independent features, which means you do not have to create a gMSA to use KCD. For this example, I am using both features to show how you can use them together. To configure a regular service account such as a user or local built-in account, see the Kerberos constrained delegation with ASP.NET blog post on MSDN.
In my example, my goal is to delegate to the gMSAexample account the ability to enforce the user’s permissions to my db-example SQL Server database, instead of the gMSAexample account’s permissions. For this, I have to update the msDS-AllowedToDelegateTo gMSA attribute. The value for this attribute is the service principal name (SPN) of the service instance that you are targeting, which in this case is the db-example Amazon RDS for SQL Server database.
The SPN format for the msDS-AllowedToDelegateTo attribute is a combination of the service class, the Kerberos authentication endpoint, and the port number. The Amazon RDS for SQL Server Kerberos authentication endpoint format is [database_name].[domain_name]. The value for my msDS-AllowedToDelegateTo attribute is MSSQLSvc/db-example.example.com:1433, where MSSQLSvc and 1433 are the SQL Server Database service class and port number standards, respectively.
Follow these steps to perform the msDS-AllowedToDelegateTo gMSA attribute configuration (a PowerShell alternative follows the steps):
Log on to your Active Directory management instance with a user identity that is a member of the Kerberos Delegation Admins security group. In this case, I will use admin.
Open the Active Directory Users and Computers utility located in your Administrative Tools folder, choose View, and then choose Advanced Features.
Expand your domain name (example.com in this example), and then choose the Managed Service Accounts folder. Right-click the gMSA account for the application pool you want to enable for Kerberos delegation, choose Properties, and choose the Attribute Editor tab.
Search for the msDS-AllowedToDelegateTo attribute on the Attribute Editor tab and choose Edit.
Enter the MSSQLSvc/db-example.example.com:1433 value and choose Add.
Choose OK and Apply, and your KCD configuration is complete.
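Alternatively, assuming the Active Directory module’s Set-ADServiceAccount cmdlet supports the -Add parameter in your environment (as the other Set-AD* cmdlets do), the same attribute change can be scripted roughly as follows:

PS C:\> Set-ADServiceAccount -Identity gMSAexample `
          -Add @{'msDS-AllowedToDelegateTo'='MSSQLSvc/db-example.example.com:1433'}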
Congratulations! At this point, your application is using a gMSA rather than an embedded static user identity and password, and the application is able to access SQL Server using the identity of the application user. The gMSA eliminates the need for you to rotate the application’s password manually, and it allows you to better scope permissions for the application. When you use KCD, you can enforce access to your database consistently based on user identities at the database level, which prevents improper access that might otherwise occur because of an application error.
Summary
In this blog post, I demonstrated how to simplify the deployment and improve the security of your .NET application by using a group Managed Service Account and Kerberos constrained delegation with your AWS Managed Microsoft AD directory. I also outlined the main steps to get your .NET environment up and running on a managed Active Directory and SQL Server infrastructure. This approach will make it easier for you to build new .NET applications in the AWS Cloud or migrate existing ones in a more secure way.
For additional information about using group Managed Service Accounts and Kerberos constrained delegation with your AWS Managed Microsoft AD directory, see the AWS Directory Service documentation.
You can now encrypt and decrypt your data at the command line and in scripts—no cryptography or programming expertise is required. The new AWS Encryption SDK Command Line Interface (AWS Encryption CLI) brings the AWS Encryption SDK to the command line.
With the AWS Encryption CLI, you can take advantage of the advanced data protection built into the AWS Encryption SDK, including envelope encryption and strong algorithm suites, such as 256-bit AES-GCM with HKDF. The AWS Encryption CLI supports best-practice features, such as authenticated encryption with symmetric encryption keys and asymmetric signing keys, as well as unique data keys for each encryption operation. You can use the CLI with customer master keys (CMKs) from AWS Key Management Service (AWS KMS), master keys that you manage in AWS CloudHSM, or master keys from your own custom master key provider, but the AWS Encryption CLI does not require any AWS service.
The AWS Encryption CLI is built on the AWS Encryption SDK for Python and is fully interoperable with all language-specific implementations of the AWS Encryption SDK. It is supported on Linux, macOS, and Windows platforms. You can encrypt and decrypt your data in a shell on Linux and macOS, in a Command Prompt window (cmd.exe) on Windows, or in a PowerShell console on any system.
Let’s use the AWS Encryption CLI to encrypt a file called secret.txt in the current directory and write the encrypted output to the same directory. This secret.txt file contains a Hello World string, but it might contain data that is critical to your business.
$ ls
secret.txt
$ cat secret.txt
Hello World
I’m using a Linux shell, but you can run similar commands in a macOS shell, a Command Prompt window, or a PowerShell console.
When you encrypt data, you specify a master key. This example uses an AWS KMS CMK, but you can use a master key from any master key provider that is compatible with the AWS Encryption SDK. The AWS Encryption CLI uses the master key to generate a unique data key for each file that it encrypts.
If you use an AWS KMS CMK as your master key, you need to install and configure the AWS Command Line Interface (AWS CLI) so that the credentials you use to authenticate to AWS KMS are available to the AWS Encryption CLI. Those credentials must give you permission to call the AWS KMS GenerateDataKey and Decrypt APIs on the CMK.
The first line of this example saves an AWS KMS CMK ID in the $keyID variable. The second line encrypts the data in the secret.txt file. (The backslash, “\”, is the line continuation character in Linux shells.)
To run the following command, substitute a valid CMK identifier for the placeholder value in the command.
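A minimal sketch of that two-line example follows (the key ID is a placeholder, and the metadata file path is only an illustration):

$ keyID=1234abcd-12ab-34cd-56ef-1234567890ab
$ aws-encryption-cli --encrypt \
                     --input secret.txt \
                     --master-keys key=$keyID \
                     --encryption-context purpose=test \
                     --metadata-output ~/metadata \
                     --output .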
This command uses the --encrypt (-e) parameter to specify the encryption action and the --master-keys (-m) parameter with a key attribute to specify an AWS KMS CMK. If you’re not using an AWS KMS CMK, you need to include a provider attribute that identifies the master key provider.
The command uses the --encryption-context parameter (-c) to specify an encryption context, purpose=test, for the operation. The encryption context is non-secret data that is cryptographically bound to the encrypted data and included in plaintext in the encrypted message that the CLI returns. Providing additional authenticated data, such as an encryption context, is a recommended best practice.
The --metadata-output parameter tells the AWS Encryption CLI where to write the metadata for the encrypt command. The metadata includes the full paths to the input and output files, the encryption context, the algorithm suite, and other valuable information that you can use to review the operation and verify that it meets your security standards.
The --input (-i) and --output (-o) parameters are required in every AWS Encryption CLI command. In this example, the input file is the secret.txt file. The output location is the current directory, which is represented by a dot (“.”).
When the --encrypt command is successful, it creates a new file that contains the encrypted data, but it does not return any output. To see the results of the command, use a directory listing command, such as ls or dir. Running an ls command in this example shows that the AWS Encryption CLI generated the secret.txt.encrypted file.
$ ls
secret.txt secret.txt.encrypted
By default, the output file that the --encrypt command creates has the same name as the input file, plus a .encrypted suffix. You can use the --suffix parameter to specify a custom suffix.
The secret.txt.encrypted file contains a single, portable, secure encrypted message. The encrypted message includes the encrypted data, an encrypted copy of the data key that encrypted the data, and metadata, including the plaintext encryption context that I provided.
You can manage an encrypted file in any way that you choose, including copying it to an Amazon S3 bucket or archiving it for later use.
Decrypt a file
Now, let’s use the AWS Encryption CLI to decrypt the secret.txt.encrypted file. If you have the required permissions on your master key, you can use any version of the AWS Encryption SDK to decrypt a file that the AWS Encryption CLI encrypted, including the AWS Encryption SDK libraries in Java and Python.
The --decrypt command requires an encrypted message, like the one that the --encrypt command returned, and both --input and --output parameters.
This command has no --master-keys parameter. A --master-keys parameter is required only if you’re not using an AWS KMS CMK.
In this example command, the --input parameter specifies the secret.txt.encrypted file. The --output parameter specifies the current directory, which again is represented by a dot (“.”).
The --encryption-context parameter supplies the same encryption context that was used in the encrypt command. This parameter is not required, but verifying the encryption context during decryption is a cryptographic best practice.
The --metadata-output parameter tells the command where to write the metadata for the decrypt command. If the file exists, this parameter appends the metadata to the existing file. The AWS Encryption CLI also has parameters that overwrite the metadata file or suppress the metadata.
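Putting those parameters together, the decrypt command looks something like this (paths and the metadata file location follow the earlier encrypt example):

$ aws-encryption-cli --decrypt \
                     --input secret.txt.encrypted \
                     --encryption-context purpose=test \
                     --metadata-output ~/metadata \
                     --output .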
When it is successful, the decrypt command generates the file of decrypted (plaintext) data, but it does not return any output. To see the results of the decryption command, use a command that gets the content of the file, such as cat or Get-Content.
$ ls
secret.txt secret.txt.encrypted secret.txt.encrypted.decrypted
$ cat secret.txt.encrypted.decrypted
Hello World
The output file that the --decrypt command created has the same name as the input file, plus a .decrypted suffix. The --suffix parameter works on --decrypt commands, too.
Encrypt directories and more
In addition to encrypting and decrypting a single file, you can use the AWS Encryption CLI to encrypt and decrypt strings that you pipe to the CLI, as well as all or selected files in a directory and its subdirectories, or on local or remote volumes. We have examples for you to try in the AWS Encryption SDK documentation.
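For instance, here is a hedged sketch of encrypting every file under a directory with the --recursive option (the directory names are placeholders, and $keyID is the variable from the earlier example):

$ mkdir testdir.encrypted
$ aws-encryption-cli --encrypt --recursive \
                     --input testdir \
                     --output testdir.encrypted \
                     --master-keys key=$keyID \
                     --encryption-context purpose=test \
                     --metadata-output ~/metadata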
Contributed by Tiffany Jernigan, Developer Advocate for Amazon ECS
Get ready for takeoff!
We made sure that this year’s re:Invent is chock-full of containers: there are over 40 sessions! New to containers? No problem, we have several introductory sessions to help you dip your toes in. Been using containers for years and know the ins and outs? Don’t miss our technical deep-dives and interactive chalk talks led by container experts.
If you can’t make it to Las Vegas, you can catch the keynotes and session recaps from our livestream and on Twitch.
Session types
Not everyone learns the same way, so we have multiple types of breakout content:
Birds of a Feather – An interactive discussion with industry leaders about containers on AWS.
Breakout sessions – 60-minute presentations about building on AWS. Sessions are delivered by both AWS experts and customers and span all content levels.
Workshops – 2.5-hour, hands-on sessions that teach how to build on AWS. AWS credits are provided. Bring a laptop, and have an active AWS account.
Chalk Talks – 1-hour, highly interactive sessions with a smaller audience. They begin with a short lecture delivered by an AWS expert, followed by a discussion with the audience.
Session levels
Whether you’re new to containers or you’ve been using them for years, you’ll find useful information at every level.
Introductory Sessions are focused on providing an overview of AWS services and features, with the assumption that attendees are new to the topic.
Advanced Sessions dive deeper into the selected topic. Presenters assume that the audience has some familiarity with the topic, but may or may not have direct experience implementing a similar solution.
Expert Sessions are for attendees who are deeply familiar with the topic, have implemented a solution on their own already, and are comfortable with how the technology works across multiple services, architectures, and implementations.
Session locations
All container sessions are located in the Aria Resort.
MONDAY 11/27
Breakout sessions
Level 200 (Introductory)
CON202 – Getting Started with Docker and Amazon ECS By packaging software into standardized units, Docker gives code everything it needs to run, ensuring consistency from your laptop all the way into production. But once you have your code ready to ship, how do you run and scale it in the cloud? In this session, you become comfortable running containerized services in production using Amazon ECS. We cover container deployment, cluster management, service auto-scaling, service discovery, secrets management, logging, monitoring, security, and other core concepts. We also cover integrated AWS services and supplementary services that you can take advantage of to run and scale container-based services in the cloud.
Chalk talks
Level 200 (Introductory)
CON211 – Reducing your Compute Footprint with Containers and Amazon ECS Tomas Riha, platform architect for Volvo, shows how Volvo transitioned its WirelessCar platform from using Amazon EC2 virtual machines to containers running on Amazon ECS, significantly reducing cost. Tomas dives deep into the architecture that Volvo used to achieve the migration in under four months, including Amazon ECS, Amazon ECR, Elastic Load Balancing, and AWS CloudFormation.
CON212 – Anomaly Detection Using Amazon ECS, AWS Lambda, and Amazon EMR Learn about the architecture that Cisco CloudLock uses to enable automated security and compliance checks throughout the entire development lifecycle, from the first line of code through runtime. It includes integration with IAM roles, Amazon VPC, and AWS KMS.
Level 400 (Expert)
CON410 – Advanced CICD with Amazon ECS Control Plane Mohit Gupta, product and engineering lead for Clever, demonstrates how to extend the Amazon ECS control plane to optimize management of container deployments and how the control plane can be broadly applied to take advantage of new AWS services. This includes ark, an AWS CLI-based deployment tool for Amazon ECS; Dapple, a Slack-based automation system for deployments and notifications; and Kayvee, log and event routing libraries based on Amazon Kinesis.
Workshops
Level 200 (Introductory)
CON209 – Interstella 8888: Learn How to Use Docker on AWS Interstella 8888 is an intergalactic trading company that deals in rare resources, but their antiquated monolithic logistics systems are causing the business to lose money. Join this workshop to get hands-on experience with Docker as you containerize Interstella 8888’s aging monolithic application and deploy it using Amazon ECS.
CON213 – Hands-on Deployment of Kubernetes on AWS In this workshop, attendees get hands-on experience using Kubernetes and Kops (Kubernetes Operations), as described in our recent blog post. Attendees learn how to provision a cluster, assign role-based permissions and security, and launch a container. If you’re interested in learning best practices for running Kubernetes on AWS, don’t miss this workshop.
TUESDAY 11/28
Breakout Sessions
Level 200 (Introductory)
CON206 – Docker on AWS In this session, Docker Technical Staff Member Patrick Chanezon discusses how Finnish Rail, the national train system for Finland, is using Docker on Amazon Web Services to modernize their customer-facing applications, from ticket sales to reservations. Patrick also shares the state of Docker development and adoption on AWS, including explaining the opportunities and implications of efforts such as Project Moby, Docker EE, and how developers can use and contribute to Docker projects.
CON208 – Building Microservices on AWS Increasingly, organizations are turning to microservices to help them empower autonomous teams, letting them innovate and ship software faster than ever before. But implementing a microservices architecture comes with a number of new challenges that need to be dealt with. Chief among these is finding an appropriate platform to help manage a growing number of independently deployable services. In this session, Sam Newman, author of Building Microservices and a renowned expert in microservices strategy, discusses strategies for building scalable and robust microservices architectures. He also tells you how to choose the right platform for building microservices, and about common challenges and mistakes organizations make when they move to microservices architectures.
Level 300 (Advanced)
CON302 – Building a CICD Pipeline for Containers on AWS Containers can make it easier to scale applications in the cloud, but how do you set up your CICD workflow to automatically test and deploy code to containerized apps? In this session, we explore how developers can build effective CICD workflows to manage their containerized code deployments on AWS.
Ajit Zadgaonkar, Director of Engineering and Operations at Edmunds, walks through best practices for CICD architectures used by his team to deploy containers. We also deep dive into topics such as how to create an accessible CICD platform and architect for safe blue/green deployments.
CON307 – Building Effective Container Images Sick of getting paged at 2am and wondering “where did all my disk space go?” New Docker users often start with a stock image in order to get up and running quickly, but this can cause problems as your application matures and scales. Creating efficient container images is important to maximize resources, and deliver critical security benefits.
In this session, AWS Sr. Technical Evangelist Abby Fuller covers how to create effective images to run containers in production. This includes an in-depth discussion of how Docker image layers work, things you should think about when creating your images, working with Amazon ECR, and mise-en-place for install dependencies. Prakash Janakiraman, Co-Founder and Chief Architect at Nextdoor, discusses high-level and language-specific best practices for building images and how Nextdoor uses these practices to successfully scale their containerized services with a small team.
CON309 – Containerized Machine Learning on AWS Image recognition is a field of deep learning that uses neural networks to recognize the subject and traits for a given image. In Japan, Cookpad uses Amazon ECS to run an image recognition platform on clusters of GPU-enabled EC2 instances. In this session, hear from Cookpad about the challenges they faced building and scaling this advanced, user-friendly service to ensure high-availability and low-latency for tens of millions of users.
CON320 – Monitoring, Logging, and Debugging for Containerized Services As containers become more embedded in the platform, debug tools, traces, and logs become increasingly important. Nare Hayrapetyan, Senior Software Engineer, and Calvin French-Owen, Senior Technical Officer for Segment, discuss the principles of monitoring and debugging containers and the tools Segment has implemented and built for logging, alerting, metric collection, and debugging of containerized services running on Amazon ECS.
Chalk Talks
Level 300 (Advanced)
CON314 – Automating Zero-Downtime Production Cluster Upgrades for Amazon ECS Containers make it easy to deploy new code into production to update the functionality of a service, but what happens when you need to update the Amazon EC2 compute instances that your containers are running on? In this talk, we’ll deep dive into how to upgrade the Amazon EC2 infrastructure underlying a live production Amazon ECS cluster without affecting service availability. Matt Callanan, Engineering Manager at Expedia, walks through Expedia’s “PRISM” project that safely relocates hundreds of tasks onto new Amazon EC2 instances with zero downtime to applications.
CON322 – Maximizing Amazon ECS for Large-Scale Workloads Head of Mobfox DevOps, David Spitzer, shows how Mobfox used Docker and Amazon ECS to scale the Mobfox services and development teams to achieve low-latency networking and automatic scaling. This session covers Mobfox’s ecosystem architecture, comparing 2015 with today, the challenges Mobfox faced in growing their platform, and how they overcame them.
CON323 – Microservices Architectures for the Enterprise Salva Jung, Principal Engineer for Samsung Mobile, shares how Samsung Connect is architected as microservices running on Amazon ECS to securely, stably, and efficiently handle requests from millions of mobile and IoT devices around the world.
CON324 – Windows Containers on Amazon ECS Docker containers are commonly regarded as powerful and portable runtime environments for Linux code, but Docker also offers API and toolchain support for running Windows Servers in containers. In this talk, we discuss the various options for running Windows-based applications in containers on AWS.
CON326 – Remote Sensing and Image Processing on AWS Learn how Encirca services by DuPont Pioneer uses Amazon ECS powered by GPU instances and Amazon EC2 Spot Instances to run proprietary image-processing algorithms against satellite imagery. Mark Lanning and Ethan Harstad, engineers at DuPont Pioneer, show how this architecture has allowed them to process satellite imagery multiple times a day for each agricultural field in the United States in order to identify crop health changes.
Workshops
Level 300 (Advanced)
CON317 – Advanced Container Management at Catsndogs.lol Catsndogs.lol is a (fictional) company that needs help deploying and scaling its container-based application. During this workshop, attendees join the new DevOps team at Catsndogs.lol and help the company manage their applications using Amazon ECS and release new features to make customers happier than ever. Attendees get hands-on with service and container-instance auto-scaling, spot-fleet integration, container placement strategies, service discovery, secrets management with AWS Systems Manager Parameter Store, time-based and event-based scheduling, and automated deployment pipelines. If you are a developer interested in learning more about how Amazon ECS can accelerate your application development and deployment workflows, or if you are a systems administrator or DevOps person interested in understanding how Amazon ECS can simplify the operational model associated with running containers at scale, then this workshop is for you. You should have basic familiarity with Amazon ECS, Amazon EC2, and IAM.
Additional requirements:
The AWS CLI or AWS Tools for PowerShell installed
An AWS account with administrative permissions (including the ability to create IAM roles and policies) created at least 24 hours in advance.
WEDNESDAY 11/29
Birds of a Feather (BoF)
CON01 – Birds of a Feather: Containers and Open Source at AWS Cloud native architectures take advantage of on-demand delivery, global deployment, elasticity, and higher-level services to enable developer productivity and business agility. Open source is a core part of making cloud native possible for everyone. In this session, we welcome thought leaders from the CNCF, Docker, and AWS to discuss the cloud’s direction for growth and enablement of the open source community. We also discuss how AWS is integrating open source code into its container services and its contributions to open source projects.
Breakout Sessions
Level 300 (Advanced)
CON308 – Mastering Kubernetes on AWS Much progress has been made on how to bootstrap a cluster since Kubernetes’ first commit, and it now takes only a matter of minutes to go from zero to a running cluster on Amazon Web Services. However, evolving a simple Kubernetes architecture to be ready for production in a large enterprise can quickly become overwhelming with options for configuration and customization.
In this session, Arun Gupta, Open Source Strategist for AWS, and Raffaele Di Fazio, software engineer at leading European fashion platform Zalando, show the common practices for running Kubernetes on AWS and share insights from experience in operating tens of Kubernetes clusters in production on AWS. We cover options and recommendations on how to install and manage clusters, configure high availability, perform rolling upgrades and handle disaster recovery, as well as continuous integration and deployment of applications, logging, and security.
CON310 – Moving to Containers: Building with Docker and Amazon ECS If you’ve ever considered moving part of your application stack to containers, don’t miss this session. We cover best practices for containerizing your code, implementing automated service scaling and monitoring, and setting up automated CI/CD pipelines with fail-safe deployments. Manjeeva Silva and Thilina Gunasinghe show how McDonald’s implemented their home delivery platform in four months using Docker containers and Amazon ECS to serve tens of thousands of customers.
Level 400 (Expert)
CON402 – Advanced Patterns in Microservices Implementation with Amazon ECS Scaling a microservice-based infrastructure can be challenging in terms of both technical implementation and developer workflow. In this talk, AWS Solutions Architect Pierre Steckmeyer is joined by Will McCutchen, Architect at BuzzFeed, to discuss Amazon ECS as a platform for building a robust infrastructure for microservices. We look at the key attributes of microservice architectures and how Amazon ECS supports these requirements in production, from configuration to sophisticated workload scheduling to networking capabilities to resource optimization. We also examine what it takes to build an end-to-end platform on top of the wider AWS ecosystem, and what it’s like to migrate a large engineering organization from a monolithic approach to microservices.
CON404 – Deep Dive into Container Scheduling with Amazon ECS As your application’s infrastructure grows and scales, well-managed container scheduling is critical to ensuring high availability and resource optimization. In this session, we deep dive into the challenges and opportunities around container scheduling, as well as the different tools available within Amazon ECS and AWS to carry out efficient container scheduling. We discuss patterns for container scheduling available with Amazon ECS, the Blox scheduling framework, and how you can customize and integrate third-party scheduler frameworks to manage container scheduling on Amazon ECS.
Chalk Talks
Level 300 (Advanced)
CON312 – Building a Selenium Fleet on the Cheap with Amazon ECS with Spot Fleet Roberto Rivera and Matthew Wedgwood, engineers at RetailMeNot, give a practical overview of setting up a fleet of Selenium nodes running on Amazon ECS with Spot Fleet. They discuss the challenges of running Selenium with high availability at minimum cost, using Amazon ECS container introspection to connect the Selenium Hub with its nodes.
CON315 – Virtually There: Building a Render Farm with Amazon ECS Learn how 8i Corp scales its multi-tenanted, volumetric render farm up to thousands of instances using AWS, Docker, and an API-driven infrastructure. This render farm enables them to turn the video footage from an array of synchronized cameras into a photo-realistic hologram capable of playback on a range of devices, from mobile phones to high-end head mounted displays. Join Owen Evans, VP of Engineering for 8i, as they dive deep into how 8i’s rendering infrastructure is built and maintained by just a handful of people and powered by Amazon ECS.
CON325 – Developing Microservices – from Your Laptop to the Cloud Wesley Chow, Staff Engineer at Adroll, shows how his team extends Amazon ECS by enabling local development capabilities. Hologram, Adroll’s local development program, brings the capabilities of the Amazon EC2 instance metadata service to non-EC2 hosts, so that developers can run the same software on local machines with the same credentials source as in production.
CON327 – Patterns and Considerations for Service Discovery Roven Drabo, head of cloud operations at Kaplan Test Prep, illustrates Kaplan’s complete container automation solution using Amazon ECS along with how his team uses NGINX and HashiCorp Consul to provide an automated approach to service discovery and container provisioning.
CON328 – Building a Development Platform on Amazon ECS Quinton Anderson, Head of Engineering for Commonwealth Bank of Australia, walks through how they migrated their internal development and deployment platform from Mesos/Marathon to Amazon ECS. The platform uses a custom DSL to abstract a layered application architecture, in a way that makes it easy to plug or replace new implementations into each layer in the stack.
Workshops
Level 300 (Advanced)
CON318 – Interstella 8888: Monolith to Microservices with Amazon ECS Interstella 8888 is an intergalactic trading company that deals in rare resources, but their antiquated monolithic logistics systems are causing the business to lose money. Join this workshop to get hands-on experience deploying Docker containers as you break Interstella 8888’s aging monolithic application into containerized microservices. Using Amazon ECS and an Application Load Balancer, you create API-based microservices and deploy them leveraging integrations with other AWS services.
CON332 – Build a Java Spring Application on Amazon ECS This workshop teaches you how to lift and shift existing Spring and Spring Cloud applications onto the AWS platform. Learn how to build a Spring application container, understand bootstrap secrets, push container images to Amazon ECR, and deploy the application to Amazon ECS. Then, learn how to configure the deployment for production.
THURSDAY 11/30
Breakout Sessions
Level 200 (Introductory)
CON201 – Containers on AWS – State of the Union Just over four years after the first public release of Docker, and three years to the day after the launch of Amazon ECS, the use of containers has surged to run a significant percentage of production workloads at startups and enterprise organizations. Join Deepak Singh, General Manager of Amazon Container Services, as he covers the state of containerized application development and deployment trends, new container capabilities on AWS that are available now, options for running containerized applications on AWS, and how AWS customers successfully run container workloads in production.
Level 300 (Advanced)
CON304 – Batch Processing with Containers on AWS Batch processing is useful to analyze large amounts of data. But configuring and scaling a cluster of virtual machines to process complex batch jobs can be difficult. In this talk, we show how to use containers on AWS for batch processing jobs that can scale quickly and cost-effectively. We also discuss AWS Batch, our fully managed batch-processing service. You also hear from GoPro and Here about how they use AWS to run batch processing jobs at scale including best practices for ensuring efficient scheduling, fine-grained monitoring, compute resource automatic scaling, and security for your batch jobs.
Level 400 (Expert)
CON406 – Architecting Container Infrastructure for Security and Compliance While organizations gain agility and scalability when they migrate to containers and microservices, they also benefit from compliance and security, advantages that are often overlooked. In this session, Kelvin Zhu, lead software engineer at Okta, joins Mitch Beaumont, enterprise solutions architect at AWS, to discuss security best practices for containerized infrastructure. Learn how Okta built their development workflow with an emphasis on security through testing and automation. Dive deep into how containers enable automated security and compliance checks throughout the development lifecycle. Also understand best practices for implementing AWS security and secrets management services for any containerized service architecture.
Chalk Talks
Level 300 (Advanced)
CON329 – Full Software Lifecycle Management for Containers Running on Amazon ECS Learn how The Washington Post uses Amazon ECS to run Arc Publishing, a digital journalism platform that powers The Washington Post and a growing number of major media websites. Amazon ECS enabled The Washington Post to containerize their existing microservices architecture, avoiding a complete rewrite that would have delayed the platform’s launch by several years. In this session, Jason Bartz, Technical Architect at The Washington Post, discusses the platform’s architecture. He addresses the challenges of optimizing Arc Publishing’s workload, and managing the application lifecycle to support 2,000 containers running on more than 50 Amazon ECS clusters.
CON330 – Running Containerized HIPAA Workloads on AWS Nihar Pasala, Engineer at Aetion, discusses the Aetion Evidence Platform, a system for generating the real-world evidence used by healthcare decision makers to implement value-based care. This session discusses the architecture Aetion uses to run HIPAA workloads using containers on Amazon ECS, best practices, and learnings.
Level 400 (Expert)
CON408 – Building a Machine Learning Platform Using Containers on AWS DeepLearni.ng develops and implements machine learning models for complex enterprise applications. In this session, Thomas Rogers, Engineer for DeepLearni.ng, discusses how they worked with Scotiabank to leverage Amazon ECS, Amazon ECR, Docker, GPU-accelerated Amazon EC2 instances, and TensorFlow to develop a retail risk model that helps manage payment collections for millions of Canadian credit card customers.
Workshops
Level 300 (Advanced)
CON319 – Interstella 8888: CICD for Containers on AWS Interstella 8888 is an intergalactic trading company that deals in rare resources, but their antiquated monolithic logistics systems are causing the business to lose money. Join this workshop to learn how to set up a CI/CD pipeline for containerized microservices. You get hands-on experience deploying Docker container images using Amazon ECS, AWS CloudFormation, AWS CodeBuild, and AWS CodePipeline, automating everything from code check-in to production.
FRIDAY 12/1
Breakout Sessions
Level 400 (Expert)
CON405 – Moving to Amazon ECS – the Not-So-Obvious Benefits If you ask 10 teams why they migrated to containers, you will likely get answers like ‘developer productivity’, ‘cost reduction’, and ‘faster scaling’. But teams often find there are several other ‘hidden’ benefits to using containers for their services. In this talk, Franziska Schmidt, Platform Engineer at Mapbox, and Yaniv Donenfeld from AWS discuss the obvious, and not so obvious, benefits of moving to a containerized architecture. These include using Docker and Amazon ECS to achieve shared libraries for dev teams, separating private infrastructure from shareable code, and making it easier for non-ops engineers to run services.
Chalk Talks
Level 300 (Advanced)
CON331 – Deploying a Regulated Payments Application on Amazon ECS Travelex discusses how they built an FCA-compliant international payments service using a microservices architecture on AWS. This chalk talk covers the challenges of designing and operating an Amazon ECS-based PaaS in a regulated environment using a DevOps model.
Workshops
Level 400 (Expert)
CON407 – Interstella 8888: Advanced Microservice Operations Interstella 8888 is an intergalactic trading company that deals in rare resources, but their antiquated monolithic logistics systems are causing the business to lose money. In this workshop, you help Interstella 8888 build a modern microservices-based logistics system to save the company from financial ruin. We give you the hands-on experience you need to run microservices in the real world. This includes implementing advanced container scheduling and scaling to deal with variable service requests, implementing a service mesh, issue tracing with AWS X-Ray, container and instance-level logging with Amazon CloudWatch, and load testing.
Know before you go
Want to brush up on your container knowledge before re:Invent? Here are some helpful resources to get started:
Today we are adding support for Windows-based Virtual Private Servers. You can launch a VPS that runs Windows Server 2012 R2, Windows Server 2016, or Windows Server 2016 with SQL Server 2016 Express and be up and running in minutes. You can use your VPS to build, test, and deploy .NET or Windows applications without having to set up or run any infrastructure. Backups, DNS management, and operational metrics are all accessible with a click or two.
Servers are available in five sizes, with 512 MB to 8 GB of RAM, 1 or 2 vCPUs, and up to 80 GB of SSD storage. Prices (including software licenses) start at $10 per month:
You can try out a 512 MB server for one month (up to 750 hours) at no charge.
Launching a Windows VPS
To launch a Windows VPS, log in to Lightsail, click on Create instance, and select the Microsoft Windows platform. Then click on Apps + OS if you want to run SQL Server 2016 Express, or OS Only if Windows is all you need:
If you want to use a PowerShell script to customize your instance after it launches for the first time, click on Add launch script and enter the script:
Choose your instance plan, enter a name for your instance(s), and select the quantity to be launched, then click on Create:
Your instance will be up and running within a minute or so:
Click on the instance, and then click on Connect using RDP:
This will connect using a built-in, browser-based RDP client (you can also use the IP address and the credentials with another client):
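If you prefer the command line, the same launch can be scripted against the Lightsail API with the AWS CLI, roughly as follows (the blueprint and bundle IDs shown are placeholders; list the current values with get-blueprints and get-bundles first):

$ aws lightsail get-blueprints
$ aws lightsail get-bundles
$ aws lightsail create-instances \
    --instance-names win-demo \
    --availability-zone us-east-1a \
    --blueprint-id windows_server_2016 \
    --bundle-id medium_win_2_0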
Available Today
This feature is available today in the US East (Northern Virginia), US East (Ohio), US West (Oregon), EU (London), EU (Ireland), EU (Frankfurt), Asia Pacific (Singapore), Asia Pacific (Mumbai), Asia Pacific (Sydney), and Asia Pacific (Tokyo) Regions.
The following 20 pages have been the most viewed AWS Identity and Access Management (IAM) documentation pages so far this year. I have included a brief description with each link to explain what each page covers. Use this list to see what other AWS customers have been viewing and perhaps to pique your own interest about a topic you’ve been meaning to learn about.
What Is IAM? Learn more about IAM, a web service that helps you securely control access to AWS resources for your users. You use IAM to control who can use your AWS resources (authentication) and how they can use resources (authorization).
Creating an IAM User in Your AWS Account You can create one or more IAM users in your AWS account. You might create an IAM user when someone joins your organization, or when you have a new application that needs to make API calls to AWS.
IAM Policy Elements Reference Learn more about the elements that you can use when you create a policy. View additional policy examples and learn about conditions, supported data types, and how they are used in various services.
Managing Access Keys for IAM Users Users need their own access keys to make programmatic calls to AWS from the AWS Command Line Interface (AWS CLI), Tools for Windows PowerShell, the AWS SDKs, or direct HTTP calls using the APIs for individual AWS services. To fill this need, you can create, modify, view, or rotate access keys (access key IDs and secret access keys) for IAM users.
IAM Best Practices To help secure your AWS resources, follow these best practices for IAM.
The IAM Console and the Sign-in Page Learn about the IAM-enabled AWS Management Console sign-in page and how to sign in as an AWS account root user or as an IAM user. To help your users sign in easily, create a unique sign-in URL for your account.
How Users Sign In to Your Account After you create IAM users and passwords for each, your users can sign in to the AWS Management Console for your AWS account using your account ID or alias, or from a special URL that includes your account ID.
Using Multi-Factor Authentication (MFA) in AWS For increased security, AWS recommends that you configure MFA to help protect your AWS resources. MFA adds extra security because it requires users to enter a unique authentication code from an approved authentication device or SMS text message when they access AWS websites or services.
Working with Server Certificates Some AWS services can use server certificates that you manage with IAM or AWS Certificate Manager (ACM). ACM is the preferred tool to provision, manage, and deploy your server certificates. Use IAM as a certificate manager only when you must support HTTPS connections in a region that is not supported by ACM.
IAM Roles You can delegate access to AWS resources using an IAM role. A role is similar to a user because it is an AWS identity with permission policies that determine what the identity can and cannot do in AWS. However, instead of being uniquely associated with one person, a role is intended to be assumable by anyone who needs it.
Example Policies This collection of policies can help you define permissions for your IAM identities.
Using an IAM Role to Grant Permissions to Applications Running on Amazon EC2 Instances Use an IAM role to manage temporary credentials for applications that run on an EC2 instance. When you use a role, you do not have to distribute long-term credentials to an EC2 instance. Instead, the role supplies temporary permissions that applications can use when they make calls to other AWS resources.
Creating Your First IAM Admin User and Group Learn how to create an IAM group, grant the group full permissions for all AWS services, and then create an administrative IAM user for yourself by adding the user to the IAM group.
Using Instance Profiles An instance profile is a container for an IAM role that you can use to pass role information to an EC2 instance when the instance starts. Use the commands on this page to work with instance profiles in an AWS account.
Temporary Security Credentials You can use the AWS Security Token Service (AWS STS) to create and provide trusted users with temporary security credentials that can control access to your AWS resources. Temporary security credentials work almost identically to the long-term access key credentials that your IAM users can use.
In the “Comments” section below, let us know if you would like to see anything on these or other IAM documentation pages expanded or updated to make them more useful to you.
Starting today, you can encrypt the Lightweight Directory Access Protocol (LDAP) communications between your applications and AWS Directory Service for Microsoft Active Directory, also known as AWS Microsoft AD. Many Windows and Linux applications use Active Directory’s (AD) LDAP service to read and write sensitive information about users and devices, including personally identifiable information (PII). Now, you can encrypt your AWS Microsoft AD LDAP communications end to end to protect this information by using LDAP Over Secure Sockets Layer (SSL)/Transport Layer Security (TLS), also called LDAPS. This helps you protect PII and other sensitive information exchanged with AWS Microsoft AD over untrusted networks.
To enable LDAPS, you need to add a Microsoft enterprise Certificate Authority (CA) server to your AWS Microsoft AD domain and configure certificate templates for your domain controllers. After you have enabled LDAPS, AWS Microsoft AD encrypts communications with LDAPS-enabled Windows applications, Linux computers that use Secure Shell (SSH) authentication, and applications such as Jira and Jenkins.
In this blog post, I show how to enable LDAPS for your AWS Microsoft AD directory in six steps: 1) Delegate permissions to CA administrators, 2) Add a Microsoft enterprise CA to your AWS Microsoft AD directory, 3) Create a certificate template, 4) Configure AWS security group rules, 5) AWS Microsoft AD enables LDAPS, and 6) Test LDAPS access using the LDP tool.
Assumptions
For this post, I assume you are familiar with the following:
Before going into specific deployment steps, I will provide a high-level overview of deploying LDAPS. I cover how you enable LDAPS on AWS Microsoft AD. In addition, I provide some general background about CA deployment models and explain how to apply these models when deploying Microsoft CA to enable LDAPS on AWS Microsoft AD.
How you enable LDAPS on AWS Microsoft AD
LDAP-aware applications (LDAP clients) typically access LDAP servers using Transmission Control Protocol (TCP) on port 389. By default, LDAP communications on port 389 are unencrypted. However, many LDAP clients use one of two standards to encrypt LDAP communications: LDAP over SSL on port 636, and LDAP with StartTLS on port 389. If an LDAP client uses port 636, the LDAP server encrypts all traffic unconditionally with SSL. If an LDAP client issues a StartTLS command when setting up the LDAP session on port 389, the LDAP server encrypts all traffic to that client with TLS. AWS Microsoft AD now supports both encryption standards when you enable LDAPS on your AWS Microsoft AD domain controllers.
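Once LDAPS is enabled, one quick way to sanity-check the port 636 listener from a Linux client is with the OpenLDAP tools, for example (a sketch only; the host name, bind identity, and CA certificate path are placeholders based on this post’s example domain):

$ LDAPTLS_CACERT=rootca.pem ldapsearch \
    -H ldaps://corp.example.com:636 \
    -D "admin@corp.example.com" -W \
    -b "DC=corp,DC=example,DC=com" "(sAMAccountName=admin)" dn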
You enable LDAPS on your AWS Microsoft AD domain controllers by installing a digital certificate that a CA issued. Though Windows servers have different methods for installing certificates, LDAPS with AWS Microsoft AD requires you to add a Microsoft CA to your AWS Microsoft AD domain and deploy the certificate through autoenrollment from the Microsoft CA. The installed certificate enables the LDAP service running on domain controllers to listen for and negotiate LDAP encryption on port 636 (LDAP over SSL) and port 389 (LDAP with StartTLS).
Background of CA deployment models
You can deploy CAs as part of a single-level or multi-level CA hierarchy. In a single-level hierarchy, all certificates come from the root of the hierarchy. In a multi-level hierarchy, you organize a collection of CAs in a hierarchy and the certificates sent to computers and users come from subordinate CAs in the hierarchy (not the root).
Certificates issued by a CA identify the hierarchy to which the CA belongs. When a computer sends its certificate to another computer for verification, the receiving computer must have the public certificate from the CAs in the same hierarchy as the sender. If the CA that issued the certificate is part of a single-level hierarchy, the receiver must obtain the public certificate of the CA that issued the certificate. If the CA that issued the certificate is part of a multi-level hierarchy, the receiver can obtain a public certificate for all the CAs that are in the same hierarchy as the CA that issued the certificate. If the receiver can verify that the certificate came from a CA that is in the hierarchy of the receiver’s “trusted” public CA certificates, the receiver trusts the sender. Otherwise, the receiver rejects the sender.
Deploying Microsoft CA to enable LDAPS on AWS Microsoft AD
Microsoft offers a standalone CA and an enterprise CA. Though you can configure either as single-level or multi-level hierarchies, only the enterprise CA integrates with AD and offers autoenrollment for certificate deployment. Because you cannot sign in to run commands on your AWS Microsoft AD domain controllers, an automatic certificate enrollment model is required. Therefore, AWS Microsoft AD requires the certificate to come from a Microsoft enterprise CA that you configure to work in your AD domain. When you install the Microsoft enterprise CA, you can configure it to be part of a single-level hierarchy or a multi-level hierarchy. As a best practice, AWS recommends a multi-level Microsoft CA trust hierarchy consisting of a root CA and a subordinate CA. I cover only a multi-level hierarchy in this post.
In a multi-level hierarchy, you configure your subordinate CA by importing a certificate from the root CA. You must issue a certificate from the root CA such that the certificate gives your subordinate CA the right to issue certificates on behalf of the root. This makes your subordinate CA part of the root CA hierarchy. You also deploy the root CA’s public certificate on all of your computers, which tells all your computers to trust certificates that your root CA issues and to trust certificates from any authorized subordinate CA.
In such a hierarchy, you typically leave your root CA offline (inaccessible to other computers in the network) to protect the root of your hierarchy. You leave the subordinate CA online so that it can issue certificates on behalf of the root CA. This multi-level hierarchy increases security because if someone compromises your subordinate CA, you can revoke all certificates it issued and set up a new subordinate CA from your offline root CA. To learn more about setting up a secure CA hierarchy, see Securing PKI: Planning a CA Hierarchy.
When a Microsoft CA is part of your AD domain, you can configure certificate templates that you publish. These templates become visible to client computers through AD. If a client’s profile matches a template, the client requests a certificate from the Microsoft CA that matches the template. Microsoft calls this process autoenrollment, and it simplifies certificate deployment. To enable LDAPS on your AWS Microsoft AD domain controllers, you create a certificate template in the Microsoft CA that generates SSL and TLS-compatible certificates. The domain controllers see the template and automatically import a certificate of that type from the Microsoft CA. The imported certificate enables LDAP encryption.
Steps to enable LDAPS for your AWS Microsoft AD directory
The rest of this post is composed of the steps for enabling LDAPS for your AWS Microsoft AD directory. First, though, I explain which components you must have running to deploy this solution successfully. I also explain how this solution works and include an architecture diagram.
Prerequisites
The instructions in this post assume that you already have the following components running:
An existing root Microsoft CA or a multi-level Microsoft CA hierarchy – You might already have a root CA or a multi-level CA hierarchy in your on-premises network. If you plan to use your on-premises CA hierarchy, you must have administrative permissions to issue certificates to subordinate CAs. If you do not have an existing Microsoft CA hierarchy, you can set up a new standalone Microsoft root CA by creating an Amazon EC2 for Windows Server instance and installing a standalone root certification authority. You also must create a local user account on this instance and add this user to the local administrator group so that the user has permissions to issue a certificate to a subordinate CA.
The solution setup
The following diagram illustrates the setup with the steps you need to follow to enable LDAPS for AWS Microsoft AD. You will learn how to set up a subordinate Microsoft enterprise CA (in this case, SubordinateCA) and join it to your AWS Microsoft AD domain (in this case, corp.example.com). You also will learn how to create a certificate template on SubordinateCA and configure AWS security group rules to enable LDAPS for your directory.
As a prerequisite, I already created a standalone Microsoft root CA (in this case RootCA) for creating SubordinateCA. RootCA also has a local user account called RootAdmin that has administrative permissions to issue certificates to SubordinateCA. Note that you may already have a root CA or a multi-level CA hierarchy in your on-premises network that you can use for creating SubordinateCA instead of creating a new root CA. If you choose to use your existing on-premises CA hierarchy, you must have administrative permissions on your on-premises CA to issue a certificate to SubordinateCA.
Lastly, I also already created an Amazon EC2 instance (in this case, Management) that I use to manage users, configure AWS security groups, and test the LDAPS connection. I join this instance to the AWS Microsoft AD directory domain.
Here is how the process works:
Delegate permissions to CA administrators (in this case, CAAdmin) so that they can join a Microsoft enterprise CA to your AWS Microsoft AD domain and configure it as a subordinate CA.
Add a Microsoft enterprise CA to your AWS Microsoft AD domain (in this case, SubordinateCA) so that it can issue certificates to your directory domain controllers to enable LDAPS. This step includes joining SubordinateCA to your directory domain, installing the Microsoft enterprise CA, and obtaining a certificate from RootCA that grants SubordinateCA permissions to issue certificates.
Create a certificate template (in this case, ServerAuthentication) with server authentication and autoenrollment enabled so that your AWS Microsoft AD directory domain controllers can obtain certificates through autoenrollment to enable LDAPS.
Configure AWS security group rules so that AWS Microsoft AD directory domain controllers can connect to the subordinate CA to request certificates.
I now will show you these steps in detail. I use the names of components—such as RootCA, SubordinateCA, and Management—and refer to users—such as Admin, RootAdmin, and CAAdmin—to illustrate who performs these steps. All component names and user names in this post are used for illustrative purposes only.
Deploy the solution
Step 1: Delegate permissions to CA administrators
In this step, you delegate permissions to your users who manage your CAs. Your users then can join a subordinate CA to your AWS Microsoft AD domain and create the certificate template in your CA.
To enable use with a Microsoft enterprise CA, AWS added a new built-in AD security group called AWS Delegated Enterprise Certificate Authority Administrators that has delegated permissions to install and administer a Microsoft enterprise CA. By default, your directory Admin is part of the new group and can add other users or groups in your AWS Microsoft AD directory to this security group. If you have trust with your on-premises AD directory, you can also delegate CA administrative permissions to your on-premises users by adding on-premises AD users or global groups to this new AD security group.
To create a new user (in this case CAAdmin) in your directory and add this user to the AWS Delegated Enterprise Certificate Authority Administrators security group, follow these steps:
Sign in to the Management instance using RDP with the user name admin and the password that you set for the admin user when you created your directory.
Launch the Microsoft Windows Server Manager on the Management instance and navigate to Tools > Active Directory Users and Computers.
Switch to the tree view and navigate to corp.example.com > CORP > Users. Right-click Users and choose New > User.
Add a new user with the First name CA, Last name Admin, and User logon name CAAdmin.
In the Active Directory Users and Computers tool, navigate to corp.example.com > AWS Delegated Groups. In the right pane, right-click AWS Delegated Enterprise Certificate Authority Administrators and choose Properties.
In the AWS Delegated Enterprise Certificate Authority Administrators window, switch to the Members tab and choose Add.
In the Enter the object names to select box, type CAAdmin and choose OK.
In the next window, choose OK to add CAAdmin to the AWS Delegated Enterprise Certificate Authority Administrators security group.
Also add CAAdmin to the AWS Delegated Server Administrators security group so that CAAdmin can RDP in to the Microsoft enterprise CA machine.
You have granted CAAdmin permissions to join a Microsoft enterprise CA to your AWS Microsoft AD directory domain.
Step 2: Add a Microsoft enterprise CA to your AWS Microsoft AD directory
In this step, you set up a subordinate Microsoft enterprise CA and join it to your AWS Microsoft AD directory domain. I will summarize the process first and then walk through the steps.
First, you create an Amazon EC2 for Windows Server instance called SubordinateCA and join it to the domain, corp.example.com. You then publish RootCA’s public certificate and certificate revocation list (CRL) to SubordinateCA’s local trusted store. You also publish RootCA’s public certificate to your directory domain. Doing so enables SubordinateCA and your directory domain controllers to trust RootCA. You then install the Microsoft enterprise CA service on SubordinateCA and request a certificate from RootCA to make SubordinateCA a subordinate Microsoft CA. After RootCA issues the certificate, SubordinateCA is ready to issue certificates to your directory domain controllers.
Note that you can use an Amazon S3 bucket to pass the certificates between RootCA and SubordinateCA.
In detail, here is how the process works, as illustrated in the preceding diagram:
Set up an Amazon EC2 instance joined to your AWS Microsoft AD directory domain – Create an Amazon EC2 for Windows Server instance to use as a subordinate CA, and join it to your AWS Microsoft AD directory domain. For this example, the machine name is SubordinateCA and the domain is corp.example.com.
Share RootCA’s public certificate with SubordinateCA – Log in to RootCA as RootAdmin and start Windows PowerShell with administrative privileges. Run the following commands to copy RootCA’s public certificate and CRL to the folder c:\rootcerts on RootCA.
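The copy commands themselves are not reproduced in this post. The following is a minimal sketch that assumes the CA wrote its certificate and CRL to the default CertEnroll location:
New-Item -Path C:\rootcerts -ItemType Directory -Force
Copy-Item -Path C:\Windows\System32\certsrv\CertEnroll\*.cr* -Destination C:\rootcerts
Then upload both files from c:\rootcerts to an S3 bucket so that SubordinateCA can download them.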
The following screenshot shows RootCA’s public certificate and CRL uploaded to an S3 bucket.
Publish RootCA’s public certificate to your directory domain – Log in to SubordinateCA as the CAAdmin. Download RootCA’s public certificate and CRL from the S3 bucket by following the instructions in How Do I Download an Object from an S3 Bucket? Save the certificate and CRL to the C:\rootcerts folder on SubordinateCA. Add RootCA’s public certificate and the CRL to the local store of SubordinateCA and publish RootCA’s public certificate to your directory domain by running the following commands using Windows PowerShell with administrative privileges.
certutil -addstore -f root <path to the RootCA public certificate file>
certutil -addstore -f root <path to the RootCA CRL file>
certutil -dspublish -f <path to the RootCA public certificate file> RootCA
Install the subordinate Microsoft enterprise CA – Install the subordinate Microsoft enterprise CA on SubordinateCA by following the instructions in Install a Subordinate Certification Authority. Ensure that you choose Enterprise CA for Setup Type to install an enterprise CA.
For the CA Type, choose Subordinate CA.
Request a certificate from RootCA – Next, copy the certificate request on SubordinateCA to a folder called c:\CARequest by running the following commands using Windows PowerShell with administrative privileges.
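The commands are not reproduced in this post; a minimal sketch, assuming the enterprise CA installation wrote the request (.req) file to the root of the C: drive (the default location):
New-Item -Path C:\CARequest -ItemType Directory -Force
Copy-Item -Path C:\*.req -Destination C:\CARequest
Then upload the request file to the S3 bucket so that you can download it on RootCA.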
Approve SubordinateCA’s certificate request – Log in to RootCA as RootAdmin and download the certificate request from the S3 bucket to a folder called CARequest. Submit the request by running the following command using Windows PowerShell with administrative privileges.
certreq -submit <path to certificate request file>
In the Certification Authority List window, choose OK.
Navigate to Server Manager > Tools > Certification Authority on RootCA.
In the Certification Authority window, expand the ROOTCA tree in the left pane and choose Pending Requests. In the right pane, note the value in the Request ID column. Right-click the request and choose All Tasks > Issue.
Retrieve the SubordinateCA certificate – Retrieve the SubordinateCA certificate by running the following command using Windows PowerShell with administrative privileges. The command includes the <RequestId> that you noted in the previous step.
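The command is not reproduced in this post; a sketch (the output file name is illustrative):
certreq -retrieve <RequestId> c:\SubordinateCA.crt
Then upload SubordinateCA.crt to the S3 bucket.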
Install the SubordinateCA certificate – Log in to SubordinateCA as the CAAdmin and download SubordinateCA.crt from the S3 bucket. Install the certificate by running the following commands using Windows PowerShell with administrative privileges.
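The commands are not reproduced in this post; a sketch, assuming the certificate was saved to c:\SubordinateCA.crt:
certutil -installcert c:\SubordinateCA.crt
Restart-Service certsvc
The second command restarts the Active Directory Certificate Services service so that the newly installed certificate takes effect.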
Delete the content that you uploaded to S3 – As a security best practice, delete all the certificates and CRLs that you uploaded to the S3 bucket in the previous steps because you have already installed them on SubordinateCA.
You have finished setting up the subordinate Microsoft enterprise CA that is joined to your AWS Microsoft AD directory domain. Now you can use your subordinate Microsoft enterprise CA to create a certificate template so that your directory domain controllers can request a certificate to enable LDAPS for your directory.
Step 3: Create a certificate template
In this step, you create a certificate template with server authentication and autoenrollment enabled on SubordinateCA. You create this new template (in this case, ServerAuthentication) by duplicating an existing certificate template (in this case, Domain Controller template) and adding server authentication and autoenrollment to the template.
Follow these steps to create a certificate template:
Log in to SubordinateCA as CAAdmin.
Launch Microsoft Windows Server Manager. Select Tools > Certification Authority.
In the Certificate Authority window, expand the SubordinateCA tree in the left pane. Right-click Certificate Templates, and choose Manage.
In the Certificate Templates Console window, right-click Domain Controller and choose Duplicate Template.
In the Properties of New Template window, switch to the General tab and change the Template display name to ServerAuthentication.
Switch to the Security tab, and choose Domain Controllers in the Group or user names section. Select the Allow check box for Autoenroll in the Permissions for Domain Controllers section.
Switch to the Extensions tab, choose Application Policies in the Extensions included in this template section, and choose Edit.
In the Edit Application Policies Extension window, choose Client Authentication and choose Remove. Choose OK to create the ServerAuthentication certificate template. Close the Certificate Templates Console window.
In the Certificate Authority window, right-click Certificate Templates, and choose New > Certificate Template to Issue.
In the Enable Certificate Templates window, choose ServerAuthentication and choose OK.
You have finished creating a certificate template with server authentication and autoenrollment enabled on SubordinateCA. Your AWS Microsoft AD directory domain controllers can now obtain a certificate through autoenrollment to enable LDAPS.
Step 4: Configure AWS security group rules
In this step, you configure AWS security group rules so that your directory domain controllers can connect to the subordinate CA to request a certificate. To do this, you must add outbound rules to your directory’s AWS security group (in this case, sg-4ba7682d) to allow all outbound traffic to SubordinateCA’s AWS security group (in this case, sg-6fbe7109) so that your directory domain controllers can connect to SubordinateCA for requesting a certificate. You also must add inbound rules to SubordinateCA’s AWS security group to allow all incoming traffic from your directory’s AWS security group so that the subordinate CA can accept incoming traffic from your directory domain controllers.
Follow these steps to configure AWS security group rules:
In the Amazon EC2 console, in the left pane, choose Network & Security > Security Groups.
In the right pane, choose the AWS security group (in this case, sg-6fbe7109) of SubordinateCA.
Switch to the Inbound tab and choose Edit.
Choose Add Rule. Choose All traffic for Type and Custom for Source. Enter your directory’s AWS security group (in this case, sg-4ba7682d) in the Source box. Choose Save.
Now choose the AWS security group (in this case, sg-4ba7682d) of your AWS Microsoft AD directory, switch to the Outbound tab, and choose Edit.
Choose Add Rule. Choose All traffic for Type and Custom for Destination. Enter SubordinateCA’s AWS security group (in this case, sg-6fbe7109) in the Destination box. Choose Save.
You have completed the configuration of AWS security group rules to allow traffic between your directory domain controllers and SubordinateCA.
Step 5: AWS Microsoft AD enables LDAPS
The AWS Microsoft AD domain controllers perform this step automatically by recognizing the published template and requesting a certificate from the subordinate Microsoft enterprise CA. The subordinate CA can take up to 180 minutes to issue certificates to the directory domain controllers. The directory imports these certificates into the directory domain controllers and enables LDAPS for your directory automatically. This completes the setup of LDAPS for the AWS Microsoft AD directory. The LDAP service on the directory is now ready to accept LDAPS connections!
Step 6: Test LDAPS access by using the LDP tool
In this step, you test the LDAPS connection to the AWS Microsoft AD directory by using the LDP tool. The LDP tool is available on the Management machine where you installed Active Directory Administration Tools. Before you test the LDAPS connection, you must wait up to 180 minutes for the subordinate CA to issue a certificate to your directory domain controllers.
To test LDAPS, you connect to one of the domain controllers using port 636. Here are the steps to test the LDAPS connection:
Log in to Management as Admin.
Launch the Microsoft Windows Server Manager on Management and navigate to Tools > Active Directory Users and Computers.
Switch to the tree view and navigate to corp.example.com > CORP > Domain Controllers. In the right pane, right-click one of the domain controllers and choose Properties. Copy the DNS name of the domain controller.
Launch the LDP.exe tool by launching Windows PowerShell and running the LDP.exe command.
In the LDP tool, choose Connection > Connect.
In the Server box, paste the DNS name you copied in the previous step. Type 636 in the Port box. Choose OK to test the LDAPS connection to port 636 of your directory.
You should see the following message to confirm that your LDAPS connection is now open.
You have completed the setup of LDAPS for your AWS Microsoft AD directory! You can now encrypt LDAP communications between your Windows and Linux applications and your AWS Microsoft AD directory using LDAPS.
Summary
In this blog post, I walked through the process of enabling LDAPS for your AWS Microsoft AD directory. Enabling LDAPS helps you protect PII and other sensitive information exchanged over untrusted networks between your Windows and Linux applications and your AWS Microsoft AD. To learn more about how to use AWS Microsoft AD, see the Directory Service documentation. For general information and pricing, see the Directory Service home page.
If you have comments about this blog post, submit a comment in the “Comments” section below. If you have implementation or troubleshooting questions, start a new thread on the Directory Service forum.
You can now enable your users to access Microsoft Office 365 with credentials that you manage in AWS Directory Service for Microsoft Active Directory, also known as AWS Microsoft AD. You can accomplish this by deploying Microsoft Azure Active Directory (AD) Connect and Active Directory Federation Services for Windows Server 2016 (AD FS 2016) with AWS Microsoft AD. AWS Microsoft AD makes it possible and easy for you to build a Windows environment in the AWS Cloud, synchronize your AWS Microsoft AD users into Microsoft Azure AD, and use Office 365, all without needing to create and manage AD domain controllers. Now you can also benefit from the broad set of AWS Cloud services for compute, storage, database, and Internet of Things (IoT) while continuing to use Office 365 business productivity apps—all with a single AD domain.
Office 365 provides different options to support user authentication with identities that come from AD. One common way to do this is to use Azure AD Connect and AD FS together with your AD directory. In this model, you use Azure AD Connect to synchronize user names from AD into Azure AD so that Office 365 can use those identities. To complete this solution, you use AD FS to enable Office 365 to authenticate the identities against your AD directory. Good news: AWS Microsoft AD now supports this model!
In this blog post, we show how to use Azure AD Connect and AD FS with AWS Microsoft AD so that your employees can access Office 365 by using their AD credentials.
Join an Amazon EC2 for Windows Server instance to the AWS Microsoft AD domain you use as your ADSync server. We will show you how to install Azure AD Connect on this instance later.
Using Active Directory Users and Computers on your Management instance, create a standard user named ADFSSVC in your AWS Microsoft AD directory. AD FS uses this user account later.
Note: When performing Steps 3 and 6 in this “Prerequisites” section, you must use RDP and sign in with the AWS Microsoft AD admin account, using the password you specified when you created your AWS Microsoft AD directory.
The following diagram illustrates the environment you must have in place to implement the solution in this blog post (the numbers in the diagram correspond to Steps 1–8 earlier in this section). We build on this configuration to install and configure Azure AD Connect and AD FS with Azure AD and Office 365.
Note: In this blog post, we use separate Microsoft Windows Server instances on which to run AD FS and Azure AD Connect. You can choose to combine these on a single server, as long as you use Windows Server 2016. Though it is technically possible to use an on-premises server as the AD FS and Azure AD host, such a configuration is counter to the idea of a Windows environment completely in the cloud. Also, this requires configuration of firewall ports and AWS security groups, which is beyond the scope of this blog.
Configuration background
When you create an AWS Microsoft AD directory, AWS exclusively retains the enterprise administrator account of the forest and domain administrator account for the root domain to deliver the directory as a managed service. When you set up your directory, AWS creates an organizational unit (OU) in the directory and delegates administrative privileges for the OU to your admin account. Within this OU, you administer users, groups, computers, Group Policy objects, other devices, and additional OUs as needed. You perform these actions using standard AD administration tools from a computer that is joined to an AWS Microsoft AD domain. Typically, the administration computer is an EC2 instance that you access using RDP, by logging in with your admin account credentials. From your admin account, you can also delegate permissions to other users or groups you create within your OU.
To use Office 365 with AD identities, you use Azure AD Connect to synchronize the AD identities into Azure AD. There are two commonly supported ways to use Azure AD Connect to support Office 365 use. In one model, you synchronize user names only, and you use AD FS to federate authentication from Office 365 to your AD. In the second model, you synchronize user names and passwords from your AD directory to Azure AD, and you do not have to use AD FS. The model supported by AWS Microsoft AD is the first model: synchronize user names only and use AD FS to authenticate from Office 365 to your AWS Microsoft AD. The AD FS model also enables authentication with SaaS applications that support federated authentication (this topic is beyond the scope of this blog post).
Note: Azure AD Connect now has a pass-through model of authentication. Because this was in a preview status at the time of writing this blog post, this authentication model is beyond the scope of this blog post.
In a default AD FS installation, AD FS uses two containers that require special AD permissions that your AWS Microsoft AD administrative account does not have. To address this, you will create two nested containers in your OU for AD FS to use. When you install AD FS, you tell AD FS where to find the containers through a Windows PowerShell parameter.
As described previously, we will now show you how to use Azure AD Connect and AD FS with AWS Microsoft AD with Azure AD and Office 365 in five steps, as illustrated in the following diagram.
Add two containers to AWS Microsoft AD for use by AD FS.
Install AD FS.
Integrate AD FS with Azure AD.
Synchronize users from AWS Microsoft AD to Azure AD with Azure AD Connect.
Sign in to Office 365 by using your Microsoft AD identities.
Step 1: Add two containers to AWS Microsoft AD for use by AD FS
The following steps show how to create the AD containers required by AD FS in your AWS Microsoft AD directory.
From the Management instance:
Generate a random global unique identifier (GUID) using the following Windows PowerShell command.
(New-Guid).Guid
Make a note of the GUID output because it will be required later on. In this case, the GUID is 67734c62-0805-4274-b72b-f7171110cd56.
Create a container named ADFS in your OU. The OU is located in the domain root and it has the same name as the NetBIOS name you specified when you created your AWS Microsoft AD directory. In this example, our OU name is AWS, and our domain is DC=awsexample,DC=com. You create the container by running the following Windows PowerShell command. You must replace the names that are in bold text with the names from your AWS Microsoft AD directory.
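The command is not reproduced in this post; a sketch using the ActiveDirectory PowerShell module with this example’s names (replace the OU and domain components with yours):
Import-Module ActiveDirectory
New-ADObject -Name "ADFS" -Type Container -Path "OU=AWS,DC=awsexample,DC=com"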
Create another AD container in your new ADFS container, and use the previously generated GUID as the name. Do this by running the following Windows PowerShell command. Be sure to replace the names in bold text with the names from your AWS Microsoft AD directory and your GUID. In this example, we replace GUID with 67734c62-0805-4274-b72b-f7171110cd56. The other bold items shown match the names in our example AWS Microsoft AD directory.
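Again, the command is not reproduced in this post; a sketch with this example’s GUID and names:
New-ADObject -Name "67734c62-0805-4274-b72b-f7171110cd56" -Type Container -Path "CN=ADFS,OU=AWS,DC=awsexample,DC=com"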
To verify that you successfully created the ADFS and GUID containers, open Active Directory Users and Computers and navigate to the containers you created. Your root domain, OU name, and GUID name should match your AWS Microsoft AD configuration.
Note: If you do not see the ADFS and GUID containers, turn on Advanced Features by choosing View in the Active Directory Users and Computers tool, and then choosing Advanced Features.
Step 2: Install AD FS
In this section, we show how to install AD FS by using Windows PowerShell commands. First, though, select a federation service name for your AD FS server. You can create your federation service name by adding a short name (for example, sts) followed by your domain name (for example, awsexample.com). In this example, we use sts.awsexample.com as the federation service name.
Using your AWS Microsoft AD admin account, open an RDP session to your ADFS instance, run Windows PowerShell as a local administrator, and complete the following steps:
Install the Windows feature, AD FS, by running the following Windows PowerShell command. This command only adds the components needed to install your ADFS server later.
Install-WindowsFeature ADFS-Federation
Now that you have installed AD FS, you must obtain a certificate for use when you configure your ADFS server. The AD FS certificate plays an important role in securing communication between the ADFS server and clients, and in ensuring that tokens issued by the ADFS server are secure. AWS recommends that you use a certificate from a trusted certificate authority (CA).
In our example, we use the SSL certificate, sts.awsexample.com. It is important to note that the common name and subject alternative name (SAN) must include the federation service name we plan to use for the AD FS server. In our example, the name is sts.awsexample.com.
Launch the Microsoft Management Console (mmc.exe). Choose File, choose Add/Remove Snap-in, and then choose Add.
For Add Standalone Snap-in, choose Certificates and then choose Add.
For the Certificates snap-in, choose Computer account and then choose Next.
Choose Finish, and then choose OK to load the Certificates snap-in.
Expand Certificates (Local Computer).
Right-click Personal, choose All Tasks, and then choose Import.
On the Certificate Import Wizard, choose Next.
Choose Browse to locate and select your certificate that has been given by your CA. Choose Next.
Ensure Certificate store is set to Personal, and choose Next.
Choose Finish and OK to complete the installation of the certificate on the AD FS server.
Next you need to retrieve the Thumbprint value of the newly installed certificate and save it for use when you configure your ADFS server. Follow the remaining steps:
In the Certificates console window, expand Personal, and choose Certificates.
Right-click the certificate, and then choose Open.
Choose the Details tab to locate the Thumbprint value.
Note: In this case, we will copy our certificate Thumbprint, d096652327cfa18487723ff61040c85c7f57f701, and save it in Windows Notepad.
Open an RDP session to your ADFS server by using the admin account for your AWS Microsoft AD directory. Install AD FS by running the following Windows PowerShell command. You must replace the bold strings in the command with the GUID you created in Step 1 and the names from your AWS Microsoft AD directory.
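The command is not reproduced in this post. One way to capture the value that the installation needs is a hashtable holding the distinguished name of the GUID container; the $adminConfig variable name is an assumption, and you must substitute your own GUID, OU, and domain names:
$adminConfig = @{ "DKMContainerDn" = "CN=67734c62-0805-4274-b72b-f7171110cd56,CN=ADFS,OU=AWS,DC=awsexample,DC=com" }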
Enter the AD FS standard user account credentials for the ADFSSVC user and save it in the script variable, $svcCred, by running the following Windows PowerShell command.
$svcCred = (get-credential)
Type the Microsoft AD administrator credentials of the Admin user and save it in the script variable, $localAdminCred, by running the following Windows PowerShell command.
$localAdminCred = (get-credential)
Install the AD FS server by running the following Windows PowerShell command. You must replace the bold items with the Thumbprint ID from your certificate, and replace the federation service name with the federation service name you chose earlier. For our example, the federation service name is sts.awsexample.com, and we copy our certificate Thumbprint, d096652327cfa18487723ff61040c85c7f57f701, from where we saved it in Windows Notepad.
Note: Be sure to remove any empty spaces in the certificate Thumbprint value.
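The installation command is not reproduced in this post; a sketch that assumes the $svcCred and $localAdminCred variables from the following steps and the $adminConfig hashtable sketched earlier:
Install-AdfsFarm -CertificateThumbprint d096652327cfa18487723ff61040c85c7f57f701 `
    -FederationServiceName sts.awsexample.com `
    -ServiceAccountCredential $svcCred `
    -Credential $localAdminCred `
    -AdminConfiguration $adminConfig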
Create a DNS A record for use with AD FS. This record resolves the federation service name to the public IP address you assign to your ADFS instance. You must create the DNS A record at the DNS hosting provider that hosts your domain. In the following example, sts.awsexample.com is the federation service name and 54.x.x.x is the public IP address of our AD FS instance.
Hostname: sts.awsexample.com
Record Type: A
IP Address: 54.x.x.x
Enable the AD FS sign-in page by running the following Windows PowerShell command.
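The command is not reproduced in this post; on Windows Server 2016, the IdP-initiated sign-in page is typically enabled with:
Set-AdfsProperties -EnableIdpInitiatedSignonPage $true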
To verify that the AD FS sign-in page works, open a browser on the AD FS instance, and sign in on the AD FS sign-in page (https://<my federation service name>/adfs/ls/IdpInitiatedSignOn.aspx) by using your AWS Microsoft AD admin account. In our example, the federation service name (<my federation service name> in the sign-in page URL) is sts.awsexample.com.
Step 3: Integrate AD FS with Azure AD
The following steps show you how to connect AD FS with Office 365 by connecting to Azure AD with Windows PowerShell and federating the custom domain. From the ADFS instance, make sure you run Windows PowerShell as a local administrator and complete the following steps:
Connect to Azure AD using Windows PowerShell. Federate the custom domain you added and verified in Azure AD by running the following two Windows PowerShell commands. You must update the items in bold text with the names from your AWS Microsoft AD directory. For our example, our AD FS instance’s Fully Qualified Domain Name (FQDN) is adfsserver.awsexample.com, and our domain name is awsexample.com.
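The commands are not reproduced in this post; a sketch using the MSOnline (Azure AD) PowerShell module, where the module and cmdlet names are assumptions to verify against current Microsoft documentation:
# Sign in to Azure AD with your Office 365 global administrator account
Connect-MsolService
# Point the MSOnline cmdlets at the AD FS server, then federate the custom domain
Set-MsolADFSContext -Computer adfsserver.awsexample.com
Convert-MsolDomainToFederated -DomainName awsexample.com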
Step 4: Synchronize users from AWS Microsoft AD to Azure AD with Azure AD Connect
The following steps show you how to install and customize Azure AD Connect to synchronize your AWS Microsoft AD identities to Azure AD for use with Office 365. Open an RDP session to your ADSync instance by using your AWS Microsoft AD admin user account:
On the Welcome page of the Azure AD Connect Wizard, accept the license terms and privacy notice, and then choose Continue.
On the Express Settings page, choose Customize.
On the Install required components page, choose Install.
On the User sign-in page, choose Do not configure and then choose Next.
On the Connect to Azure AD page, enter your Office 365 global administrator account credentials and then choose Next.
On the Connect your directories page, choose Active Directory as the Directory Type, and then choose your Microsoft AD Forest as your Forest. Choose Add Directory.
At the prompt, enter your AWS Microsoft AD admin account credentials, and then choose OK.
Now that you have added the AWS Microsoft AD directory, choose Next.
On the Azure AD sign-in configuration page, choose Next.
Note: AWS recommends the userPrincipalName (UPN) attribute for use by AWS Microsoft AD users when they sign in to Azure AD and Office 365. The UPN attribute format combines the user’s login name and the UPN-suffix of an AWS Microsoft AD user. The UPN suffix is the domain name of your AWS Microsoft AD domain and the same domain name you added and verified with Azure AD.
In the following example from the Active Directory Users and Computers tool, the user’s UPN is awsuser@awsexample.com, which is a combination of the user’s login name, awsuser, with the UPN-suffix, @awsexample.com.
On the Domain and OU filtering page, choose Sync selected domains and OUs, choose the Users OU under your NetBIOS OU, and then choose Next.
On the Uniquely identifying your users page, choose Next.
On the Filter users and devices page, choose Next.
On the Optional features page, choose Next.
On the Ready to configure page, choose Start the synchronization process when configuration completes, and then choose Install.
The Azure AD Connect installation has now completed. Choose Exit.
Note: By default, the Azure AD Connect sync scheduler runs every 30 minutes to synchronize your AWS Microsoft AD identities to Azure AD. You can tune the scheduler by opening a Windows PowerShell session as an administrator and running the appropriate Windows PowerShell commands. For more information, go to Azure AD Connect Sync Scheduler.
Tip: Do you need to synchronize a change immediately? You can manually start a sync cycle outside the scheduled sync cycle from the Azure AD Connect sync instance. Open a Windows PowerShell session as an administrator and run the following Windows PowerShell commands.
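The commands are not reproduced in this post; the Azure AD Connect sync module typically exposes the following cmdlet for an immediate delta sync (verify against your installed version):
Start-ADSyncSyncCycle -PolicyType Delta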
Step 5: Sign in to Office 365 by using your AWS Microsoft AD identities
The following steps show you how to sign in to Office 365 using AD FS as the authentication method with your AWS Microsoft AD user account. In this example, we assign a license to the AWS Microsoft AD user account, awsuser@awsexample.com, in the Office 365 admin center. We then sign in to Office 365 by using the AWS Microsoft AD user account UPN, awsuser@awsexample.com.
Using a computer on the internet, open a browser and complete the following steps:
Sign in with the AWS Microsoft AD user account at https://portal.office.com. When entering the UPN of the AWS Microsoft AD user account, you will be redirected to your ADFS server sign-in page to complete user authentication.
On the AD FS sign-in page, enter your UPN and the password of the AWS Microsoft AD user account.
You have successfully signed in to Office 365 using your AWS Microsoft AD user account!
Summary
In this blog post, we showed how to use Azure AD Connect and AD FS with AWS Microsoft AD so that your employees can access Office 365 using their AD credentials. Now that you have Azure AD Connect and AD FS in place, you also might want to explore how to build upon this infrastructure to add sign-in for other Software as a Service (SaaS) applications that are compatible with AD FS. For example, this blog post explains how you can provide your users single sign-on access to Amazon AppStream by using AD FS.
In a prior post, Disabling Intel Hyper-Threading on Amazon Linux, I investigated how the Linux kernel enumerates CPUs. I also discussed the options to disable Intel Hyper-Threading (HT Technology) in Amazon Linux running on Amazon EC2.
In this post, I do the same for Microsoft Windows Server 2016 running on EC2 instances. I begin with a quick review of HT Technology and the reasons you might want to disable it. I also recommend that you take a moment to review the prior post for a more thorough foundation.
HT Technology
HT Technology makes a single physical processor appear as multiple logical processors. Each core in an Intel Xeon processor has two threads of execution. Most of the time, these threads can progress independently; one thread executing while the other is waiting on a relatively slow operation (for example, reading from memory) to occur. However, the two threads do share resources and occasionally one thread is forced to wait while the other is executing.
There a few unique situations where disabling HT Technology can improve performance. One example is high performance computing (HPC) workloads that rely heavily on floating point operations. In these rare cases, it can be advantageous to disable HT Technology. However, these cases are rare, and for the overwhelming majority of workloads you should leave it enabled. I recommend that you test with and without HT Technology enabled, and only disable threads if you are sure it will improve performance.
Exploring HT Technology on Microsoft Windows
Here’s how Microsoft Windows enumerates CPUs. As before, I am running these examples on an m4.2xlarge. I also chose to run Windows Server 2016, but you can walk through these exercises on any version of Windows. Remember that the m4.2xlarge has eight vCPUs, and each vCPU is a thread of an Intel Xeon core. Therefore, the m4.2xlarge has four cores, each of which run two threads, resulting in eight vCPUs.
Windows does not have a built-in utility to examine CPU configuration, but you can download the Sysinternals coreinfo utility from Microsoft’s website. This utility provides useful information about the system CPU and memory topology. For this walkthrough, you enumerate the individual CPUs, which you can do by running coreinfo -c. For example:
C:\Users\Administrator >coreinfo -c
Coreinfo v3.31 - Dump information on system CPU and memory topology
Copyright (C) 2008-2014 Mark Russinovich
Sysinternals - www.sysinternals.com
Logical to Physical Processor Map:
**------ Physical Processor 0 (Hyperthreaded)
--**---- Physical Processor 1 (Hyperthreaded)
----**-- Physical Processor 2 (Hyperthreaded)
------** Physical Processor 3 (Hyperthreaded)
As you can see from the screenshot, the coreinfo utility displays a table where each row is a physical core and each column is a logical CPU. In other words, the two asterisks on the first line indicate that CPU 0 and CPU 1 are the two threads in the first physical core. Therefore, my m4.2xlarge has four physical processors, each of which runs two threads, resulting in eight total CPUs, just as expected.
It is interesting to note that Windows Server 2016 enumerates CPUs in a different order than Linux. Remember from the prior post that Linux enumerated the first thread in each core, followed by the second thread in each core. You can see from the output earlier that Windows Server 2016 enumerates both threads in the first core, then both threads in the second core, and so on. The diagram below shows the relationship of CPUs to cores and threads in both operating systems.
In the Linux post, I disabled CPUs 4–7, leaving one thread per core and effectively disabling HT Technology. You can see from the diagram that you must disable the odd-numbered threads (that is, 1, 3, 5, and 7) to achieve the same result in Windows. Here’s how to do that.
Disabling HT Technology on Microsoft Windows
In Linux, you can globally disable CPUs dynamically. In Windows, there is no direct equivalent that I could find, but there are a few alternatives.
First, you can disable CPUs using the msconfig.exe tool. If you choose Boot, Advanced Options, you have the option to set the number of processors. In the example below, I limit my m4.2xlarge to four CPUs. Restart for this change to take effect.
Unfortunately, Windows does not disable hyperthreaded CPUs first and then real cores, as Linux does. As you can see in the following output, coreinfo reports that after rebooting, my m4.2xlarge has two hyperthreaded physical cores (four logical CPUs). Msconfig.exe is useful for limiting the number of CPUs, but it does not allow you to disable HT Technology.
Note: If you have been following along, you can re-enable all your CPUs by unselecting the Number of processors check box and rebooting your system.
C:\Users\Administrator >coreinfo -c
Coreinfo v3.31 - Dump information on system CPU and memory topology
Copyright (C) 2008-2014 Mark Russinovich
Sysinternals - www.sysinternals.com
Logical to Physical Processor Map:
**-- Physical Processor 0 (Hyperthreaded)
--** Physical Processor 1 (Hyperthreaded)
While you cannot disable HT Technology systemwide, Windows does allow you to associate a particular process with one or more CPUs. Microsoft calls this “processor affinity.” To see an example, use the following steps.
Launch an instance of Notepad.
Open Windows Task Manager and choose Processes.
Open the context (right click) menu on notepad.exe and choose Set Affinity….
This brings up the Processor Affinity dialog box.
As you can see, all the CPUs are allowed to run this instance of notepad.exe. You can uncheck a few CPUs to exclude them. Windows is smart enough to allow any scheduled operations to continue to completion on disabled CPUs. It then saves their state at the next scheduling event and resumes those operations on another CPU. To ensure that only one thread in each core is able to run a process, you uncheck every other CPU. This effectively disables HT Technology for this process. For example:
Of course, this can be tedious when you have a large number of cores. Remember that the x1.32xlarge has 128 CPUs. Luckily, you can set the affinity of a running process from PowerShell using the Get-Process cmdlet. For example:
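The cmdlet call is not shown in this post; a sketch that restricts every running Notepad process to CPUs 0, 2, 4, and 6:
# 0x55 is binary 01010101: only the even-numbered CPUs (the first thread of each core) are allowed
Get-Process notepad | ForEach-Object { $_.ProcessorAffinity = 0x55 }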
The ProcessorAffinity attribute takes a bitmask in hexadecimal format. 0x55 in hex is equivalent to 01010101 in binary. Think of the binary encoding as 1=enabled and 0=disabled. This is slightly confusing, but the bits read from right to left: CPU 0 is the rightmost bit and CPU 7 is the leftmost bit. Therefore, 01010101 means that the first thread in each core is enabled, just as in the diagram earlier.
The calculator built into Windows includes a “programmer view” that helps you convert from hexadecimal to binary. In addition, the ProcessorAffinity attribute is a 64-bit number. Therefore, you can only configure processor affinity on systems with up to 64 CPUs. At the moment, only the x1.32xlarge has more than 64 vCPUs.
In the preceding examples, you changed the processor affinity of a running process. Sometimes, you want to start a process with the affinity already configured. You can do this using the start command. The start command includes an affinity flag that takes a hexadecimal number like the PowerShell example earlier.
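For example, a sketch that starts Notepad restricted to CPUs 0, 2, 4, and 6 from a Command Prompt (the /affinity mask is given in hex, without the 0x prefix):
start /affinity 55 notepad.exe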
It is interesting to note that a child process inherits the affinity of its parent. For example, the following commands create a batch file that launches Notepad and then start the batch file with the affinity set. If you examine the instance of Notepad launched by the batch file, you see that the affinity has been applied to it as well.
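The commands are not reproduced in this post; a sketch (the batch file name is illustrative):
echo notepad.exe > launch.bat
start /affinity 55 launch.bat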
This means that you can set the affinity of your task scheduler and any tasks that the scheduler starts inherits the affinity. So, you can disable every other thread when you launch the scheduler and effectively disable HT Technology for all of the tasks as well. Be sure to test this point, however, as some schedulers override the normal inheritance behavior and explicitly set processor affinity when starting a child process.
Conclusion
While the Windows operating system does not allow you to disable logical CPUs, you can set processor affinity on individual processes. You also learned that Windows Server 2016 enumerates CPUs in a different order than Linux. Therefore, you can effectively disable HT Technology by restricting a process to every other CPU. Finally, you learned how to set affinity of both new and running processes using Task Manager, PowerShell, and the start command.
Note: this technical approach has nothing to do with control over software licensing, or licensing rights, which are sometimes linked to the number of “CPUs” or “cores.” For licensing purposes, those are legal terms, not technical terms. This post did not cover anything about software licensing or licensing rights.
If you have questions or suggestions, please comment below.
Today we’re excited to announce the general availability of Amazon EC2 Elastic GPUs for Windows. An Elastic GPU is a GPU resource that you can attach to your Amazon Elastic Compute Cloud (EC2) instance to accelerate the graphics performance of your applications. Elastic GPUs come in medium (1GB), large (2GB), xlarge (4GB), and 2xlarge (8GB) sizes and are lower cost alternatives to using GPU instance types like G3 or G2 (for OpenGL 3.3 applications). You can use Elastic GPUs with many instance types allowing you the flexibility to choose the right compute, memory, and storage balance for your application. Today you can provision elastic GPUs in us-east-1 and us-east-2.
Elastic GPUs start at just $0.05 per hour for an eg1.medium. A nickel an hour. If we attach that Elastic GPU to a t2.medium ($0.065/hour) we pay a total of less than 12 cents per hour for an instance with a GPU. Previously, the cheapest graphical workstation (G2/3 class) cost 76 cents per hour. That’s over an 80% reduction in the price for running certain graphical workloads.
When should I use Elastic GPUs?
Elastic GPUs are best suited for applications that require a small or intermittent amount of additional GPU power for graphics acceleration and support OpenGL. Elastic GPUs support up to and including the OpenGL 3.3 API standards with expanded API support coming soon.
Elastic GPUs are not part of the hardware of your instance. Instead they’re attached through an elastic GPU network interface in your subnet which is created when you launch an instance with an Elastic GPU. The image below shows how Elastic GPUs are attached.
Since Elastic GPUs are network attached it’s important to provision an instance with adequate network bandwidth to support your application. It’s also important to make sure your instance security group allows traffic on port 2007.
Any application that can use the OpenGL APIs can take advantage of Elastic GPUs so Blender, Google Earth, SIEMENS SolidEdge, and more could all run with Elastic GPUs. Even Kerbal Space Program!
Ok, now that we know when to use Elastic GPUs and how they work, let’s launch an instance and use one.
Using Elastic GPUs
First, we’ll navigate to the EC2 console and click Launch Instance. Next, we’ll select a Windows AMI such as “Microsoft Windows Server 2016 Base” and then choose an instance type. Then we’ll make sure we select the “Elastic GPU” section and allocate an eg1.medium (1GB) Elastic GPU.
We’ll also include some userdata in the advanced details section. We’ll write a quick PowerShell script to download and install our Elastic GPU software.
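The script itself is not shown in this post; the following is a minimal sketch of what the user data might look like. The download URL is a placeholder, not the real installer location, so check the Elastic GPUs documentation for the current link:
<powershell>
# Placeholder URL; replace with the installer location from the Elastic GPUs documentation
$url  = "https://example.amazonaws.com/elastic-gpus/latest/ElasticGPUs_Installer.msi"
$dest = "$env:TEMP\ElasticGPUs_Installer.msi"
Invoke-WebRequest -Uri $url -OutFile $dest
Start-Process msiexec.exe -ArgumentList "/i `"$dest`" /quiet /norestart" -Wait
</powershell>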
This software sends all OpenGL API calls to the attached Elastic GPU.
Next, we’ll double-check that the instance’s security group allows TCP port 2007 from within the VPC so the Elastic GPU can connect to the instance; the easiest way to do this is to create a separate security group for that rule and attach it to the instance. Finally, we’ll click Launch and wait for the instance and Elastic GPU to provision.
You can see an animation of the launch procedure below.
Alternatively we could have launched on the AWS CLI with a quick call like this:
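The call is not shown in this post; a sketch (the AMI ID, key pair, and security group are placeholders):
aws ec2 run-instances --image-id ami-xxxxxxxx --instance-type t2.medium `
    --key-name my-key-pair --security-group-ids sg-xxxxxxxx `
    --elastic-gpu-specification Type=eg1.medium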
then we could have followed the Elastic GPU software installation instructions here.
We can now see our Elastic GPU is humming along and attached by checking out the Elastic GPU status in the taskbar.
We welcome any feedback on the service and you can click on the Feedback link in the bottom left corner of the GPU Status Box to let us know about your experience with Elastic GPUs.
Elastic GPU Demonstration
Ok, so we have our instance provisioned and our Elastic GPU attached. My teammates here at AWS wanted me to talk about the amazingly wonderful 3D applications you can run, but when I learned about Elastic GPUs the first thing that came to mind was Kerbal Space Program (KSP), so I’m going to run a quick test with that. After all, if you can’t launch Jebediah Kerman into space then what was the point of all of that software? I’ve downloaded KSP and added the launch parameter of -force-opengl to make sure we’re using OpenGL to do our rendering. Below you can see my poor attempt at building a spaceship – I used to build better ones. It looks pretty smooth considering we’re going over a network with a lossy remote desktop protocol.
I’d show a picture of the rocket launch but I didn’t even make it off the ground before I experienced a rapid unscheduled disassembly of the rocket. Back to the drawing board for me.
In the meantime, I can check my Amazon CloudWatch metrics and see how much GPU memory I used during my brief game.
Partners, Pricing, and Documentation
To continue to build out great experiences for our customers, our 3D software partners like ANSYS and Siemens are looking to take advantage of the OpenGL APIs on Elastic GPUs, and are currently certifying Elastic GPUs for their software. You can learn more about our partnerships here.
You can find information on Elastic GPU pricing here. You can find additional documentation here.
Now, if you’ll excuse me I have some virtual rockets to build.
Want to provide users with single sign-on access to AppStream 2.0 using existing enterprise credentials? Active Directory Federation Services (AD FS) 3.0 can be used to provide single sign-on for Amazon AppStream 2.0 using SAML 2.0.
You can use your existing Active Directory or any SAML 2.0–compliant identity service to set up single sign-on access of AppStream 2.0 applications for your users. Identity federation using SAML 2.0 is currently available in all AppStream 2.0 regions.
This post explains how to configure federated identities for AppStream 2.0 using AD FS 3.0.
Walkthrough
After setting up SAML 2.0 federation for AppStream 2.0, users can browse to a specially crafted (AD FS RelayState) URL and be taken directly to their AppStream 2.0 applications.
When users sign in with this URL, they are authenticated against Active Directory. After they are authenticated, the browser receives a SAML assertion as an authentication response from AD FS, which is then posted by the browser to the AWS sign-in SAML endpoint. Temporary security credentials are issued after the assertion and the embedded attributes are validated. The temporary credentials are then used to create the sign-in URL. The user is redirected to the AppStream 2.0 streaming session. The following diagram shows the process.
The user browses to https://applications.exampleco.com. The sign-on page requests authentication for the user.
The federation service requests authentication from the organization’s identity store.
The identity store authenticates the user and returns the authentication response to the federation service.
On successful authentication, the federation service posts the SAML assertion to the user’s browser.
The user’s browser posts the SAML assertion to the AWS Sign-In SAML endpoint (https://signin.aws.amazon.com/saml). AWS Sign-In receives the SAML request, processes the request, authenticates the user, and forwards the authentication token to the AppStream 2.0 service.
Using the authentication token from AWS, AppStream 2.0 authorizes the user and presents applications to the browser.
In this post, I use domain.local as the name of the Active Directory domain. Here are the steps in this walkthrough:
Configure AppStream 2.0 identity federation.
Configure the relying trust.
Create claim rules.
Enable RelayState and forms authentication.
Create the AppStream 2.0 RelayState URL and access the stack.
Test the configuration.
Prerequisites
This walkthrough assumes that you have the following prerequisites:
An instance joined to a domain with the “Active Directory Federation Services” role installed and post-deployment configuration completed
Familiarity with AppStream 2.0 resources
Configure AppStream 2.0 identity federation
First, create an AppStream 2.0 stack, as you reference the stack in upcoming steps. Name the stack ExampleStack. For this walkthrough, it doesn’t matter which underlying fleet you associate with the stack. You can create a fleet using one of the example Amazon-AppStream2-Sample-Image images available, or associate an existing fleet to the stack.
Get the AD FS metadata file
The first thing you need is the metadata file from your AD FS server. The metadata file is a signed document that is used later in this guide to establish the relying party trust. Don’t edit or reformat this file.
To download and save this file, navigate to the following location, replacing <FQDN_ADFS_SERVER> with the fully qualified domain name of your AD FS server.
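The metadata address is not reproduced in this post; AD FS publishes it at the standard federation metadata endpoint, which is typically:
https://<FQDN_ADFS_SERVER>/FederationMetadata/2007-06/FederationMetadata.xml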
In the IAM console, choose Identity providers, Create provider.
On the Configure Provider page, for Provider Type, choose SAML. For Provider Name, type ADFS01 or similar name. Choose Choose File to upload the metadata document previously downloaded. Choose Next Step.
Verify the provider information and choose Create.
You need the Amazon Resource Name (ARN) of the identity provider (IdP) to configure claims rules later in this walkthrough. To get this, select the IdP that you just created. On the summary page, copy the value for Provider ARN. The ARN is in the following format:
Next, configure a policy with permissions to the AppStream 2.0 stack. This is the level of permissions that federated users have within AWS.
In the IAM console, choose Policies, Create Policy, Create Your Own Policy.
For Policy Name, enter a descriptive name. For Description, enter the level of permissions. For Policy Document, you customize the Region-Code, AccountID (without hyphens), and case-sensitive Stack-Name values.
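The policy document itself is not reproduced in this post. The following sketch reflects the commonly documented pattern for granting federated users streaming access to a single stack; verify the action and resource format against the current AppStream 2.0 documentation before using it:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "appstream:Stream",
      "Resource": "arn:aws:appstream:Region-Code:AccountID:stack/Stack-Name",
      "Condition": {
        "StringEquals": {
          "appstream:userId": "${saml:sub}"
        }
      }
    }
  ]
}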
For Region-Code, use one of the following values, based on the Region in which you are using AppStream 2.0:
us-east-1
us-west-2
eu-west-1
ap-northeast-1
Choose Create Policy and you should see the following notification:
Create an IAM role
Here, you create a role that relates to an Active Directory group assigned to your AppStream 2.0 federated users. For this configuration, Active Directory groups and AWS roles are case-sensitive. Here you create an IAM Role named “ExampleStack” and an Active Directory group named in the format AWS-AccountNumber-RoleName, for example AWS-012345678910-ExampleStack.
In the IAM console, choose Roles, Create new role.
On the Select Role type page, choose Role for identity provider access. Choose Select next to Grant Web Single Sign-On (WebSSO) access to SAML providers.
On the Establish Trust page, make sure that the SAML provider that you just created (such as ADFS01) is selected. For Attribute and Value, keep the default values.
On the Verify Role Trust page, the Federated value should match the ARN that you noted previously for the IdP created earlier. The SAML:aud value equals https://signin.aws.amazon.com/saml, as shown below. This is prepopulated and does not require any change. Choose Next Step.
On the Attach policy page, attach the policy that you created earlier granting federated users access only to the AppStream 2.0 stack. In this walkthrough, the policy was named AppStream2_ExampleStack.
After selecting the correct policy, choose Next Step.
On the Set role name and review page, name the role ExampleStack. You can customize this naming convention, as I explain later when I create the claims rules.
You can describe the role as desired. Ensure that the trusted entities match the AD FS IdP ARN, and that the policy attached is the policy created earlier granting access only to this stack.
Choose Create Role.
Important: If you grant more than the stack permissions to federated users, you can give them access to other areas of the console as well. AWS strongly recommends that you attach policies to a role that grants access only to the resources to be shared with federated users.
For example, if you attach the AdministratorAccess policy instead of AppStream2_ExampleStack, any AppStream 2.0 federated user in the ExampleStack Active Directory group has AdministratorAccess in your AWS account. Even though AD FS routes users to the stack, users can still navigate to other areas of the console, using deep links that go directly to specific console locations.
Next, create the Active Directory group in the format AWS-AccountNumber-RoleName using the “ExampleStack” role name that you just created. You reference this Active Directory group in the AD FS claim rules later using regex. For Group scope, choose Global. For Group type, choose Security.
Note: To follow this walkthrough exactly, name your Active Directory group in the format “AWS-AccountNumber-ExampleStack” replacing AccountNumber with your AWS AccountID (without hyphens). For example:
AWS-012345678910-ExampleStack
Configure the relying party trust
In this section, you configure AD FS 3.0 to communicate with the configurations made in AWS.
Open the AD FS console on your AD FS 3.0 server.
Open the context (right-click) menu for AD FS and choose Add Relying Party Trust…
On the Welcome page, choose Start. On the Select Data Source page, keep Import data about the relying party published online or on a local network checked. For Federation metadata address (host name or URL), type the following link to the SAML metadata to describe AWS as a relying party and then choose Next.
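The metadata link is not reproduced in this post; AWS publishes its sign-in SAML metadata at the following address (verify it in the current AWS federation documentation):
https://signin.aws.amazon.com/static/saml-metadata.xml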
On the Specify Display Name page, for Display name, type “AppStream 2.0 – ExampleStack” or similar value. For Notes, provide a description. Choose Next.
On the Configure Multi-factor Authentication Now? page, choose I do not want to configure multi-factor authentication settings for this relying party trust at this time. Choose Next.
Because you are controlling access to the stack using an Active Directory group and an IAM role with an attached policy, on the Choose Issuance Authorization Rules page, check Permit all users to access this relying party. Choose Next.
On the Ready to Add Trust page, you shouldn’t need to make any changes. Choose Next.
On the Finish page, clear Open the edit Claim Rules dialog for this relying party trust when the wizard closes. You open this later.
Next, ensure that the https://signin.aws.amazon.com/saml URL is listed on the Identifiers tab within the properties of the trust. To do this, open the context (right-click) menu for the relying party trust that you just created and choose Properties.
On the Monitoring tab, clear Monitor relying party and choose Apply. On the Identifiers tab, for Relying party identifier, add https://signin.aws.amazon.com/saml and choose OK.
Create claim rules
In this section, you create four AD FS claim rules, which identify accounts, set LDAP attributes, get the Active Directory groups, and match them to the role created earlier.
In the AD FS console, expand Trust Relationships, choose Relying Party Trusts, and then select the relying party trust that you just created (in this case, the display name is AppStream 2.0 – ExampleStack). Open the context (right-click) menu for the relying party trust and choose Edit Claim Rules. Choose Add Rule.
Rule 1: Name ID
This claim rule tells AD FS the type of expected incoming claim and how to send the claim to AWS. AD FS receives the UPN and tags it as the Name ID when it’s forwarded to AWS. This rule interacts with the third rule, which fetches the user groups.
Claim rule template: Transform an Incoming Claim
Configure Claim Rule values:
Claim Rule Name: Name ID
Incoming Claim Type: UPN
Outgoing Claim Type: Name ID
Outgoing name ID format: Persistent Identifier
Pass through all claim values: selected
Rule 2: RoleSessionName
This rule sets a unique identifier for the user. In this case, use the E-Mail-Addresses values.
Claim rule template: Send LDAP Attributes as Claims
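The configuration values for this rule are not reproduced here; in the standard AWS federation setup, the rule is named RoleSessionName, uses Active Directory as the attribute store, and maps the E-Mail-Addresses LDAP attribute to the outgoing claim type https://aws.amazon.com/SAML/Attributes/RoleSessionName.
Rule 3: Get Active Directory Groups
This rule retrieves all of the Active Directory groups that the user belongs to and stores them in a temporary claim that the next rule matches against. The original custom rule is not shown here; a commonly used version, which populates the http://temp/variable claim type referenced by the Roles rule below, is:
Claim rule template: Send Claims Using a Custom Rule
Configure Claim Rule values:
Claim Rule Name: Get Active Directory Groups
Custom Rule:
c:[Type == "http://schemas.microsoft.com/ws/2008/06/identity/claims/windowsaccountname", Issuer == "AD AUTHORITY"]
=> add(store = "Active Directory", types = ("http://temp/variable"), query = ";tokenGroups;{0}", param = c.Value);
Rule 4: Roles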
This rule converts the value of any Active Directory group that starts with the AWS-AccountNumber prefix into a role known to AWS. For this rule, you need the AWS IdP ARN that you noted earlier. If your IdP in AWS is named ADFS01 and the account ID is 012345678910, the ARN looks like the following:
arn:aws:iam::012345678910:saml-provider/ADFS01
Claim rule template: Send Claims Using a Custom Rule
Configure Claim Rule values:
Claim Rule Name: Roles
Custom Rule:
c:[Type == "http://temp/variable", Value =~ "(?i)^AWS-"]
=> issue(Type = "https://aws.amazon.com/SAML/Attributes/Role", Value = RegExReplace(c.Value, "AWS-012345678910-", "arn:aws:iam::012345678910:saml-provider/ADFS01,arn:aws:iam::012345678910:role/"));
Change arn:aws:iam::012345678910:saml-provider/ADFS01 to the ARN of your AWS IdP
Change 012345678910 to the ID (without hyphens) of the AWS account.
In this walkthrough, the "(?i)^AWS-" pattern matches the Active Directory groups that start with the AWS- prefix, and RegExReplace then strips AWS-012345678910-, leaving ExampleStack from the Active Directory group name to match the ExampleStack IAM role. To customize the role naming convention, for example to name the IAM role ADFS-ExampleStack, add ADFS- to the end of the role ARN at the end of the rule: arn:aws:iam::012345678910:role/ADFS-.
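For example, with the walkthrough's account ID, the Active Directory group AWS-012345678910-ExampleStack is transformed into the following role claim value, which tells AWS to federate through the ADFS01 IdP into the ExampleStack role:
arn:aws:iam::012345678910:saml-provider/ADFS01,arn:aws:iam::012345678910:role/ExampleStack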
You should now have four claim rules created:
NameID
RoleSessionName
Get Active Directory Groups
Role
Enable RelayState and forms authentication
By default, AD FS 3.0 doesn’t have RelayState enabled. AppStream 2.0 uses RelayState to direct users to your AppStream 2.0 stack.
On your AD FS server, open the Microsoft.IdentityServer.Servicehost.exe.config file in a text editor with elevated (administrator) permissions. Find the <microsoft.identityServer.web> section and, within this section, add the following line:
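The setting that Microsoft documents for enabling RelayState in AD FS 3.0 is:
<useRelayStateForIdpInitiatedSignOn enabled="true" />
Save the file after adding the line.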
In the AD FS console, verify that forms authentication is enabled. Choose Authentication Policies. Under Primary Authentication, for Global Settings, choose Edit.
For Extranet, choose Forms Authentication. For Intranet, do the same and choose OK.
On the AD FS server, from an elevated (administrator) command prompt, run the following commands sequentially to stop, then start the AD FS service to register the changes:
net stop adfssrv
net start adfssrv
Create the AppStream 2.0 RelayState URL and access the stack
Now that RelayState is enabled, you can generate the URL.
I have created an Excel spreadsheet for RelayState URL generation, available as RelayGenerator.xlsx. This spreadsheet only requires the fully qualified domain name for your AD FS server, account ID (without hyphens), stack name (case-sensitive), and the AppStream 2.0 region. After all the inputs are entered, the spreadsheet generates a URL in the blue box, as shown in the screenshot below. Copy the entire contents of the blue box to retrieve the generated RelayState URL for AD FS.
Alternatively, if you do not have Excel, there are third-party tools for RelayState URL generation. However, they do require some customization to work with AppStream 2.0. Example customization steps for one such tool are provided below.
CodePlex has an AD FS RelayState generator, which downloads an HTML file locally that you can use to create the RelayState URL. The generator says it's for AD FS 2.0; however, it also works for AD FS 3.0. You can generate the RelayState URL manually, but if the syntax or capitalization is even slightly incorrect, it won't work. I recommend using the tool to ensure a valid URL.
When you open the URL generator, clear out the default text fields. You see a tool that looks like the following:
To generate the values, you need three pieces of information:
IDP URL String
Relying Party Identifier
Relay State / Target App
IDP URL String
The IDP URL string is the URL you use to hit your AD FS sign-on page. For example:
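Assuming the walkthrough's Domain.local domain and the default AD FS sign-on path, the IDP URL string would look similar to:
https://adfs.domain.local/adfs/ls/idpinitiatedsignon.aspx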
Ultimately, the URL looks like the following example, which is for us-east-1, with a stack name of ExampleStack, and an account ID of 012345678910. The stack name is case-sensitive.
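The fully encoded URL is long and easy to mistype, so one way to see how the pieces fit together is to generate it with a short script. The following is a minimal Python sketch, assuming the standard AD FS RelayState format: the relying party identifier (https://signin.aws.amazon.com/saml) and the AppStream 2.0 target application are each URL-encoded, combined into an RPID/RelayState query string, and that string is URL-encoded once more onto the idpinitiatedsignon.aspx URL. The values shown are the walkthrough's example values; the AD FS host name is hypothetical.
from urllib.parse import quote

# Walkthrough values (replace with your own).
adfs_fqdn = "adfs.domain.local"   # fully qualified domain name of the AD FS server (hypothetical)
account_id = "012345678910"       # AWS account ID without hyphens
stack_name = "ExampleStack"       # case-sensitive AppStream 2.0 stack name
region = "us-east-1"              # region of the AppStream 2.0 stack

relying_party = "https://signin.aws.amazon.com/saml"
target_app = f"https://appstream2.{region}.aws.amazon.com/saml?stack={stack_name}&accountId={account_id}"

# AD FS expects RelayState=RPID=<relying party>&RelayState=<target>, with each value
# URL-encoded and the combined string URL-encoded once more.
inner = f"RPID={quote(relying_party, safe='')}&RelayState={quote(target_app, safe='')}"
relay_state_url = f"https://{adfs_fqdn}/adfs/ls/idpinitiatedsignon.aspx?RelayState={quote(inner, safe='')}"

print(relay_state_url)
This produces the same kind of URL as the spreadsheet or the CodePlex generator; the details that matter most are the case-sensitive stack name and the account ID without hyphens.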
The generated RelayState URL can now be saved and used by users to log in directly from anywhere that can reach the AD FS server, using their existing domain credentials. After they are authenticated, users are directed seamlessly to the AppStream 2.0 stack.
Test the configuration
Create a new AD user in Domain.local named Test User, with a username TUser and an email address. An email address is required based on the claim rules.
Next, add TUser to the AWS-012345678910-ExampleStack Active Directory group that you created earlier.
Next, navigate to the RelayState URL and log in with domain\TUser.
After you log in, you are directed to the streaming session for the ExampleStack stack. As an administrator, you can disassociate and associate different fleets of applications to this stack, without impacting federation, and deliver different applications to this group of federated users.
Because the policy attached to the role only allows access to this AppStream 2.0 stack, if a federated user were to try to access another section of the console, such as Amazon EC2, they would discover that they are not authorized to see (describe) any resources or perform any actions, as shown in the screenshot below. This is why it’s important to grant access only to the AppStream 2.0 stack.
Configurations for AD FS 4.0
If you are using AD FS 4.0, there are a few differences from the procedures discussed earlier.
Do not customize the Microsoft.IdentityServer.Servicehost.exe.config file as described in the Enable RelayState and forms authentication section of the AD FS 3.0 procedure above; AD FS 4.0 exposes this setting differently.
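Assuming the Windows Server 2016 AD FS cmdlets, you can instead enable RelayState from an elevated PowerShell terminal with:
Set-AdfsProperties -EnableRelayStateForIdpInitiatedSignOn $true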
Enable the IdP-initiated sign-on page that is used when generating the RelayState URL. To do this, open an elevated PowerShell terminal and run the following command:
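The cmdlet for this (again assuming the Windows Server 2016 AD FS module) is:
Set-AdfsProperties -EnableIdpInitiatedSignonPage $true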
To register these changes with AD FS, restart the AD FS service from an elevated PowerShell terminal (or command prompt):
net stop adfssrv
net start adfssrv
After these changes are made, AD FS 4.0 should now work for AppStream 2.0 identity federation.
Troubleshooting
If you are still encountering errors with your setup, below are common error messages you may see and the configuration areas I recommend you check.
Invalid policy
Unable to authorize the session. (Error Code: INVALID_AUTH_POLICY);Status Code:401
This error message can occur when the IAM policy does not permit access to the AppStream 2.0 stack. However, it can also occur when the stack name is not entered into the policy or RelayState URL using case-sensitive characters. For example, if your stack name is “ExampleStack” in AppStream 2.0 and the policy has “examplestack” or if the Relay State URL has “examplestack” or any capitalization pattern other than the exact stack name, you see this error message.
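For example, assuming the role's policy scopes access by stack ARN, the resource must use the exact stack name capitalization, similar to the following:
arn:aws:appstream:us-east-1:012345678910:stack/ExampleStack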
Invalid relay state
Error: Bad Request.(Error Code: INVALID_RELAY_STATE);Status Code:400
If you are receiving this error message, there is likely another issue in the RelayState URL. It could be related to case sensitivity (other than the stack name). The target portion of the URL follows the format https://relay-state-region-endpoint?stack=stackname&accountId=aws-account-id-without-hyphens.
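With the walkthrough's values and the us-east-1 endpoint, a correctly formed target would be:
https://appstream2.us-east-1.aws.amazon.com/saml?stack=ExampleStack&accountId=012345678910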
Unable to authorize the session. Cross account access is not allowed. (Error Code: CROSS_ACCOUNT_ACCESS_NOT_ALLOWED);Status Code:401
If you see this error message, check to make sure that the AccountId number is correct in the Relay State URL.
Summary
This post walked through enabling AD FS 3.0 for AppStream 2.0 identity federation, along with the adjustments needed for AD FS 4.0. You should now be able to configure AD FS 3.0 or 4.0 for AppStream 2.0 identity federation. If you have questions or suggestions, please comment below.