Tag Archives: AWS Directory Service

How to prompt users to reset their AWS Managed Microsoft AD passwords proactively

Post Syndicated from Tekena Orugbani original https://aws.amazon.com/blogs/security/how-to-prompt-users-to-reset-their-aws-managed-microsoft-ad-passwords-proactively/

If you’re an AWS Directory Service administrator, you can reset your directory users’ passwords from the AWS console or the CLI when their passwords expire. However, you can improve your efficiency by reducing the number of requests for password resets. You can also help improve the security of your organization by having your users proactively reset their directory passwords before they expire. In this post, I describe the steps you can take to set up a solution to send regular reminders to your AWS Directory Service for Microsoft Active Directory (AWS Managed Microsoft AD) users to prompt them to change their password before it expires. This will help prevent users from being locked out when their passwords expire and also reduce the number of reset requests sent to administrators.

Solution Overview

When users’ passwords expire, they typically contact their directory service administrator to help them reset their password. For security reasons, they then need to reset their password again on their computer so that the administrator has no knowledge of the new password. This process is time-consuming and impacts productivity. In this post, I present a solution to remind users automatically to reset AWS Managed Microsoft AD passwords. The following diagram and description explains how the solution works.
 

Figure 1: Solution architecture

Figure 1: Solution architecture

  1. A script running on an AWS Managed Microsoft AD domain-joined Amazon Elastic Compute Cloud (Amazon EC2) instance (Notification Server) searches the AWS Managed Microsoft AD for all enabled user accounts and retrieves their names, email addresses, and password expiry dates.
  2. Using the permissions of the IAM role attached to the Notification Server, the script obtains the SES SMTP credentials stored in AWS Secrets Manager.
  3. With the SMTP credentials obtained in Step 2, the script then securely connects to Amazon Simple Email Service (Amazon SES).
  4. Based on your preferences, Amazon SES sends domain password expiry notifications to the users’ mailboxes.

A separate process for updating the SES credentials stored in AWS Secrets Manager occurs as follows:

  1. A CloudWatch rule triggers a Lambda function.
  2. The Lambda function generates new SES SMTP credentials from the SES IAM Username.
  3. The Lambda function then updates AWS Secrets Manager with the new SES credentials.
  4. The Lambda function then deletes the previous IAM access key.

Prerequisites

The instructions in this post assume that you’re familiar with how to create Amazon EC2 for Windows Server instances, use Remote Desktop Protocol (RDP) to log in to the instances, and have completed the following tasks:

  1. Create an AWS Microsoft AD directory.
  2. Join an Amazon EC2 for Windows Server instance to the AWS Microsoft AD domain to use as your Notification Server.
  3. Sign up for Amazon Simple Email Service (Amazon SES).
  4. Remove Amazon EC2 throttling on port 25 for your EC2 instance.
  5. Remove your Amazon SES account from the Amazon SES sandbox so you can also send email to unverified recipients.

Note: You can use your AWS Managed Microsoft AD management instance as the Notification Server. For the steps below, use any account that is a member of the AWS Delegated Administrators group.

Summary of the steps

  1. Verify an Amazon SES email address.
  2. Create Amazon SES SMTP credentials.
  3. Store the Amazon SES SMTP credentials in AWS Secrets Manager.
  4. Create an IAM role with read permissions to the secret in AWS Secrets Manager.
  5. Set up and test the notification script.
  6. Set up Windows Task Scheduler.
  7. Configure automatic rotation of the SES Credentials stored in Secrets Manager.

Step 1: Verify an Amazon SES email address

To prevent unauthorized use, Amazon SES requires that you verify the email address that you use as a “From,” “Source,” “Sender,” or “Return-Path”.

To verify the email address you will use as the sending address, complete the following steps:

  1. Sign in to the Amazon SES console.
  2. In the navigation pane, under Identity Management, select Email Addresses.
  3. Select Verify a New Email Address, and then enter the email address.
  4. Select Verify This Email Address.

An email will be sent to the specified email address with a link to verify the email address. Once you verify the email, you’ll see the Verification Status as verified in the SES console.

In the image below, I have four verified email addresses:
 

Figure 2: Verified email addresses

Figure 2: Verified email addresses
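If you prefer to script this verification, the SES API exposes the same operation. The following is a minimal Python (boto3) sketch; the region and email address are placeholders for your own values.

    import boto3

    # Placeholders: use your own region and sending address
    ses = boto3.client('ses', region_name='eu-west-1')

    # Sends the verification link to the address owner
    ses.verify_email_identity(EmailAddress='servicedesk@example.com')

    # Later, confirm the address shows as verified
    attrs = ses.get_identity_verification_attributes(
        Identities=['servicedesk@example.com']
    )
    print(attrs['VerificationAttributes'])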

Step 2: Create Amazon SES SMTP credentials

You must create an Amazon SES SMTP user name and password to access the Amazon SES SMTP interface and send email using the service. To do this, complete the following steps:

  1. Sign in to the Amazon SES console.
  2. In the navigation bar, select SMTP Settings.
  3. In the content pane, make a note of the Server Name as you will use this when sending the email in Step 5. Select Create My SMTP Credentials.
     
    Figure 3: Make a note of the SES SMTP Server Name

    Figure 3: Make a note of the SES SMTP Server Name

  4. Specify a value for the IAM User Name field. Make a note of this IAM user name because you will need it in Step 7. In this post, I use ses-smtp-user-eu-west-1 as the user name (as shown below):
     
    Figure 4: Make a note of SES IAM User Name

    Figure 4: Make a note of SES IAM User Name

  5. Select Create.

Make a note of the SMTP Username and SMTP Password you created because you’ll use these in later steps, as shown below in my example.
 

Figure 5: Make a note of the SES SMTP Username and SMTP Password

Figure 5: Make a note of the SES SMTP Username and SMTP Password

Step 3: Store the Amazon SES SMTP credentials in AWS Secrets Manager

In this step, use AWS Secrets Manager to store the Amazon SES SMTP credentials created in Step 2. You will reference these credentials when you execute the script on the Notification Server.

Complete the following steps to store the Amazon SES SMTP credentials in AWS Secrets Manager:

  1. Sign in to the AWS Secrets Manager Console.
  2. Select Store a new secret, and then select Other types of secrets.
  3. Under Secret Key/value, enter the Amazon SES SMTP Username in the left box and the Amazon SES SMTP Password in the right box, and then select Next.
     
    Figure 6: Enter the Amazon SES SMTP user name and password

    Figure 6: Enter the Amazon SES SMTP user name and password

  4. On the next screen, enter the string AWS-SES as the name of the secret. Enter an optional description for the secret, add an optional tag, and then select Next.

    Note: I recommend using AWS-SES as the name of your secret. If you choose to use some other name, you will have to update the PowerShell script in Step 5. I also recommend creating the secret in the same region as the Notification Server. If you create your secret in a different region, you will also have to update the PowerShell script in Step 5.

     

    Figure 7: Enter "AWS-SES" as the secret name

    Figure 7: Enter “AWS-SES” as the secret name

  5. On the next screen, leave the default setting of Disable automatic rotation and select Next. You will come back to this in Step 7, where you will use a Lambda function to rotate the secret at specified intervals.
  6. To store the secret, in the last screen, select Store. Now select the secret and make a note of the ARN of the secret, as shown in Figure 8.
     
    Figure 8: Make a note of the Secret ARN

    Figure 8: Make a note of the Secret ARN
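The console steps above are what this post follows. If you'd rather create the secret programmatically, a boto3 sketch along these lines would produce an equivalent secret; the SMTP user name and password are placeholders for the values you noted in Step 2.

    import boto3
    import json

    # Create the secret in the same region as the Notification Server
    sm = boto3.client('secretsmanager', region_name='eu-west-1')

    # The SMTP user name is stored as the key and the SMTP password as the value,
    # matching the key/value pair entered in the console above
    response = sm.create_secret(
        Name='AWS-SES',
        Description='Amazon SES SMTP credentials for password expiry notifications',
        SecretString=json.dumps({'<SES-SMTP-USERNAME>': '<SES-SMTP-PASSWORD>'})
    )
    print(response['ARN'])  # you need this ARN in Step 4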

Step 4: Create an IAM role with permissions to read the secret

Create an IAM role that grants permissions to read the secret created in Step 3. Then, attach this role to the Notification Server to enable your script to read this secret. Complete the following steps:

  1. Log in to the IAM Console.
  2. In the navigation bar, select Policies.
  3. In the content pane, select Create Policy, and then select JSON.
  4. Replace the content with the following snippet while specifying the ARN of the secret you created earlier in step 3:
    
        {
            "Version": "2012-10-17",
            "Statement": {
                "Effect": "Allow",
                "Action": "secretsmanager:GetSecretValue",
                "Resource": "<arn-of-the-secret-created-in-step-3>"
            }
        }                
        

    Here is how it looks in my example after I replace with the ARN of my Secrets Manager secret:
     

    Figure 9: Example policy

    Figure 9: Example policy

  5. Select Review policy.
  6. On the next screen, specify a name for the policy. In my example, I have specified Access-Ses-Secret as the name of the policy. Also specify a description for the policy, and then select Create policy.
  7. In the navigation pane, select Roles.
  8. In the content pane, select Create role.
  9. On the next page, select EC2, and then select Next: Permissions.
  10. Select the policy you created, and then select Next: Tags.
  11. Select Next: Review, and then provide a name for the role. In my example, I have specified SecretsManagerReadAccessRole as the name. Select Create role.

Now, complete the following steps to attach the role to the Notification Server:

  1. From the Amazon EC2 Console, select the Notification Server instance.
  2. Select Actions, select Instance Settings, and then select Attach/Replace IAM Role.
     
    Figure 10: Select "Attach/Replace IAM Role"

    Figure 10: Select “Attach/Replace IAM Role”

  3. On the Attach/Replace IAM Role page, choose the role to attach from the drop-down list. For this post, I choose SecretsManagerReadAccessRole and select Apply.

    Here is how it looks in my example:
     

    Figure 11: Example "Attach/Replace IAM Role"

    Figure 11: Example “Attach/Replace IAM Role”
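With the role attached, code running on the Notification Server can read the secret without any stored credentials. The PowerShell script used in Step 5 handles this for you; the following Python sketch only illustrates the equivalent call, assuming the secret is named AWS-SES and lives in eu-west-1.

    import boto3
    import json

    # The attached instance role (SecretsManagerReadAccessRole) supplies the credentials,
    # so nothing is hard-coded here
    sm = boto3.client('secretsmanager', region_name='eu-west-1')

    secret = sm.get_secret_value(SecretId='AWS-SES')
    creds = json.loads(secret['SecretString'])

    # The secret holds a single key/value pair: SMTP user name -> SMTP password
    smtp_username, smtp_password = next(iter(creds.items()))
    print('Retrieved SMTP credentials for %s' % smtp_username)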

Step 5: Set up and test the notification script

In this section, you’re going to test the script by sending a sample notification email to an end user to remind the user to change their password. To test the script, log in to your Notification Server using your AWS Managed Microsoft AD default Admin account. Then, complete the following steps:

  1. Install the PowerShell module for Active Directory: open PowerShell as Administrator and run the following command:

    Install-WindowsFeature -Name RSAT-AD-PowerShell

  2. Download the script to the Notification Server. In my example, I downloaded the script and stored it in the location

    c:\scripts\PasswordExpiryNotify.ps1

  3. Create a new user in Active Directory and ensure you enter a valid email address for the new user.

    Note: Make sure to clear the User must change password at next logon check box when creating the user; otherwise, you will get an invalid output from the command in the next step.

    For this example, I created a test user named RandomUser in Active Directory.

  4. In the PowerShell Window, execute the following command to determine the number of days remaining before the password for the user expires. In this example, I run the following to determine the number of days remaining before the RandomUser account password expires:

    (New-TimeSpan -Start ((Get-Date).ToLongDateString()) -End ((Get-ADUser -Identity 'RandomUser' -Properties "msDS-UserPasswordExpiryTimeComputed" | Select @{Name="exp";Expression={[datetime]::FromFileTime($_."msDS-UserPasswordExpiryTimeComputed").ToLongDateString()}}) | Select -ExpandProperty exp)).Days

    In my example, I get “15” as the output.

  5. To test the script, navigate to the location of the script on your Notification Server and execute the following:

    .\PasswordExpiryNotify.ps1 -smtpServer "<SES-SMTP-SERVER-NAME-NOTED-IN-STEP 2>" -from "<SENDER LABEL> <SES VERIFIED EMAIL ADDRESS>" -NotifyDays <NUMBER OF DAYS>

    In this example, I navigate to c:\scripts\ and execute:

    .\PasswordExpiryNotify.ps1 -smtpServer "email-smtp.eu-west-1.amazonaws.com" -from "IT Servicedesk <[email protected]>" -NotifyDays 15

A new email will be sent to the user’s mailbox. Verify that the user has received the email.

Note: You can update this command to send multiple email reminders to users. For example, to notify users on three occasions (a first notification 15 days before password expiration, another at 7 days, and one more when there is only 1 day left), I would execute the following:

.\PasswordExpiryNotify.ps1 -smtpServer "email-smtp.eu-west-1.amazonaws.com" -from "IT Servicedesk <[email protected]>" -NotifyDays 1,7,15
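For context, the notification script’s sending step boils down to the pattern in the following Python sketch: authenticate to the SES SMTP endpoint with the credentials retrieved from Secrets Manager and submit the reminder message over TLS. This is only an illustration of the idea, not the PowerShell script itself; the endpoint, sender, and recipient are placeholders.

    import smtplib
    from email.message import EmailMessage

    # Placeholders: SES SMTP endpoint from Step 2 and the credentials from Secrets Manager
    SMTP_SERVER = 'email-smtp.eu-west-1.amazonaws.com'
    SMTP_USERNAME = '<SES-SMTP-USERNAME>'
    SMTP_PASSWORD = '<SES-SMTP-PASSWORD>'

    msg = EmailMessage()
    msg['Subject'] = 'Your domain password expires in 15 days'
    msg['From'] = 'IT Servicedesk <servicedesk@example.com>'  # must be an SES-verified address
    msg['To'] = 'randomuser@example.com'
    msg.set_content('Please change your Active Directory password before it expires.')

    # The SES SMTP interface requires an encrypted connection; port 587 uses STARTTLS
    with smtplib.SMTP(SMTP_SERVER, 587) as smtp:
        smtp.starttls()
        smtp.login(SMTP_USERNAME, SMTP_PASSWORD)
        smtp.send_message(msg)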

Step 6: Set up a Windows Task Scheduler

Now that you have tested the script and confirmed that the solution is working as expected, you can set up a Windows Scheduled Task to execute the script daily. To do this:

  1. Open Task Scheduler.
  2. Right-click Task Scheduler Library, and then select Create Task.
  3. Specify a name for the task.
  4. On the Triggers tab, select New.
  5. Select Daily, and then select OK.
  6. On the Actions tab, select New.
  7. Inside Program/Script, type PowerShell.exe
  8. In the Add arguments (optional) box, type the following command, including the full path to the script.

    "C:\Scripts\PasswordExpiryNotify.ps1 -smtpServer '<SES-SMTP-SERVER-NAME-NOTED-IN-STEP 2>' -from '<SENDER LABEL> <SES VERIFIED EMAIL ADDRESS>' -NotifyDays <DAY,DAY,DAY>"

    In my example, I type the following:

    "C:\Scripts\PasswordExpiryNotify.ps1 -smtpServer 'email-smtp.eu-west-1.amazonaws.com' -from 'IT Servicedesk <[email protected]>' -NotifyDays 1,7,15"

  9. Select OK twice, and then enter your password when prompted to complete the steps.

The script will now run daily at the specified time and will send password expiration email notifications to your AWS Managed Microsoft AD users. In my example, a password expiration reminder email is sent to my AWS Managed Microsoft AD users 15 days before expiration, 7 days before expiration, and then 1 day before expiration.

Here is a sample email:
 

Figure 12: Sample password expiration email

Figure 12: Sample password expiration email

Note: You can edit the script to change the notification message to suit your requirements.

Step 7: Configure automatic update of the SES credentials

In this final section, you’re going to set up the configuration to automatically update the secret (that is, the SES credentials stored in AWS Secrets Manager) at regular intervals. To achieve this, you will use an AWS Lambda function that will do the following:

  1. Create a new access key using the IAM user you used to create the SES SMTP Credentials in Step 2 (ses-smtp-user-eu-west-1 in my example).
  2. Generate a new SES SMTP User password from the created IAM secret access key.
  3. Update the SES credentials stored in AWS Secrets Manager.
  4. Delete the old IAM access key.

Complete the following steps to enable automatic update of the SES credentials:

First, you will create the IAM policy that you will attach to a role assumed by the Lambda function. This policy provides the permissions to create new access keys for the SES IAM user and to update the SES credentials stored in AWS Secrets Manager.

  1. Log in to the IAM Console, and in the navigation bar, select Policies.
  2. In the content pane, select Create Policy, and then select JSON.
  3. Replace the content with the following script while specifying the ARN of the IAM user you used to create the SES SMTP credentials in Step 2 and the ARN of the secret stored in Secrets Manager that you noted in Step 3.
    
        {
            "Version": "2012-10-17",
            "Statement": [
                {
                    "Effect": "Allow",
                    "Action": "iam:*AccessKey*",
                    "Resource": "<arn-of-iam-user-created-in-step-2>"
                },
                {
                    "Effect": "Allow",
                    "Action": "secretsmanager:UpdateSecret",
                    "Resource": "<arn-of-secret-stored-in-secret-manager>"
                }
            ]
        }
        

    Here is the JSON for the policy in my example:
     

    Figure 13: Example policy

    Figure 13: Example policy

  4. Select Review Policy, and then specify a name and a description for the policy. In my example, I have specified the name of the policy as iam-secretsmanager-access-for-lambda.

    Here is how it looks in my example:
     

    Figure 14: Specify a name and description for the policy

    Figure 14: Specify a name and description for the policy

  5. Select Create Policy

Now, create an IAM role and attach this policy.

  1. In the navigation bar, select Roles and select Create Role.
  2. Under Choose the service that will use this role, select Lambda, and then select Next: Permissions.
  3. On the next page, select the policy you just created and select Next: Tags. Add an optional tag and select Next: Review.
  4. Specify a name for the role and description, and then select Create role. In my example, I have named the role: LambdaRoleRotatateSesSecret.

Now, you will create a Lambda function that will assume the created role:

  1. Log on to the AWS Lambda console and select Create Function
  2. Specify a name for the function, and then, under Runtime, select Python 3.7.
  3. Under Execution role, select Use an existing role, and then select the role you created earlier.

    Here are the settings I used in my example:
     

    Figure 15: Settings on the "Create function" page

    Figure 15: Settings on the “Create function” page

  4. Select Create function, copy the following Python code, and then paste it in the Function Code section.
    
        import boto3
        import os      #required to fetch environment variables
        import hmac    #required to compute the HMAC key
        import hashlib #required to create a SHA256 hash
        import base64  #required to encode the computed key
        import sys     #required for system functions
        
        iam = boto3.client('iam')
        sm = boto3.client('secretsmanager')
        
        SES_IAM_USERNAME = os.environ['SES_IAM_USERNAME']
        SECRET_ID = os.environ['SECRET_ID']
        
        def lambda_handler(event, context):
            print("Getting current credentials...")
            old_key = iam.list_access_keys(UserName=SES_IAM_USERNAME)['AccessKeyMetadata'][0]['AccessKeyId']
        
            print("Creating new credentials...")
            new_key = iam.create_access_key(UserName=SES_IAM_USERNAME)
            print("New credentials created...")
            
            smtp_username = '%s' % (new_key['AccessKey']['AccessKeyId'])
            iam_sec_access_key = '%s' % (new_key['AccessKey']['SecretAccessKey'])
            
             
            # These variables are used when calculating the SMTP password.
            message = 'SendRawEmail'
            version = '\x02'
            
            # Compute an HMAC-SHA256 key from the AWS secret access key.
            signatureInBytes = hmac.new(iam_sec_access_key.encode('utf-8'),message.encode('utf-8'),hashlib.sha256).digest()
            # Prepend the version number to the signature.
            signatureAndVersion = version.encode('utf-8') + signatureInBytes
            # Base64-encode the string that contains the version number and signature.
            smtpPassword = base64.b64encode(signatureAndVersion)
            # Decode the string and print it to the console.
            ses_smtp_pass = smtpPassword.decode('utf-8')
            secret_string = '{"%s": "%s"}' % (new_key['AccessKey']['AccessKeyId'], ses_smtp_pass)
            print("Updating credentials in SecretsManager...")
            sm_res = sm.update_secret(
                SecretId=SECRET_ID,
                SecretString=secret_string
                )
            print(sm_res)
            
            print("Deleting old key")
            del_res = iam.delete_access_key(
                UserName=SES_IAM_USERNAME,
                AccessKeyId=old_key
                )
            print(del_res)
        

    Here is what it will look like:
     

    Figure 16: The Python code pasted in the "Function Code" section

    Figure 16: The Python code pasted in the “Function Code” section

  5. In the Environment variables section, specify the two environment variables required by the Lambda Python code as follows:
    
                SECRET_ID: AWS-SES
                SES_IAM_USERNAME: <SES-IAM-USERNAME-NOTED-IN-STEP 2>  
        

    Here is how my environment variables look:
     

    Figure 17: The environment variables for the Lambda function

    Figure 17: The environment variables for the Lambda function

  6. Select Save.

    You have now created a Lambda function that can update the SES credentials stored in AWS Secrets Manager.

    You will now set up CloudWatch to trigger the Lambda function at scheduled intervals.

  7. Open the Amazon CloudWatch Console.
  8. In the navigation pane, select Rules and, in the content pane, select Create Rule.
  9. Under Event Source, select Schedule, and then select Fixed rate of. Specify how often you would like CloudWatch to trigger the Lambda function. In my example, I have chosen to update the SES credentials every 30 days.
  10. Under Targets, select Add Target, and then select Lambda Function.
  11. In Function, select the Lambda function you just created, and then select Configure details.
     
    Figure 18: Create new CloudWatch rule

    Figure 18: Create new CloudWatch rule

  12. Specify a name for the rule, enter a description, make sure the State check box is selected, and then select Create rule.

The SES credentials stored in AWS Secrets Manager will now be updated based on the scheduled intervals you specified in CloudWatch.
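If you don’t want to wait for the first scheduled run, you can invoke the function once yourself, either with the Test button in the Lambda console or from the SDK, and then check that the secret now contains a new SMTP user name. A minimal boto3 sketch follows; the function name is a placeholder for whatever you chose when you created it.

    import boto3

    lambda_client = boto3.client('lambda', region_name='eu-west-1')

    # Placeholder function name: replace with the name of your rotation function
    response = lambda_client.invoke(FunctionName='RotateSesSmtpCredentials')
    print(response['StatusCode'])  # 200 indicates a successful synchronous invocation

    # Confirm the secret now holds the newly generated SMTP credentials
    sm = boto3.client('secretsmanager', region_name='eu-west-1')
    print(sm.get_secret_value(SecretId='AWS-SES')['SecretString'])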

Conclusion

In this post, I showed how you can set up a solution to remind your AWS Directory Service for Microsoft Active Directory users to change their passwords before expiration. I demonstrated how you can achieve this using a combination of a script and Amazon SES. I also showed you how you can configure rotation of the Amazon SES credentials on your preferred schedule.

If you have comments about this post, submit them in the “Comments” section below. If you have questions or suggestions, please start a new thread on the Amazon SES forum.

Want more AWS Security how-to content, news, and feature announcements? Follow us on Twitter.

Author

Tekena Orugbani

Tekena is a Cloud Support Engineer at the AWS Cape Town office. He has many years of experience working with Windows Systems, virtualization/cloud technologies, and directory services. When he’s not helping customers make the most of their cloud investments, he enjoys hanging out with his family and watching Premier League football (soccer).

New Whitepaper: Active Directory Domain Services on AWS

Post Syndicated from Vinod Madabushi original https://aws.amazon.com/blogs/architecture/new-whitepaper-active-directory-domain-services-on-aws/

The cloud is now at the center of most Enterprise IT strategies. As such, a well-planned move to the cloud can result in immediate business payoff. To achieve such success, it’s important that you adopt Microsoft Active Directory (AD), the foundation of many large enterprise Windows and .NET applications, in a secure, scalable, and highly available manner within the AWS Cloud.

AWS offers flexible options for running AD, so as a customer it’s essential to select an architecture well-suited to support your applications. AWS offers a fully managed option called AWS Managed Active Directory, which enables your directory-aware workloads to use Managed Active Directory in AWS. You can also run Active Directory on Amazon Elastic Compute Cloud (Amazon EC2) and manage both the EC2 Instances and Active Directory, which provides the flexibility needed to extend an existing Active Directory domain to the AWS infrastructure.

In this regard, we are very excited to release the Active Directory Domain Services on AWS whitepaper. This whitepaper describes best practices for running Active Directory on AWS, including different architectural approaches for running AWS Managed AD and Active Directory on EC2 instances. In addition, this document discusses the design considerations, security, network connectivity, and multi-region deployment of Active Directory for both scenarios.

Read the whitepaper: Active Directory on AWS.

About the author

Vinod MadabushiVinod Madabushi is an Enterprise Solutions Architect and subject matter expert in Microsoft technologies, including Active Directory. He works with customers on building highly available, scalable, and resilient applications on AWS Cloud. He’s passionate about solving technology challenges and helping customers with their cloud journey.

 

How to enable secure access to Kibana using AWS Single Sign-On

Post Syndicated from Remek Hetman original https://aws.amazon.com/blogs/security/how-to-enable-secure-access-to-kibana-using-aws-single-sign-on/

Amazon Elasticsearch Service (Amazon ES) is a fully managed service to search, analyze, and visualize data in real-time. The service offers integration with Kibana, an open-source data visualization and exploration tool that lets you perform log and time-series analytics and application monitoring.

Many enterprise customers who want to use these capabilities find it challenging to secure access to Kibana. Kibana users have direct access to data stored in Amazon ES—so it’s important that only authorized users have access to Kibana. Data stored in Amazon ES can also have different classifications. For example, you might have one domain that stores confidential data and another that stores public data. In this case, securing access requires you not only to prevent unauthorized users from accessing the data but also to grant different groups of users access to different data classifications.

In this post, I’ll show you how to secure access to Kibana through AWS Single Sign-On (AWS SSO) so that only users authenticated to Microsoft Active Directory can access and visualize data stored in Amazon ES. AWS SSO uses standard identity federation via SAML, similar to Microsoft ADFS or PingFederate. AWS SSO integrates with AWS Managed Microsoft AD or with Active Directory hosted on premises or on an EC2 instance through AWS Active Directory Connector, which means that your employees can sign in to the AWS SSO user portal using their existing corporate Active Directory credentials. In addition, I’ll show you how to map users between an Amazon ES domain and a specific Active Directory security group so that you can limit who has access to a given Amazon ES domain.

Prerequisites and assumptions

You need the following for this walkthrough:

Solution overview

The architecture diagram below illustrates how the solution will authenticate users into Kibana:
 

Figure 1: Architectural diagram

Figure 1: Architectural diagram

  1. The user requests access to Kibana.
  2. Kibana sends an HTML form back to the browser with a SAML request for authentication from Cognito. The HTML form is automatically posted to Cognito, the user is prompted to select AWS SSO, and the authentication request is passed to AWS SSO.
  3. AWS SSO sends a challenge to the browser for credentials.
  4. The user logs in to AWS SSO. AWS SSO authenticates the user against AWS Directory Service, which may in turn authenticate the user against an on-premises Active Directory.
  5. AWS SSO sends a SAML response to the browser.
  6. The browser posts the response to Cognito. Amazon Cognito validates the SAML response to verify that the user has been successfully authenticated, and then passes the information back to Kibana.
  7. Access to Kibana and Elasticsearch is granted.

Deployment and configuration

In this section, I’ll show you how to deploy and configure the security aspects described in the solution overview.

Amazon Cognito authentication for Kibana

First, I’m going to highlight some initial configuration settings for Amazon Cognito and Amazon ES. I’ll show you how to create a Cognito user pool, a user pool domain, and an identity pool, and then how to configure Kibana authentication under Elasticsearch. For each of the commands, remember to replace the placeholders with your own values.

If you need more details on how to set up Amazon Cognito authentication for Kibana, please refer to the service documentation.

  1. Create an Amazon Cognito user pool with the following command:

    aws cognito-idp create-user-pool --pool-name <pool name, for example "Kibana">

    From the output, copy down the user pool id. You’ll need to provide it in a couple of places later in the process.

    
                {
                    ...
                    "CreationDate": 1541690691.411,
                    "EstimatedNumberOfUsers": 0,
                    "Id": "us-east-1_0azgJMX31",
                    "LambdaConfig": {}
                }

  2. Create a user pool domain:

    aws cognito-idp create-user-pool-domain --domain <domain name> --user-pool-id <pool id created in step 1>

    The user pool domain name MUST be the same as your Amazon Elasticsearch domain name. If you receive an error that “domain already exists,” it means the name is already in use and you must choose a different name.

  3. Create your Amazon Cognito federated identities:

    aws cognito-identity create-identity-pool --identity-pool-name <identity pool name, for example Kibana> --allow-unauthenticated-identities

    To make this command work, you have to temporarily allow unauthenticated access by adding --allow-unauthenticated-identities. Unauthenticated access will be removed by Amazon Elasticsearch when you enable Kibana authentication in the next step.

  4. Create an Amazon Elasticsearch domain. To do so, from the AWS Management Console, navigate to Amazon Elasticsearch and select Create a new domain.
    1. Make sure that the value you enter under Elasticsearch domain name matches the Cognito user pool domain you created in step two.
    2. Under Kibana authentication, complete the form with the following values, as shown in the screenshot:
      • For Cognito User Pool, enter the name of the pool you created in step one.
      • For Cognito Identity Pool, enter the identity you created in step three.
         
        Figure 2: Enter the identity you created in step three

        Figure 2: Enter the identity you created in step three

  5. Now you’re ready to assign IAM roles to your identity pool. Those roles will be saved with your identity pool, and whenever Cognito receives a request to authorize a user, it will automatically use these roles.
    1. From the AWS Management Console, go to Amazon Cognito and select Manage Identity Pools.
    2. Select the identity pool you created in step three.
    3. You should receive the following message: You have not specified roles for this identity pool. Click here to fix it. Follow the link.
       
      Figure 3: Follow the "Click here to fix it" link

      Figure 3: Follow the “Click here to fix it” link

    4. Under Edit identity pool, next to Unauthenticated role, select Create new role.
    5. Select Allow and save your changes.
    6. Next to Authenticated role, select Create new role.
    7. Select Allow and save your changes.
  6. Finally, modify the Amazon Elasticsearch access policy:
    1. From the AWS Management Console, go to AWS Identity and Access Management (IAM).
    2. Search for the authenticated role you created in step five and copy the role ARN.
    3. From the management console, go to Amazon Elasticsearch Service, and then select the domain you created in step four.
    4. Select Modify access policy and add the following policy (replace the ARN of the authenticated role and the domain ARN with your own values):
      
                      {
                          "Effect": "Allow",
                          "Principal": {
                              "AWS": "<ARN of Authenticated role>"
                          },
                          "Action": "es:ESHttp*",
                          "Resource": "<Domain ARN/*>"
                      }
                      

      Note: For more information about the Amazon Elasticsearch Service access policy visit: https://docs.aws.amazon.com/elasticsearch-service/latest/developerguide/es-ac.html
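If you manage the domain configuration with the SDK rather than the console, the same statement can be applied programmatically. The following is a minimal boto3 sketch, assuming you substitute your own domain name, authenticated role ARN, and domain ARN:

    import boto3
    import json

    es = boto3.client('es', region_name='us-east-1')

    # Placeholders: replace the role ARN, domain ARN, and domain name with your own values
    access_policy = {
        "Version": "2012-10-17",
        "Statement": [{
            "Effect": "Allow",
            "Principal": {"AWS": "<ARN of Authenticated role>"},
            "Action": "es:ESHttp*",
            "Resource": "<Domain ARN>/*"
        }]
    }

    es.update_elasticsearch_domain_config(
        DomainName='<your-domain-name>',
        AccessPolicies=json.dumps(access_policy)
    )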

Configuring AWS Single Sign-On

In this section, I’ll show you how to configure AWS Single Sign-On. In this solution, AWS SSO is used not only to integrate with Microsoft AD but also as a SAML 2.0 identity federation provider. SAML 2.0 is an industry standard used for securely exchanging SAML assertions that pass information about a user between a SAML authority (in this case, Microsoft AD), and a SAML consumer (in this case, Amazon Cognito).

Add Active Directory

  1. From the AWS Management Console, go to AWS Single Sign-On.
  2. If this is the first time you’re configuring AWS SSO, you’ll be asked to enable AWS SSO. Follow the prompt to do so.
  3. From the AWS SSO Dashboard, select Manage your directory.
     
    Figure 4: Select Manage your directory

    Figure 4: Select Manage your directory

  4. Under Directory, select Change directory.
     
    Figure 5: Select "Change directory"

    Figure 5: Select “Change directory”

  5. On the next screen, select Microsoft AD Directory, select the directory you created under AWS Directory Service as a part of prerequisites, and then select Next: Review.
     
    Figure 6: Select "Microsoft AD Directory" and then select the directory you created as a part of the prerequisites

    Figure 6: Select “Microsoft AD Directory” and then select the directory you created as a part of the prerequisites

  6. On the Review page, confirm that you want to switch from an AWS SSO directory to an AWS Directory Service directory, and then select Finish.
    1. Once setup is complete, select Proceed to the directory.

Add application

  1. From AWS SSO Dashboard, select Applications and then Add a new application. Select Add a custom SAML 2.0 application.
     
    Figure 7: Select "Application" and then "Add a new application"

    Figure 7: Select “Application” and then “Add a new application”

  2. Enter a display name for your application (for example, “Kibana”) and scroll down to Application metadata. Select the link that reads If you don’t have a metadata file, you can manually type your metadata values.
  3. Enter the following values, being sure to replace the placeholders with your own values:
    1. Application ACS URL: https://<Elasticsearch domain name>.auth.<region>.amazoncognito.com/saml2/idpresponse
    2. Application SAML audience: urn:amazon:cognito:sp:<user pool id>
  4. Select Save changes.
     
    Figure 8: Select "Save changes"

    Figure 8: Select “Save changes”

Add attribute mappings

Switch to the Attribute mappings tab. Next to Subject, enter ${user:name} and select unspecified under Format, as shown in the following screenshot. Select Save changes.
 

Figure 9: Enter "${user:name}" and select "Unspecified"

Figure 9: Enter “${user:name}” and select “Unspecified”

For more information about attribute mappings visit: https://docs.aws.amazon.com/singlesignon/latest/userguide/attributemappingsconcept.html

Grant access to Kibana

To manage who has access to Kibana, switch to the Assigned users tab and select Assign users. Add individual users or groups.

Download SAML metadata

Next, you’ll need to download the AWS SSO SAML metadata. The SAML metadata contains information such as the SSO entity ID, public certificate, attributes schema, and other information that’s necessary for Cognito to federate with a SAML identity provider. To download the metadata .xml file, switch to the Configuration tab and select Download metadata file.
 

Figure 10: Select "Download metadata file"

Figure 10: Select “Download metadata file”

Adding an Amazon Cognito identity provider

The last step is to add the identity provider to the user pool.

  1. From the AWS Management Console, go to Amazon Cognito.
    1. Select Manage User Pools, and then select the user pool you created in the previous section.
    2. From the left side menu, under Federation, select Identity providers, and then select SAML.
    3. Select Select file, and then select the Amazon SSO metadata .xml file you downloaded in previous step.
    4.  

      Figure 11: Select "Select file" and select the Amazon SSO metadata .xml file you downloaded in previous step

      Figure 11: Select “Select file” and then select the Amazon SSO metadata .xml file you downloaded in previous step

    5. Enter the provider name (for example, “AWS SSO”), and then select Create provider.
  2. From the left side menu, under App integration, select App client settings.
  3. Uncheck Cognito User Pool, check the name of provider you created in step one, and select Save Changes.
     
    Figure 12: Uncheck "Cognito User Pool"

    Figure 12: Uncheck “Cognito User Pool”

At this point, the configuration is finished. When you open the Kibana URL, you should be redirected to AWS SSO and asked to authenticate using your Active Directory credentials. Keep in mind that if the Amazon Elasticsearch domain was created inside a VPC, it won’t be accessible from the internet, only from within the VPC.

Managing multiple Amazon ES domains

In scenarios where different users need access to different Amazon ES domains, the solution would be as follows for each Amazon ES domain:

  1. Create one Active Directory Security Group per Amazon ES domain
  2. Create an Amazon Cognito user pool for each domain
  3. Add new applications to AWS SSO and grant permission to corresponding security groups
  4. Assign users to the appropriate security group

Deleting domains that use Amazon Cognito Authentication for Kibana

To prevent domains that use Amazon Cognito authentication for Kibana from becoming stuck in a configuration state of “Processing,” it’s important that you delete Amazon ES domains before deleting their associated Amazon Cognito user pools and identity pools.
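If you script your cleanup, keep the same ordering. The following is a short boto3 sketch of the idea, with placeholder identifiers:

    import boto3

    es = boto3.client('es', region_name='us-east-1')
    cognito_idp = boto3.client('cognito-idp', region_name='us-east-1')
    cognito_identity = boto3.client('cognito-identity', region_name='us-east-1')

    # 1. Delete the Amazon ES domain first and wait for the deletion to complete...
    es.delete_elasticsearch_domain(DomainName='<your-domain-name>')

    # 2. ...and only then remove the Cognito user pool and identity pool it referenced
    cognito_idp.delete_user_pool(UserPoolId='<user-pool-id>')
    cognito_identity.delete_identity_pool(IdentityPoolId='<identity-pool-id>')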

Conclusion

I’ve outlined an approach to securing access to Kibana by integrating Amazon Cognito with AWS SSO and AWS Directory Service. This allows you to narrow the scope of users who have access to each Amazon Elasticsearch domain by configuring separate applications in AWS SSO for each of the domains.

Want more AWS Security how-to content, news, and feature announcements? Follow us on Twitter.

Author

Remek Hetman

Remek is a Senior Cloud Infrastructure Architect with Amazon Web Services Professional Services. He works with AWS financial enterprise customers providing technical guidance and assistance for Infrastructure, Security, DevOps, and Big Data to help them make the best use of AWS services. Outside of work, he enjoys spending time actively, and pursuing his passion – astronomy.

How to create and manage users within AWS Single Sign-On

Post Syndicated from Vijay Sharma original https://aws.amazon.com/blogs/security/how-to-create-and-manage-users-within-aws-sso/

AWS Single Sign-On (AWS SSO) is a cloud service that allows you to grant your users access to AWS resources, such as Amazon EC2 instances, across multiple AWS accounts. By default, AWS SSO now provides a directory that you can use to create users, organize them in groups, and set permissions across those groups. You can also grant the users that you create in AWS SSO permissions to applications such as Salesforce, Box, and Office 365. AWS SSO and its directory are available at no additional cost to you.

A directory is a key building block that allows you to manage the users to whom you want to grant access to AWS resources and applications. AWS Identity and Access Management (IAM) provides a way to create users that can be used to access AWS resources within one AWS account. However, many businesses prefer an approach that enables users to sign in once with a single credential and access multiple AWS accounts and applications. You can now create your users centrally in AWS SSO and manage user access to all your AWS accounts and applications. Your users sign in to a user portal with a single set of credentials configured in AWS SSO, allowing them to access all of their assigned accounts and applications in a single place.

Note: If you manage your users in a Microsoft Active Directory (Microsoft AD) directory, AWS SSO already provides you with an option to connect to a Microsoft AD directory. By connecting your Microsoft AD directory once with AWS SSO, you can assign permissions for AWS accounts and applications directly to your users by easily looking up users and groups from your Microsoft AD directory. Your users can then use their existing Microsoft AD credentials to sign into the AWS SSO user portal and access their assigned accounts and applications in a single place. Customers who manage their users in an existing Lightweight Directory Access Protocol (LDAP) directory or through a cloud identity provider such as Microsoft Azure AD can continue to use IAM federation to enable their users’ access to AWS resources.

How to create users and groups in AWS SSO

You can create users in AWS SSO by configuring their email address and name. When you create a user, AWS SSO sends an email to the user by default so that they can set their own password. Your user will use their email address and a password they configure in AWS SSO to sign into the user portal and access all of their assigned accounts and applications in a single place.

You can also add the users that you create in AWS SSO to groups you create in AWS SSO. In addition, you can create permissions sets that define permitted actions on an AWS resource, and assign them to your users and groups. For example, you can grant the DevOps group permissions to your production AWS accounts. When you add users to the DevOps group, they get access to your production AWS accounts automatically.

In this post, I will show you how to create users and groups in AWS SSO, how to create permission sets, how to assign your groups and users to permission sets and AWS accounts, and how your users can sign into the AWS SSO user portal to access AWS accounts. To learn more about how to grant users that you create in AWS SSO permissions to business applications such as Office 365 and Salesforce, see Manage SSO to Your Applications.

Walk-through prerequisites

For this walk-through, I assume the following:

Overview

To illustrate how to add users in AWS SSO and how to grant permissions to multiple AWS accounts, imagine that you’re the IT manager for a company, Example.com, that wants to make it easy for its users to access resources in multiple AWS accounts. Example.com has five AWS accounts: a master account (called MasterAcct), two developer accounts (DevAccount1 and DevAccount2), and two production accounts (ProdAccount1 and ProdAccount2). Example.com uses AWS Organizations to manage these accounts and has already enabled AWS SSO.

Example.com has two developers, Martha and Richard, who need full access to Amazon EC2 and Amazon S3 in the developer accounts (DevAccount1 and DevAccount2) and read-only access to EC2 and S3 resources in the production accounts (ProdAccount1 and ProdAccount2).

The following diagram illustrates how you can grant Martha and Richard permissions to the developer and production accounts in four steps:

  1. Add users and groups in AWS SSO: Add users Martha and Richard in AWS SSO by configuring their names and email addresses. Add a group called Developers in AWS SSO and add Martha and Richard to the Developers group.
  2. Create permission sets: Create two permission sets. In the first permission set, include policies that give full access to Amazon EC2 and Amazon S3. In second permission set, include policies that give read-only access to Amazon EC2 and Amazon S3.
  3. Assign groups to accounts and permission sets: Assign the Developers group to your developer accounts and assign the permission set that gives full access to Amazon EC2 and Amazon S3. Assign the Developers group to your production accounts, too, and assign the permission set that gives read-only access to Amazon EC2 and Amazon S3. Martha and Richard now have full access to Amazon EC2 and Amazon S3 in the developer accounts and read-only access in the production accounts.
  4. Users sign into the User Portal to access accounts: Martha and Richard receive email from AWS to set their passwords with AWS SSO. Martha and Richard can now sign into the AWS SSO User Portal using their email addresses and the passwords they set with AWS SSO, allowing them to access their assigned AWS accounts.
Figure 1: Architecture diagram

Figure 1: Architecture diagram

Step 1: Add users and groups in AWS SSO

To add users in AWS SSO, navigate to the AWS SSO Console. Then, follow the steps below to add Martha as a user, to create a group called Developers, and to add Martha to the Developers group in AWS SSO.

  1. In the AWS SSO Dashboard, choose Manage your directory to navigate to the Directory tab.
    Figure 2: Navigating to the "Manage your directory" page

    Figure 2: Navigating to the “Manage your directory” page

  2. By default, AWS SSO provides you a directory that you can use to manage users and groups in AWS SSO. To add a user in AWS SSO, choose Add user. If you previously connected a Microsoft AD directory with AWS SSO, you can switch to using the directory that AWS SSO now provides by default by following the steps in Change Directory.
    Figure 3: Adding new users to your directory

    Figure 3: Adding new users to your directory

  3. On the Add User page, enter an email address, first name, and last name for the user, then create a display name. In this example, you’re adding “Martha Rivera” as a user. For the password, choose Send an email to the user with password instructions. This allows users to set their own passwords.

    Optionally, you can also set a mobile phone number and add additional user attributes.

    Figure 4: Adding user details

    Figure 4: Adding user details

  4. Next, you’re ready to add the user to groups. First, you need to create a group. Later, in Step 3, you can grant your group permissions to an AWS account so that any users added to the group will inherit the group’s permissions automatically. In this example, you will create a group called Developers and add Martha to the group. To do so, from the Add user to groups page, choose Create group.
    Figure 5: Creating a group

    Figure 5: Creating a new group

  5. In the Create group window, title your group by filling out the Group name field. For this example, enter Developers. Optionally, you can also enter a description of the group in the Description field. Choose Create to create the group.
    Figure 6: Adding a name and description to your new group

    Figure 6: Adding a name and description to your new group

  6. On the Add users to group page, check the box next to the group you just created, and then choose Add user. Following this process will allow you to add Martha to the Developers group.
    Figure 7: Adding a user to your group

    Figure 7: Adding a user to your new group

You’ve successfully created the user Martha and added her to the Developers group. You can repeat sub-steps 2, 3, and 6 above to create more users and add them to the group. This is the process you should follow to create the user Richard and add him to the Developers group.

Next, you’ll grant the Developers group permissions to AWS resources within multiple AWS accounts. To follow along, you’ll first need to create permission sets.

Step 2: Create permission sets

To grant user permissions to AWS resources, you must create permission sets. A permission set is a collection of administrator-defined policies that AWS SSO uses to determine a user’s permissions for any given AWS account. Permission sets can contain either AWS managed policies or custom policies that are stored in AWS SSO. Policies contain statements that represent individual access controls (allow or deny) for various tasks. This determines what tasks users can or cannot perform within the AWS account. To learn more about permission sets, see Permission Sets.

For this use case, you’ll create two permissions sets: 1) EC2AndS3FullAccess, which has AmazonEC2FullAccess and AmazonS3FullAccess managed policies attached and 2) EC2AndS3ReadAccess, which has AmazonEC2ReadOnlyAccess and AmazonS3ReadOnlyAccess managed policies attached. Later, in Step 3, you can assign groups to these permissions sets and AWS accounts, so that your users have access to these resources. To learn more about creating permission sets with different levels of access, see Create Permission Set.

Follow the steps below to create permission sets:

  1. Navigate to the AWS SSO Console and choose AWS accounts in the left-hand navigation menu.
  2. Switch to the Permission sets tab on the AWS Accounts page, and then choose Create permissions set.
    Figure 8: Creating a permission set

    Figure 8: Creating a permission set

  3. On the Create new permissions set page, choose Create a custom permission set. To learn more about choosing between an existing job function policy and a custom permission set, see Create Permission Set.
    Figure 9: Customizing a permission set

    Figure 9: Customizing a permission set

  4. Enter EC2AndS3FullAccess in the Name field and choose Attach AWS managed policies. Then choose AmazonEC2FullAccess and AmazonS3FullAccess. Choose Create to create the permission set.
    Figure 10: Attaching AWS managed policies to your permission set

    Figure 10: Attaching AWS managed policies to your permission set

You’ve successfully created a permission set. You can use the steps above to create another permission set, called EC2AndS3ReadAccess, by attaching the AmazonEC2ReadOnlyAccess and AmazonS3ReadOnlyAccess managed policies. Now you’re ready to assign your groups to accounts and permission sets.
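The console flow above is what this post uses. If you later need to create permission sets programmatically, the AWS SSO administration APIs expose equivalent operations; the sketch below is only an illustration, and the SSO instance ARN is a placeholder you would look up first (for example, with list_instances).

    import boto3

    sso_admin = boto3.client('sso-admin', region_name='us-east-1')

    # Placeholder: the ARN of your AWS SSO instance
    INSTANCE_ARN = 'arn:aws:sso:::instance/ssoins-EXAMPLE'

    # Create a permission set equivalent to EC2AndS3FullAccess
    ps = sso_admin.create_permission_set(
        InstanceArn=INSTANCE_ARN,
        Name='EC2AndS3FullAccess',
        Description='Full access to Amazon EC2 and Amazon S3'
    )
    ps_arn = ps['PermissionSet']['PermissionSetArn']

    # Attach the same AWS managed policies chosen in the console steps
    for policy_arn in ('arn:aws:iam::aws:policy/AmazonEC2FullAccess',
                       'arn:aws:iam::aws:policy/AmazonS3FullAccess'):
        sso_admin.attach_managed_policy_to_permission_set(
            InstanceArn=INSTANCE_ARN,
            PermissionSetArn=ps_arn,
            ManagedPolicyArn=policy_arn
        )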

Step 3: Assign groups to accounts and permission sets

In this step, you’ll assign your Developers group full access to Amazon EC2 and Amazon S3 in the developer accounts and read-only access to these resources in the production accounts. To do so, you’ll assign the Developers group to the EC2AndS3FullAccess permission set and to the two developer accounts (DevAccount1 and DevAccount2). Similarly, you’ll assign the Developers group to the EC2AndS3ReadAccess permission set and to the production AWS accounts (ProdAccount1 and ProdAccount2).

Follow the steps below to assign the Developers group to the EC2AndS3FullAccess permission set and developer accounts (DevAccount1 and DevAccount2). To learn more about how to manage access to your AWS accounts, see Manage SSO to Your AWS Accounts.

  1. Navigate to the AWS SSO Console and choose AWS Accounts in the left-hand navigation menu.
  2. Switch to the AWS organization tab and choose the accounts to which you want to assign your group. For this example, select accounts DevAccount1 and DevAccount2 from the list of AWS accounts. Next, choose Assign users.
    Figure 11: Assigning users to your accounts

    Figure 11: Assigning users to your accounts

  3. On the Select users and groups page, type the name of the group you want to add into the search box and choose Search. For this example, you will be looking for the group called Developers. Check the box next to the correct group and choose Next: Permission Sets.
    Figure 12: Setting permissions for the "Developers" group

    Figure 12: Setting permissions for the “Developers” group

  4. On the Select permissions sets page, select the permission sets that you want to assign to your group. For this use case, you’ll select the EC2AndS3FullAccess permission set. Then choose Finish.
    Figure 13: Choosing permission sets

    Figure 13: Choosing permission sets

You’ve successfully granted users in the Developers group access to accounts DevAccount1 and DevAccount2, with full access to Amazon EC2 and Amazon S3.

You can follow the same steps above to grant users in the Developers group access to accounts ProdAccount1 and ProdAccount2 with the permissions in the EC2AndS3ReadAccess permission set. This will grant the users in the Developers group read-only access to Amazon EC2 and Amazon S3 in the production accounts.

Figure 14: Notification of successful account configuration

Figure 14: Notification of successful account configuration

Step 4: Users sign into User Portal to access accounts

Your users can now sign into the AWS SSO User Portal to manage resources in their assigned AWS accounts. The user portal provides your users with single sign-on access to all their assigned accounts and business applications. From the user portal, your users can sign into multiple AWS accounts by choosing the AWS account icon in the portal and selecting the account that they want to access.

You can follow the steps below to see how Martha signs into the user portal to access her assigned AWS accounts.

  1. When you added Martha as a user in Step 1, you selected the option Send the user an email with password setup instructions. AWS SSO sent instructions to set a password to Martha at the email that you configured when creating the user. This is the email that Martha received:
    Figure 15: AWS SSO password configuration email

    Figure 15: AWS SSO password configuration email

  2. To set her password, Martha will select Accept invitation in the email that she received from AWS SSO. Selecting Accept invitation will take Martha to a page where she can set her password. After Martha sets her password, she can navigate to the User Portal.
    Figure 16: User Portal sign-in

    Figure 16: User Portal sign-in

  3. In the User Portal, Martha can select the AWS Account icon to view all the AWS accounts to which she has permissions.
    Figure 17: View of AWS Account icon from User Portal

    Figure 17: View of AWS Account icon from User Portal

  4. Martha can now see the developer and production accounts that you granted her permissions to in previous steps. For each account, she can also see the list of roles that she can assume within the account. For example, for DevAccount1 and DevAccount2, Martha can assume the EC2AndS3FullAccess role that gives her full access to manage Amazon EC2 and Amazon S3. Similarly, for ProdAccount1 and ProdAccount2, Martha can assume the EC2AndS3ReadAccess role that gives her read-only access to Amazon EC2 and Amazon S3. Martha can select accounts and choose Management Console next to the role she wants to assume, letting her sign into the AWS Management Console to manage AWS resources. To switch to a different account, Martha can navigate to the User Portal and select a different account. From the User Portal, Martha can also get temporary security credentials for short-term access to resources in an AWS account using AWS Command Line Interface (CLI). To learn more, see How to Get Credentials of an IAM Role for Use with CLI Access to an AWS Account.
    Figure 18: Switching accounts from the User Portal

    Figure 18: Switching accounts from the User Portal

  5. Martha bookmarks the user portal URL in her browser so that she can quickly access the user portal the next time she wants to access AWS accounts.

Summary

By default, AWS now provides you with a directory that you can use to manage users and groups within AWS SSO and to grant user permissions to resources in multiple AWS accounts and business applications. In this blog post, I showed you how to manage users and groups within AWS SSO and grant them permissions to multiple AWS accounts. I also showed how your users sign into the user portal to access their assigned AWS accounts.

If you have feedback or questions about AWS SSO, start a new thread on the AWS SSO forum or contact AWS Support. If you have feedback about this blog post, submit comments in the Comments section below.

Want more AWS Security news? Follow us on Twitter.

Vijay Sharma

Vijay is a Senior Product Manager with AWS Identity.

How to seamlessly domain join Amazon EC2 instances to a single AWS Managed Microsoft AD Directory from multiple accounts and VPCs

Post Syndicated from Peter Pereira original https://aws.amazon.com/blogs/security/how-to-domain-join-amazon-ec2-instances-aws-managed-microsoft-ad-directory-multiple-accounts-vpcs/

You can now share a single AWS Directory Service for Microsoft Active Directory (also known as an AWS Managed Microsoft AD) with multiple AWS accounts within an AWS Region. This capability makes it easier and more cost-effective for you to manage directory-aware workloads from a single directory across accounts and Amazon Virtual Private Clouds (Amazon VPC). Instead of needing to manually domain join your Amazon Elastic Compute Cloud instances (EC2 instances) or create one directory per account and VPC, you can use your directory from any AWS account and from any VPC within an AWS Region.

In this post, I show you how to launch two EC2 instances, each in a separate Amazon VPC within the same AWS account (the directory consumer account), and then seamlessly domain-join both instances to a directory in another account (the directory owner account). You’ll accomplish this in four steps:

  1. Create an AWS Managed Microsoft AD directory.
  2. Establish networking connectivity between VPCs.
  3. Share the directory with the directory consumer account.
  4. Launch Amazon EC2 instances and seamlessly domain join to the directory.

Solution architecture

The following diagram shows the steps you’ll follow to use a single AWS Managed Microsoft AD in multiple accounts. Note that when you complete Step 3, AWS Managed Microsoft AD will create a shared directory in the directory consumer account. The shared directory contains the metadata that enables the EC2 seamless domain join to locate the directory in the directory owner account. Note that there are additional charges for directory sharing.
 

Figure 1: Architecture diagram showing directory sharing

Figure 1: Architecture diagram showing directory sharing

Step 1: Create an AWS Microsoft AD directory

First, follow the steps to create an AWS Microsoft AD directory in your directory owner AWS Account and Amazon VPC. In the examples I use throughout this post, my domain name is example.com, but remember to replace this with your own domain name.

When you create your directory, you’ll have the option in Step 3: Choose VPC and subnets to choose the subnets in which to deploy your domain controllers. AWS Microsoft AD ensures that you select subnets from different Availability Zones. In my example, I have no subnet preference, so I choose No Preference from the Subnets drop-down list.
 

Figure 2: Selecting Subnet preference

Figure 2: Selecting Subnet preference

Select Next to review your configuration, and then select Create directory. It can take 20-45 minutes for the directory creation process to finish. While AWS Managed Microsoft AD creates the directory, you can move on to the next step.
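
If you prefer to script directory creation, the AWS CLI offers an equivalent call. The following is a sketch only; the domain name matches the example in this post, but the password, VPC ID, and subnet IDs are placeholders you would replace with your own values.

    # Sketch: create an AWS Managed Microsoft AD directory (placeholder password and network IDs)
    aws ds create-microsoft-ad \
        --name example.com \
        --short-name EXAMPLE \
        --password '<StrongDirectoryPassword>' \
        --edition Standard \
        --vpc-settings VpcId=vpc-0abc1234,SubnetIds=subnet-0aaa1111,subnet-0bbb2222

You can poll aws ds describe-directories until the directory stage shows Active before moving on to the next steps.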

Step 2: Establish networking connectivity between VPCs

To domain join your Amazon EC2 instances to your directory, you need to establish networking connectivity between the VPCs. There are multiple methods of establishing networking connectivity between two VPCs. In this post, I’ll show you how to use Amazon VPC peering by performing the following steps (a CLI sketch of the same configuration follows the list):

  1. Create one VPC peering connection between the directory owner VPC-0 and directory consumer VPC-1, then create another connection between the directory owner VPC-0 and directory consumer VPC-2. For reference, here are my own VPC details:

    VPC                         CIDR block
    Directory owner VPC-0       172.31.0.0/16
    Directory consumer VPC-1    10.0.0.0/16
    Directory consumer VPC-2    10.100.0.0/16
  2. Enable traffic routing between the peered VPCs by adding a route to your VPC route table that points to the VPC peering connection to route traffic to the other VPC in the peering connection. I’ve configured my directory owner VPC-0 route table by adding the following VPC peering connections:

    Destination         Target
    172.31.0.0/16       Local
    10.0.0.0/16         pcx-0
    10.100.0.0/16       pcx-1
  3. Configure each of the directory consumer VPC route tables by adding the peering connection with the directory owner VPC-0. If you want, you can also create and attach an Internet Gateway to your directory consumer VPCs. This enables the instances in the directory consumer VPCs to communicate with the AWS Systems Manager (SSM) service, whose agent performs the domain join. Here are my directory consumer VPC route table configurations:
    VPC-1 route table:

    Destination         Target
    10.0.0.0/16         Local
    172.31.0.0/16       pcx-0
    0.0.0.0/0           igw-0

    VPC-2 route table:

    Destination         Target
    10.100.0.0/16       Local
    172.31.0.0/16       pcx-1
    0.0.0.0/0           igw-1
  4. Next, configure your directory consumer VPCs’ security group to enable outbound traffic by adding the Active Directory protocols and ports to the outbound rules table.
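
If you’d rather script the peering and routing configuration, the following AWS CLI sketch shows the equivalent calls for directory consumer VPC-1; every ID is a placeholder, and you would repeat the same pattern for VPC-2 with pcx-1. Because the owner and consumer VPCs live in different accounts, the peering request is created in the directory owner account and accepted in the directory consumer account.

    # Sketch: request peering from the directory owner account (placeholder IDs)
    aws ec2 create-vpc-peering-connection \
        --vpc-id vpc-owner0 --peer-vpc-id vpc-consumer1 --peer-owner-id 111122223333
    # Accept the request from the directory consumer account
    aws ec2 accept-vpc-peering-connection --vpc-peering-connection-id pcx-0
    # Add a route on each side so traffic flows over the peering connection
    aws ec2 create-route --route-table-id rtb-owner0 \
        --destination-cidr-block 10.0.0.0/16 --vpc-peering-connection-id pcx-0
    aws ec2 create-route --route-table-id rtb-consumer1 \
        --destination-cidr-block 172.31.0.0/16 --vpc-peering-connection-id pcx-0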

Step 3: Share the directory with the directory consumer account

Now that your networking is in place, you must make your directory visible to the directory consumer account. You can accomplish this by sharing your directory with the directory consumer account. Directory sharing works at the account level, which also makes the directory visible to all VPCs within the directory consumer account.

AWS Managed Microsoft AD provides two directory sharing methods: AWS Organizations and Handshake:

  • AWS Organizations makes it easier to share the directory within your organization because you can browse and validate the directory consumer accounts. To use this option, your organization must have all features enabled, and your directory must be in the organization master account. This method of sharing simplifies your setup because it doesn’t require the directory consumer accounts to accept your directory sharing request.
  • Handshake enables directory sharing when you aren’t using AWS Organizations. The handshake method requires the directory consumer account to accept the directory sharing request.

In my example, I’ll walk you through the steps to use AWS Organizations to share a directory (a CLI alternative is sketched after these steps):

  1. Open the AWS Management Console, then select Directory Service and select the directory you want to share (in my case, example.com). Select the Actions button, and then the Share directory option.
  2. Select Share this directory with AWS accounts inside your organization, then choose the Enable Access to AWS Organizations button. This allows your AWS account to list all of the accounts in your organization in the AWS Directory Service console.
  3. Select your directory consumer account (in my example, Consumer Example) from the Organization accounts browser, then select the Add button.
     
    Figure 3: Select the account and then select "Add"

    Figure 3: Select the account and then select “Add”

  4. You should now be able to see your directory consumer account in the Selected Accounts table. Select the Share button to share your directory with that account:
     
    Figure 4: Selected accounts and the "Share" button

    Figure 4: Selected accounts and the “Share” button

    To share your directory with multiple directory consumer accounts, you can repeat steps 3 and 4 for each account.

    When you’re finished sharing, AWS Managed Microsoft AD will create a shared directory in each directory consumer account. The shared directory contains the metadata to locate the directory in the directory owner account. Each shared directory has a unique identifier (Shared directory ID). After you’ve shared your directory, you can find your shared directory IDs in the Scale & Share tab in the AWS Directory Service console. In my example, AWS Managed Microsoft AD created the shared directory ID d-90673f8d56 in the Consumer Example account:
     

    Figure 5: Confirmation notification about successful sharing

    Figure 5: Confirmation notification about successful sharing

    You can see the shared directory details in your directory consumer account by opening the AWS Management Console, choosing Directory Service, selecting the Directories shared with me option in the left menu, and then choosing the appropriate Shared directory ID link:
     

    Figure 6: Shared account details example

    Figure 6: Shared account details example
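
If you want to script directory sharing instead of using the console, the same operation is available from the AWS CLI. This is a sketch with placeholder IDs; --share-method accepts ORGANIZATIONS or HANDSHAKE, matching the two sharing methods described above.

    # Sketch: share the directory with a consumer account (placeholder directory and account IDs)
    aws ds share-directory \
        --directory-id d-1234567890 \
        --share-method ORGANIZATIONS \
        --share-target Id=111122223333,Type=ACCOUNT
    # List the shared directories created in consumer accounts
    aws ds describe-shared-directories --owner-directory-id d-1234567890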

Step 4: Launch Amazon EC2 instances and seamlessly domain join to the directory

Now that you’ve established the networking between your VPCs and shared the directory, you’re ready to launch EC2 instances in your directory consumer VPCs and seamlessly domain join to your directory. In my example, I use the Amazon EC2 console but you can also use AWS Systems Manager.

Follow the prompts of the Amazon EC2 launch instance wizard to select a Windows Server instance type. When you reach Step 3: Configure Instance Details, select the shared directory that locates your domain in the directory owner account. (I’ve chosen d-926726739b, which will locate the domain example.com.) Then select the EC2DomainJoin IAM role. Choose the Review and Launch button, and then the Launch button on the following screen.
 

Figure 7: The "Review and Launch" button

Figure 7: The “Review and Launch” button

Now that you’ve joined your Amazon EC2 instance to the domain, you can log into your instance using a Remote Desktop Protocol (RDP) client with the credentials from your AD user account.

You can then install and run AD-aware workloads such as Microsoft SharePoint on the instance, and the application will use your directory. To launch your second instance, just repeat Step 4: Launch Amazon EC2 instances and seamlessly domain join to the directory, selecting VPC-2 instead of VPC-1. This makes it easier and quicker for you to deploy and manage EC2 instances using the credentials from a single AWS Managed Microsoft AD directory across multiple accounts and VPCs.
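
If you’d rather automate the domain join with AWS Systems Manager instead of the launch wizard, you can associate the AWS-JoinDirectoryServiceDomain document with the instance after launch. The following is a sketch only; the directory ID, domain name, DNS addresses, and instance ID are placeholders, and the instance still needs an instance profile that allows it to communicate with Systems Manager.

    # Sketch: seamless domain join through a Systems Manager association (placeholder values)
    aws ssm create-association \
        --name AWS-JoinDirectoryServiceDomain \
        --parameters '{"directoryId":["d-926726739b"],"directoryName":["example.com"],"dnsIpAddresses":["172.31.0.10","172.31.0.11"]}' \
        --targets "Key=instanceids,Values=i-0abc1234example5678"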

Summary

In this blog post, I demonstrate how to seamlessly domain join Amazon EC2 instances from multiple accounts and VPCs to a single AWS Managed Microsoft AD directory. By sharing the directory with multiple accounts, you can simplify the management and deployment of directory-aware workloads on Amazon EC2 instances. This eliminates the need to manually domain join the instances or create one directory per account and VPC. In addition, with AWS Managed Microsoft AD and AWS Systems Manager, you can automate your Amazon EC2 deployments and seamlessly domain join to your single directory from any account and VPC without the need to write PowerShell code using AWS Command Line Interface or application programming interfaces.

To learn more about AWS Directory Service, see the AWS Directory Service home page. If you have questions, post them on the Directory Service forum.

Want more AWS Security news? Follow us on Twitter.

Peter Pereira

Peter is a Senior Technical Product Manager working on AWS Directory Service. He enjoys the customer obsession culture at Amazon because it relates with his previous experience of managing IT in multiple industries, including engineering, manufacturing, and education. Outside work he is the “Dad Master Grill” and loves to spend time with his family. He holds an MBA from BYU and an undergraduate degree from the University of State of Santa Catarina.

How to migrate your on-premises domain to AWS Managed Microsoft AD using ADMT

Post Syndicated from Danny Jenkins original https://aws.amazon.com/blogs/security/how-to-migrate-your-on-premises-domain-to-aws-managed-microsoft-ad-using-admt/

Customers often ask me how to migrate their on-premises Active Directory (AD) domain to AWS so they can be free of the operational management of their AD infrastructure. Frequently they are unsure how to make the migration easy. A common approach using the CSVDE utility doesn’t migrate attributes such as user passwords. This makes migration difficult and necessitates manual effort for a large part of the migration that can cause operational and security challenges when migrating to a new directory. So what’s changed?

You can now use the Active Directory Migration Toolkit (ADMT) along with the Password Export Service (PES) to migrate your self-managed AD to AWS Directory Service for Microsoft Active Directory, also known as AWS Managed Microsoft AD. This enables you to migrate AD objects and encrypted passwords for your users more easily.

AWS Managed Microsoft AD is a managed service built on actual Microsoft Active Directory. AWS provides operational management of the domain controllers, and you use standard AD tools to administer users, groups, and computers. AWS Managed Microsoft AD enables you to take advantage of built-in Active Directory features, such as Group Policy, trusts, and single sign-on, and helps make it easy to migrate AD-dependent workloads into the AWS Cloud. With AWS Managed Microsoft AD, you can join Amazon EC2 and Amazon RDS for SQL Server instances to a domain, and use AWS enterprise IT applications, such as Amazon WorkSpaces and AWS SSO, with Active Directory users and groups.

In this blog, I will show you how to migrate your existing AD objects to AWS Managed Microsoft AD. The source of the objects can be your self-managed AD running on EC2, on-premises, co-located, or even another cloud provider. I will show how to use ADMT and PES to migrate objects including users (and their passwords), groups, and computers.

This blog post assumes you are familiar with AD and with how to use the Remote Desktop Protocol client to sign in to and use EC2 Windows instances.

Background

In this post I will migrate user and computer objects, as well as passwords, to a new AWS Managed Microsoft AD directory. The source will be an on-premises domain.

This example migration will be for a fairly simple use case. Large customers with complex source domains or forests may have more complex processes involved to map users, groups, and computers to the single OU structure of AWS Managed Microsoft AD. For example, you may want to migrate an OU at a time. Customers with single domain forests may be able to migrate in fewer steps. Similarly, the options you might select in ADMT will vary based on what you are trying to accomplish.

To perform the migration, I will use the admin user account from my AWS Managed Microsoft AD. AWS creates the admin user account and delegates administrative permissions to the account for an organizational unit (OU) in the AWS Managed Microsoft AD domain. This account has most of the permissions required to manage your domain, and all the permissions required to complete this migration.

In this example, I have a Source domain called source.local that’s running in a 10.0.0.0/16 network range, and I want to migrate my users, groups, and computers to a destination domain in AWS Managed Microsoft AD called destination.local that’s running in a network range of 192.168.0.0/16.

To migrate users from source.local to destination.local, I need a migration computer that I join to the destination.local domain on which I will run ADMT. I also use this machine to perform administrative tasks on my AWS Managed Microsoft AD. As a prerequisite for ADMT, I must install Microsoft SQL Express 2016 on the migration computer. I also need an administrative account that has permissions in both the source and destination AD domains. To do this, I will use an AD trust and will add my AWS Managed Microsoft AD admin account from destination.local to my source.local domain. Next I will install ADMT on the migration computer, and will run the PES on one of the source.local domain controllers. Finally, I will migrate the users and computers.

For this example, I have a handful of users, groups, and computers, shown in the source domain in these screen shots, that I will migrate:
 

Figure 1: Example source users

Figure 1: Example source users

 

Figure 2: Example client computers

Figure 2: Example client computers

In the remainder of this blog, I will show you how to do the migration in 5 main steps:

  1. Prepare the forests, migration computer, and administrative account.
  2. Install SQL Express and ADMT on the migration computer.
  3. Configure ADMT and PES.
  4. Migrate users and groups.
  5. Migrate computers.

Step 1: Prepare the forests, migration computer, and administrative account

To migrate users and passwords from the source domain to AWS Managed Microsoft AD, you must have a 2-way forest trust. The trust from the source domain to AWS Managed Microsoft AD enables you to add the admin account from the AWS Managed Microsoft AD to the source domain. This is necessary so you can grant the AWS Managed Microsoft AD admin account permissions in your source AD directory so it can read the attributes to migrate. I’ve already created a two-way forest trust between these domains. You should do the same by following this guide. Once your trust has been created, it should show up in the AWS console as Verified.
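
On the AWS side, the trust can also be created and checked from the CLI. This is a sketch only; the directory ID, trust password, and conditional forwarder addresses are placeholders, and you still need to configure the matching trust in the source.local domain.

    # Sketch: create the AWS side of the two-way forest trust (placeholder values)
    aws ds create-trust \
        --directory-id d-1234567890 \
        --remote-domain-name source.local \
        --trust-password '<TrustPassword>' \
        --trust-direction Two-Way \
        --trust-type Forest \
        --conditional-forwarder-ip-addrs 10.0.0.10 10.0.0.11
    # Confirm the trust state shows as Verified
    aws ds describe-trusts --directory-id d-1234567890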

The ADMT tool should be installed on a computer in the destination domain (destination.local) that isn’t a domain controller. For this, I will launch an EC2 instance in the same VPC as my domain controller and add it to the destination.local domain using the EC2 seamless domain join feature. This will act as my ADMT transfer machine.

  1. Launch a Microsoft Windows 2012 R2 instance.
  2. Complete a domain join to the target domain destination.local. You can complete this manually, or alternatively you can use AWS Systems Manager to complete a seamless domain join as covered here.
  3. Sign into the instance using RDP and use Active Directory Users and Computers (ADUC) to add the AWS Managed Microsoft AD admin user from the destination.local domain to the source.local domain’s built-in administrators group (you will not be able to add the admin user as a domain admin). For information on how to set up this instance to use ADUC, please see this documentation.
     
    Figure 3: The "Administrators Properties" dialog box

    Figure 3: The “Administrators Properties” dialog box

Step 2: Install SQL Express and ADMT on the migration computer

Next, I need to install SQL Express and ADMT on the migration computer by following these steps.

  1. Install Microsoft SQL Express 2016 on this computer with a default install.
  2. Download ADMT version 3.2 from the Microsoft website.
  3. Once it has downloaded, run the installer. When setting up the tool, on the Database Selection page of the wizard, for Database (Server\Instance), type the name of the local Microsoft SQL Express instance you installed previously for ADMT to use.
     
    Figure 4: Specify the "Database (Server\Instance)"

    Figure 4: Specify the “Database (Server\Instance)”

  4. On the Database Import page of the wizard, select No, do not import data from an existing database (Default).
     
    Figure 5: The "Database Import" dialog box

    Figure 5: The “Database Import” dialog box

  5. Complete the rest of the installation using all of the default options.

Step 3: Configure ADMT and PES

I’ll use PES to take care of encrypted password synchronization, but before I configure that, I need to create an encryption key that will be used during this process to encrypt the password migration.

  1. On the ADMT transfer machine, open an elevated Command Prompt and use the following format to create the encryption key.
     
    admt key /option:create /sourcedomain:<SourceDomain> /keyfile:<KeyFilePath> /keypassword:{<password>|*}
     
    Here’s an example:
     
    admt key /option:create /sourcedomain:source.local /keyfile:c:\ /keypassword:password123

    Note: If you get an error stating that the command is not found, close and reopen Command Prompt to refresh the path locations to the ADMT executable, and then try again.

  2. Now, I can download and start the install for the Password Export Server.
  3. Start the install and, in the ADMT Password Migration DLL Setup window, browse to the encryption file you created in the previous step.
  4. When prompted, enter the password used in the ADMT encryption command.
  5. Run PES using the local system account. Note that this will prompt a restart of the domain controller you’re installing PES on.
  6. Once the domain controller has rebooted, open services.msc and start the Password Export Server Service, which is currently set to Manual. You might choose to set this to automatic if it’s likely your DC will be rebooted again before the end of your migration.
     
    Figure 6: Start the Password Export Server Service

    Figure 6: Start the Password Export Server Service

  7. You can now open the Active Directory Migration Tool: Control Panel > System and Security > Administrative Tools > Active Directory Migration Tool.
  8. Right-click Active Directory Migration Tool to see the migration options:
     
    Figure 7: List of migration options

    Figure 7: List of migration options

Step 4: Migrate users and groups

  1. In the Domain Selection page, select or type the Source and Target domains, and then select Next.
  2. On the User Selection page, select the users to migrate. You can use an include file if you have a large domain. Select Next.
  3. On the Organizational Unit Selection page, select the destination OU that you want to migrate your users across to, and then select Next. AWS Managed Microsoft AD gives you a managed OU where you can create your OU tree structure.

    In this example, I will place them in Users OU:
     
    LDAP://destination.local/OU=Users,OU=destination,DC=destination,DC=local
     

  4. On the Password Options page, select Migrate passwords, and then select Next. This will contact PES running on the source domain controller.
  5. On the Account Transitions Page, decide how to handle the migration of user objects. In this example, I’m going to replicate the state from the source domain. Migrating SID history is beneficial when you’re doing long, staged migrations where users may need to access resources in the source and destination domain before migration is complete. At this time, AWS Managed Microsoft AD doesn’t support migrating user SIDs. I select Target same as source, and then select Next. Again, what you choose to do might be different.
     
    Figure 8: The "Account Transition Options" dialog

    Figure 8: The “Account Transition Options” dialog

  6. Now, let’s customize the transfer. The following screen shot shows the commonly selected options on the User Options page of the User Account Migration Wizard:
     
    Figure 9: Common user options, including "Update user rights," "Migrate associated user groups," and "Fix users' group memberships"

    Figure 9: Common user options

It’s likely you’ll have more than one migration pass, so choosing how you handle existing objects is important. This will be a single run for us, but the default behavior is to not migrate if the object already exists (see the image of the Conflict Management page below). If you’re running multiple passes, you’ll want to look at options that involve merging conflicting objects. The method you select will depend on your use case. If you don’t know where to start, read this article.
 

Figure 10: The "Conflict Management" dialog box with the "Do not migrate source object..." option selected

Figure 10: The “Conflict Management” dialog box

In my example, you can see my 3 users, and any groups they were members of, have been migrated.
 

Figure 11: The "Migration Progress" window

Figure 11: The “Migration Progress” window

We can verify this by checking the users exist in our destination.local domain:
 

Figure 12: Checking the users exist in the destination.local domain

Figure 12: Checking the users exist in the destination.local domain

Step 5: Migrate computers

Now, I’ll move on to computer objects.

  1. Open the Active Directory Migration Tool: Control Panel > System and Security > Administrative Tools > Active Directory Migration Tool.
  2. Right-click Active Directory Migration Tool and select Computer Migration Wizard.
  3. Select the computers you want to migrate to the new domain. I’ll select four computers for migration.
     
    Figure 13: Four computers that will be migrated

    Figure 13: Four computers that will be migrated

  4. On the Translate Objects page, select which access controls you want to reapply during the migration, and then select Next.
     
    Figure 14: The "Translate Objects" dialog box

    Figure 14: The “Translate Objects” dialog box

    The migration process will quickly show completed, but we need to make sure the entire process worked.

  5. To verify the migration was successful, select Close, and the migration tool will open a new window that has a link to the migration log. Check the log file to see that it has started the process of migrating these four computers:


    2017-08-11 04:09:01 The Active Directory Migration Tool Agent will be installed on WIN-56SQFFFJCR1.source.local

    2017-08-11 04:09:01 The Active Directory Migration Tool Agent will be installed on WIN-IG2V2NAN1MU.source.local

    2017-08-11 04:09:01 The Active Directory Migration Tool Agent will be installed on WIN-QKQEJHUEV27.source.local

    2017-08-11 04:09:01 The Active Directory Migration Tool Agent will be installed on WIN-SE98KE4Q9CR.source.local

    If the admin user doesn’t have access to the C$ or admin$ share on the computer in the source domain, then installation of the agent will fail as shown here:


    2017-08-11 04:09:29 ERR2:7006 Failed to install agent on \\WIN-IG2V2NAN1MU.source.local, rc=5 Access is denied.

    Once the agent is installed, it will perform a domain disjoin from source.local and a domain join to destination.local. The log file will update when this has been successful:


    2017-08-11 04:13:29 Post-check passed on the computer ‘WIN-SE98KE4Q9CR.source.local’. The new computer name is ‘WIN-SE98KE4Q9CR.destination.local’.

    2017-08-11 04:13:29 Post-check passed on the computer ‘WIN-QKQEJHUEV27.source.local’. The new computer name is ‘WIN-QKQEJHUEV27.destination.local’.

    2017-08-11 04:13:29 Post-check passed on the computer ‘WIN-56SQFFFJCR1.source.local’. The new computer name is ‘WIN-56SQFFFJCR1.destination.local’.

    You can then view the new computer objects in the destination domain.

  6. Log in to one of the old source.local computers and, by looking at the computer’s System Properties, confirm that the computer is now a member of the new destination.local domain.
     
    Figure 15: Confirm the computer is member of the destination.local domain

    Figure 15: Confirm the computer is member of the destination.local domain

Summary

In this simple example, I showed how to migrate users and their passwords, groups, and computer objects from an on-premises deployment of Active Directory to AWS Managed Microsoft AD. We created a management instance on which we ran SQL Express and ADMT, created a forest trust to grant an account the permissions needed to use ADMT to move users, configured ADMT and the PES tool, and then stepped through the migration using ADMT.

The ADMT tool gives us a great way to migrate to AWS Managed Microsoft AD: it allows powerful customization of the migration, and it handles passwords more securely through encrypted password synchronization. You may need to do additional investigation and planning if the complexity of your environment requires a different approach for some of these steps.

If you have feedback about this post, submit comments in the Comments section below. If you have questions about this post, start a new thread on the AWS Directory service forum or contact AWS Support.

Want more AWS Security news? Follow us on Twitter.

How to centralize DNS management in a multi-account environment

Post Syndicated from Mahmoud Matouk original https://aws.amazon.com/blogs/security/how-to-centralize-dns-management-in-a-multi-account-environment/

In a multi-account environment where you require connectivity between accounts, and perhaps connectivity between cloud and on-premises workloads, the demand for a robust Domain Name Service (DNS) that’s capable of name resolution across all connected environments will be high.

The most common solution is to implement local DNS in each account and use conditional forwarders for DNS resolutions outside of this account. While this solution might be efficient for a single-account environment, it becomes complex in a multi-account environment.

In this post, I will provide a solution to implement central DNS for multiple accounts. This solution reduces the number of DNS servers and forwarders needed to implement cross-account domain resolution. I will show you how to configure this solution in four steps:

  1. Set up your Central DNS account.
  2. Set up each participating account.
  3. Create Route53 associations.
  4. Configure on-premises DNS (if applicable).

Solution overview

In this solution, you use AWS Directory Service for Microsoft Active Directory (AWS Managed Microsoft AD) as a DNS service in a dedicated account in a Virtual Private Cloud (DNS-VPC).

The DNS service included in AWS Managed Microsoft AD uses conditional forwarders to forward domain resolution to either Amazon Route 53 (for domains in the awscloud.com zone) or to on-premises DNS servers (for domains in the example.com zone). You’ll use AWS Managed Microsoft AD as the primary DNS server for other application accounts in the multi-account environment (participating accounts).

A participating account is any application account that hosts a VPC and uses the centralized AWS Managed Microsoft AD as the primary DNS server for that VPC. Each participating account has a private hosted zone with a unique zone name to represent this account (for example, business_unit.awscloud.com).

You associate the DNS-VPC with the unique hosted zone in each of the participating accounts. This allows AWS Managed Microsoft AD to use Route 53 to resolve all registered domains in private hosted zones in participating accounts.

The following diagram shows how the various services work together:
 

Diagram showing the relationship between all the various services

Figure 1: Diagram showing the relationship between all the various services

 

In this diagram, all VPCs in participating accounts use Dynamic Host Configuration Protocol (DHCP) option sets. The option sets configure EC2 instances to use the centralized AWS Managed Microsoft AD in DNS-VPC as their default DNS Server. You also configure AWS Managed Microsoft AD to use conditional forwarders to send domain queries to Route53 or on-premises DNS servers based on query zone. For domain resolution across accounts to work, we associate DNS-VPC with each hosted zone in participating accounts.

If, for example, server.pa1.awscloud.com needs to resolve addresses in the pa3.awscloud.com domain, the sequence shown in the following diagram happens:
 

How domain resolution across accounts works

Figure 2: How domain resolution across accounts works

 

  • 1.1: server.pa1.awscloud.com sends domain name lookup to default DNS server for the name server.pa3.awscloud.com. The request is forwarded to the DNS server defined in the DHCP option set (AWS Managed Microsoft AD in DNS-VPC).
  • 1.2: AWS Managed Microsoft AD forwards name resolution to Route53 because it’s in the awscloud.com zone.
  • 1.3: Route53 resolves the name to the IP address of server.pa3.awscloud.com because DNS-VPC is associated with the private hosted zone pa3.awscloud.com.

Similarly, if server.example.com needs to resolve server.pa3.awscloud.com, the following happens:

  • 2.1: server.example.com sends a domain name lookup for server.pa3.awscloud.com to its on-premises DNS server.
  • 2.2: The on-premises DNS server, using a conditional forwarder, forwards the domain lookup to AWS Managed Microsoft AD in DNS-VPC.
  • 1.2: AWS Managed Microsoft AD forwards name resolution to Route53 because it’s in the awscloud.com zone.
  • 1.3: Route53 resolves the name to the IP address of server.pa3.awscloud.com because DNS-VPC is associated with the private hosted zone pa3.awscloud.com.

Step 1: Set up a centralized DNS account

In previous AWS Security Blog posts, Drew Dennis covered a couple of options for establishing DNS resolution between on-premises networks and Amazon VPC. In this post, he showed how you can use AWS Managed Microsoft AD (provisioned with AWS Directory Service) to provide DNS resolution with forwarding capabilities.

To set up a centralized DNS account, you can follow the same steps in Drew’s post to create AWS Managed Microsoft AD and configure the forwarders to send DNS queries for awscloud.com to the default, VPC-provided DNS and to forward example.com queries to the on-premises DNS server.

Here are a few considerations while setting up central DNS:

  • The VPC that hosts AWS Managed Microsoft AD (DNS-VPC) will be associated with all private hosted zones in participating accounts.
  • To be able to resolve domain names across AWS and on-premises, connectivity through Direct Connect or VPN must be in place.

Step 2: Set up participating accounts

The steps I suggest in this section should be applied individually in each application account that’s participating in central DNS resolution.

  1. Create the VPC(s) that will host your resources in the participating account.
  2. Create VPC Peering between local VPC(s) in each participating account and DNS-VPC.
  3. Create a private hosted zone in Route 53. Hosted zone domain names must be unique across all accounts. In the diagram above, we used pa1.awscloud.com / pa2.awscloud.com / pa3.awscloud.com. You could also use a combination of environment and business unit: for example, you could use pa1.dev.awscloud.com to achieve uniqueness.
  4. Associate VPC(s) in each participating account with the local private hosted zone.

The next step is to change the default DNS servers on each VPC by using a DHCP option set (a CLI sketch covering the hosted zone and DHCP options follows these steps):

  1. Follow these steps to create a new DHCP option set. Make sure that, in the DNS Servers field, you enter the private IP addresses of the two AWS Managed Microsoft AD domain controllers that were created in DNS-VPC:
     
    The "Create DHCP options set" dialog box

    Figure 3: The “Create DHCP options set” dialog box

     

  2. Follow these steps to assign the DHCP option set to your VPC(s) in the participating account.
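
If you prefer to script the participating-account setup, the following AWS CLI sketch creates the private hosted zone and points the VPC at the AWS Managed Microsoft AD DNS addresses in DNS-VPC. The zone name matches the pa1 example above; the region, VPC ID, DHCP options ID, and DNS IP addresses are placeholders.

    # Sketch: create the private hosted zone for this participating account (placeholder values)
    aws route53 create-hosted-zone \
        --name pa1.awscloud.com \
        --vpc VPCRegion=us-east-1,VPCId=vpc-0pa1example \
        --caller-reference pa1-zone-example-001
    # Point the VPC at the AWS Managed Microsoft AD DNS servers in DNS-VPC
    aws ec2 create-dhcp-options \
        --dhcp-configurations "Key=domain-name-servers,Values=10.0.0.10,10.0.0.11"
    aws ec2 associate-dhcp-options --dhcp-options-id dopt-0abc1234 --vpc-id vpc-0pa1example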

Step 3: Associate DNS-VPC with private hosted zones in each participating account

The next steps will associate DNS-VPC with the private hosted zone in each participating account. This allows instances in DNS-VPC to resolve domain records created in these hosted zones. If you need them, here are more details on associating a private hosted zone with a VPC in a different account.

  1. In each participating account, create the authorization using the private hosted zone ID from the previous step, the region, and the VPC ID that you want to associate (DNS-VPC).
     
    aws route53 create-vpc-association-authorization --hosted-zone-id <hosted-zone-id> --vpc VPCRegion=<region>,VPCId=<vpc-id>
     
  2. In the centralized DNS account, associate DNS-VPC with the hosted zone in each participating account.
     
    aws route53 associate-vpc-with-hosted-zone --hosted-zone-id <hosted-zone-id> --vpc VPCRegion=<region>,VPCId=<vpc-id>
     

After completing these steps, AWS Managed Microsoft AD in the centralized DNS account should be able to resolve domain records in the private hosted zone in each participating account.

Step 4: Setting up on-premises DNS servers

This step is necessary if you would like to resolve AWS private domains from on-premises servers. The task comes down to configuring on-premises forwarders to forward DNS queries to AWS Managed Microsoft AD in DNS-VPC for all domains in the awscloud.com zone.

The steps to implement conditional forwarders vary by DNS product. Follow your product’s documentation to complete this configuration.

Summary

I introduced a simplified solution to implement central DNS resolution in a multi-account environment that can also be extended to support DNS resolution between on-premises resources and AWS. This can help reduce operational effort and the number of resources needed to implement cross-account domain resolution.

If you have feedback about this post, submit comments in the Comments section below. If you have questions about this post, start a new thread on the AWS Directory Service forum or contact AWS Support.

Want more AWS Security news? Follow us on Twitter.

AWS Achieves Spain’s ENS High Certification Across 29 Services

Post Syndicated from Oliver Bell original https://aws.amazon.com/blogs/security/aws-achieves-spains-ens-high-certification-across-29-services/

AWS has achieved Spain’s Esquema Nacional de Seguridad (ENS) High certification across 29 services. To successfully achieve the ENS High Standard, BDO España conducted an independent audit and attested that AWS meets confidentiality, integrity, and availability standards. This provides the assurance needed by Spanish Public Sector organizations wanting to build secure applications and services on AWS.

The National Security Framework, regulated under Royal Decree 3/2010, was developed through close collaboration between ENAC (Entidad Nacional de Acreditación), the Ministry of Finance and Public Administration, the CCN (National Cryptologic Centre), and other administrative bodies.

The following AWS Services are ENS High accredited across our Dublin and Frankfurt Regions:

  • Amazon API Gateway
  • Amazon DynamoDB
  • Amazon Elastic Container Service
  • Amazon Elastic Block Store
  • Amazon Elastic Compute Cloud
  • Amazon Elastic File System
  • Amazon Elastic MapReduce
  • Amazon ElastiCache
  • Amazon Glacier
  • Amazon Redshift
  • Amazon Relational Database Service
  • Amazon Simple Queue Service
  • Amazon Simple Storage Service
  • Amazon Simple Workflow Service
  • Amazon Virtual Private Cloud
  • Amazon WorkSpaces
  • AWS CloudFormation
  • AWS CloudTrail
  • AWS Config
  • AWS Database Migration Service
  • AWS Direct Connect
  • AWS Directory Service
  • AWS Elastic Beanstalk
  • AWS Key Management Service
  • AWS Lambda
  • AWS Snowball
  • AWS Storage Gateway
  • Elastic Load Balancing
  • VM Import/Export

How to Delegate Administration of Your AWS Managed Microsoft AD Directory to Your On-Premises Active Directory Users

Post Syndicated from Vijay Sharma original https://aws.amazon.com/blogs/security/how-to-delegate-administration-of-your-aws-managed-microsoft-ad-directory-to-your-on-premises-active-directory-users/

You can now enable your on-premises users to administer your AWS Directory Service for Microsoft Active Directory, also known as AWS Managed Microsoft AD. Using an Active Directory (AD) trust and the new AWS delegated AD security groups, you can grant administrative permissions to your on-premises users by managing group membership in your on-premises AD directory. This simplifies how you manage who can perform administration. It also makes it easier for your administrators because they can sign in to their existing workstation with their on-premises AD credentials to administer your AWS Managed Microsoft AD.

AWS created new domain local AD security groups (AWS delegated groups) in your AWS Managed Microsoft AD directory. Each AWS delegated group has unique AD administrative permissions. Users that are members of the new AWS delegated groups get permissions to perform administrative tasks, such as adding users, configuring fine-grained password policies, and enabling Microsoft enterprise Certificate Authority. Because the AWS delegated groups are domain local in scope, you can use them through an AD trust to your on-premises AD. This eliminates the requirement to create and use separate identities to administer your AWS Managed Microsoft AD. Instead, by adding selected on-premises users to the desired AWS delegated groups, you can grant your administrators some or all of the permissions. You can simplify this even further by adding on-premises AD security groups to the AWS delegated groups. This enables you to add and remove users from your on-premises AD security group so that they can manage administrative permissions in your AWS Managed Microsoft AD.

In this blog post, I will show you how to delegate permissions to your on-premises users to perform an administrative task–configuring fine-grained password policies–in your AWS Managed Microsoft AD directory. You can follow the steps in this post to delegate other administrative permissions, such as configuring group Managed Service Accounts and Kerberos constrained delegation, to your on-premises users.

Background

Until now, AWS Managed Microsoft AD delegated administrative permissions for your directory by creating AD security groups in your Organizational Unit (OU) and authorizing these AWS delegated groups for common administrative activities. The admin user in your directory created user accounts within your OU, and granted these users permissions to administer your directory by adding them to one or more of these AWS delegated groups.

However, if you used your AWS Managed Microsoft AD with a trust to an on-premises AD forest, you couldn’t add users from your on-premises directory to these AWS delegated groups. This is because AWS created the AWS delegated groups with global scope, which restricts adding users from another forest. This necessitated that you create different user accounts in AWS Managed Microsoft AD for the purpose of administration. As a result, AD administrators typically had to remember additional credentials for AWS Managed Microsoft AD.

To address this, AWS created new AWS delegated groups with domain local scope in a separate OU called AWS Delegated Groups. These new AWS delegated groups with domain local scope are more flexible and permit adding users and groups from other domains and forests. This allows your admin user to delegate your on-premises users and groups administrative permissions to your AWS Managed Microsoft AD directory.

Note: If you already have an existing AWS Managed Microsoft AD directory containing the original AWS delegated groups with global scope, AWS preserved the original AWS delegated groups in the event you are currently using them with identities in AWS Managed Microsoft AD. AWS recommends that you transition to use the new AWS delegated groups with domain local scope. All newly created AWS Managed Microsoft AD directories have the new AWS delegated groups with domain local scope only.

Now, I will show you the steps to delegate administrative permissions to your on-premises users and groups to configure fine-grained password policies in your AWS Managed Microsoft AD directory.

Prerequisites

For this post, I assume you are familiar with AD security groups and how security group scope rules work. I also assume you are familiar with AD trusts.

The instructions in this blog post require you to have the following components running:

  • An AWS Managed Microsoft AD directory.
  • A self-managed, on-premises AD directory.
  • A trust relationship between your AWS Managed Microsoft AD directory and your on-premises AD directory.
  • An Amazon EC2 for Windows Server management instance joined to your AWS Managed Microsoft AD directory.
  • An on-premises workstation joined to your on-premises AD directory.

Solution overview

I will now show you how to manage which on-premises users have delegated permissions to administer your directory by efficiently using on-premises AD security groups to manage these permissions. I will do this by:

  1. Adding on-premises groups to an AWS delegated group. In this step, you sign in to the management instance connected to your AWS Managed Microsoft AD directory as the admin user and add on-premises groups to AWS delegated groups.
  2. Administering your AWS Managed Microsoft AD directory as an on-premises user. In this step, you sign in to a workstation connected to your on-premises AD using your on-premises credentials and administer your AWS Managed Microsoft AD directory.

For the purpose of this blog, I already have an on-premises AD directory (in this case, on-premises.com). I also created an AWS Managed Microsoft AD directory (in this case, corp.example.com) that I use with Amazon RDS for SQL Server. To enable Integrated Windows Authentication to my on-premises.com domain, I established a one-way outgoing trust from my AWS Managed Microsoft AD directory to my on-premises AD directory. To administer my AWS Managed Microsoft AD, I created an Amazon EC2 for Windows Server instance (in this case, Cloud Management). I also have an on-premises workstation (in this case, On-premises Management), that is connected to my on-premises AD directory.

The following diagram represents the relationships between the on-premises AD and the AWS Managed Microsoft AD directory.

The left side represents the AWS Cloud containing AWS Managed Microsoft AD directory. I connected the directory to the on-premises AD directory via a 1-way forest trust relationship. When AWS created my AWS Managed Microsoft AD directory, AWS created a group called AWS Delegated Fine Grained Password Policy Administrators that has permissions to configure fine-grained password policies in AWS Managed Microsoft AD.

The right side of the diagram represents the on-premises AD directory. I created a global AD security group called On-premises fine grained password policy admins and I configured it so all members can manage fine grained password policies in my on-premises AD. I have two administrators in my company, John and Richard, who I added as members of On-premises fine grained password policy admins. I want to enable John and Richard to also manage fine grained password policies in my AWS Managed Microsoft AD.

While I could add John and Richard to the AWS Delegated Fine Grained Password Policy Administrators group individually, I want a more efficient way to delegate and remove permissions for on-premises users to manage fine grained password policies in my AWS Managed Microsoft AD. In fact, I want to assign permissions to the same people that manage password policies in my on-premises directory.

Diagram showing delegation of administrative permissions to on-premises users

To do this, I will:

  1. As admin user, add the On-premises fine grained password policy admins as member of the AWS Delegated Fine Grained Password Policy Administrators security group from my Cloud Management machine.
  2. Manage who can administer password policies in my AWS Managed Microsoft AD directory by adding and removing users as members of the On-premises fine grained password policy admins. Doing so enables me to perform all my delegation work in my on-premises directory without the need to use a remote desktop protocol (RDP) session to my Cloud Management instance. In this case, Richard, who is a member of On-premises fine grained password policy admins group can now administer AWS Managed Microsoft AD directory from On-premises Management workstation.

Although I’m showing a specific case using fine grained password policy delegation, you can do this with any of the new AWS delegated groups and your on-premises groups and users.

Let’s get started.

Step 1 – Add on-premises groups to AWS delegated groups

In this step, open an RDP session to the Cloud Management instance and sign in as the admin user in your AWS Managed Microsoft AD directory. Then, add your users and groups from your on-premises AD to AWS delegated groups in AWS Managed Microsoft AD directory. In this example, I do the following:

  1. Sign in to the Cloud Management instance with the user name admin and the password that you set for the admin user when you created your directory.
  2. Open the Microsoft Windows Server Manager and navigate to Tools > Active Directory Users and Computers.
  3. Switch to the tree view and navigate to corp.example.com > AWS Delegated Groups. Right-click AWS Delegated Fine Grained Password Policy Administrators and select Properties.
  4. In the AWS Delegated Fine Grained Password Policy window, switch to Members tab and choose Add.
  5. In the Select Users, Contacts, Computers, Service Accounts, or Groups window, choose Locations.
  6. In the Locations window, select on-premises.com domain and choose OK.
  7. In the Enter the object names to select box, enter on-premises fine grained password policy admins and choose Check Names.
  8. Because I have a 1-way trust from AWS Managed Microsoft AD to my on-premises AD, Windows prompts me to enter credentials for an on-premises user account that has permissions to complete the search. If I had a 2-way trust and the admin account in my AWS Managed Microsoft AD had permissions to read my on-premises directory, Windows would not prompt me. In the Windows Security window, enter the credentials for an account with permissions for on-premises.com and choose OK.
  9. Click OK to add On-premises fine grained password policy admins group as a member of the AWS Delegated Fine Grained Password Policy Administrators group in your AWS Managed Microsoft AD directory.

At this point, any user that is a member of On-premises fine grained password policy admins group has permissions to manage password policies in your AWS Managed Microsoft AD directory.

Step 2 – Administer your AWS Managed Microsoft AD as on-premises user

Any member of the on-premises group(s) that you added to an AWS delegated group inherited the permissions of the AWS delegated group.

In this example, Richard signs in to the On-premises Management instance. Because Richard inherited permissions from the AWS Delegated Fine Grained Password Policy Administrators group, he can now administer fine grained password policies in the AWS Managed Microsoft AD directory using his on-premises credentials.

  1. Sign in to the On-premises Management instance as Richard.
  2. Open the Microsoft Windows Server Manager and navigate to Tools > Active Directory Users and Computers.
  3. Switch to the tree view, right-click Active Directory Users and Computers, and then select Change Domain.
  4. In the Change Domain window, enter corp.example.com, and then choose OK.
  5. You’ll be connected to your AWS Managed Microsoft AD domain:

Richard can now administer the password policies. Because John is also a member of the AWS delegated group, John can also perform password policy administration the same way.

In the future, if Richard moves to another division within the company and you hire Judy as a replacement for Richard, you can simply remove Richard from the On-premises fine grained password policy admins group and add Judy to this group. Richard will no longer have administrative permissions, while Judy can now administer password policies for your AWS Managed Microsoft AD directory.

Summary

We’ve tried to make it easier for you to administer your AWS Managed Microsoft AD directory by creating AWS delegated groups with domain local scope. You can add your on-premises AD groups to the AWS delegated groups. You can then control who can administer your directory by managing group membership in your on-premises AD directory. Your administrators can sign in to their existing on-premises workstations using their on-premises credentials and administer your AWS Managed Microsoft AD directory. I encourage you to explore the new AWS delegated security groups by using Active Directory Users and Computers from the management instance for your AWS Managed Microsoft AD. To learn more about AWS Directory Service, see the AWS Directory Service home page. If you have questions, please post them on the Directory Service forum. If you have comments about this post, submit them in the “Comments” section below.

 

EU Compliance Update: AWS’s 2017 C5 Assessment

Post Syndicated from Oliver Bell original https://aws.amazon.com/blogs/security/eu-compliance-update-awss-2017-c5-assessment/


AWS has completed its 2017 assessment against the Cloud Computing Compliance Controls Catalog (C5) information security and compliance program. Bundesamt für Sicherheit in der Informationstechnik (BSI)—Germany’s national cybersecurity authority—established C5 to define a reference standard for German cloud security requirements. With C5 (as well as with IT-Grundschutz), customers in German member states can use the work performed under this BSI audit to comply with stringent local requirements and operate secure workloads in the AWS Cloud.

Continuing our commitment to Germany and the AWS European Regions, AWS has added 16 services to this year’s scope:

The English version of the C5 report is available through AWS Artifact. The German version of the report will be available through AWS Artifact in the coming weeks.

– Oliver

How to Patch, Inspect, and Protect Microsoft Windows Workloads on AWS—Part 1

Post Syndicated from Koen van Blijderveen original https://aws.amazon.com/blogs/security/how-to-patch-inspect-and-protect-microsoft-windows-workloads-on-aws-part-1/

Most malware tries to compromise your systems by using a known vulnerability that the maker of the operating system has already patched. To help prevent malware from affecting your systems, two security best practices are to apply all operating system patches to your systems and actively monitor your systems for missing patches. In case you do need to recover from a malware attack, you should make regular backups of your data.

In today’s blog post (Part 1 of a two-part post), I show how to keep your Amazon EC2 instances that run Microsoft Windows up to date with the latest security patches by using Amazon EC2 Systems Manager. Tomorrow in Part 2, I show how to take regular snapshots of your data by using Amazon EBS Snapshot Scheduler and how to use Amazon Inspector to check if your EC2 instances running Microsoft Windows contain any common vulnerabilities and exposures (CVEs).

What you should know first

To follow along with the solution in this post, you need one or more EC2 instances. You may use existing instances or create new instances. For the blog post, I assume this is an EC2 for Microsoft Windows Server 2012 R2 instance installed from the Amazon Machine Images (AMIs). If you are not familiar with how to launch an EC2 instance, see Launching an Instance. I also assume you launched or will launch your instance in a private subnet. A private subnet is not directly accessible via the internet, and access to it requires either a VPN connection to your on-premises network or a jump host in a public subnet (a subnet with access to the internet). You must make sure that the EC2 instance can connect to the internet using a network address translation (NAT) instance or NAT gateway to communicate with Systems Manager and Amazon Inspector. The following diagram shows how you should structure your Amazon Virtual Private Cloud (VPC). You should also be familiar with Restoring an Amazon EBS Volume from a Snapshot and Attaching an Amazon EBS Volume to an Instance.

Later on, you will assign tasks to a maintenance window to patch your instances with Systems Manager. To do this, the AWS Identity and Access Management (IAM) user you are using for this post must have the iam:PassRole permission. This permission allows the IAM user to pass a role's permissions to the AWS service when assigning tasks. In this example, when you assign a task to a maintenance window, IAM passes your credentials to Systems Manager. This safeguard ensures that the user cannot use the creation of tasks to elevate their IAM privileges, because their own IAM privileges limit which tasks they can run against an EC2 instance. You should also authorize your IAM user to use EC2, Amazon Inspector, Amazon CloudWatch, and Systems Manager. You can achieve this by attaching the following AWS managed policies to the IAM user you are using for this example: AmazonInspectorFullAccess, AmazonEC2FullAccess, and AmazonSSMFullAccess.

Architectural overview

The following diagram illustrates the components of this solution’s architecture.

Diagram showing the components of this solution's architecture

For this blog post, Microsoft Windows EC2 is Amazon EC2 for Microsoft Windows Server 2012 R2 instances with attached Amazon Elastic Block Store (Amazon EBS) volumes, which are running in your VPC. These instances may be standalone Windows instances running your Windows workloads, or you may have joined them to an Active Directory domain controller. For instances joined to a domain, you can be using Active Directory running on an EC2 for Windows instance, or you can use AWS Directory Service for Microsoft Active Directory.

Amazon EC2 Systems Manager is a scalable tool for remote management of your EC2 instances. You will use the Systems Manager Run Command to install the Amazon Inspector agent. The agent enables EC2 instances to communicate with the Amazon Inspector service and run assessments, which I explain in detail later in this blog post. You also will create a Systems Manager association to keep your EC2 instances up to date with the latest security patches.

You can use the EBS Snapshot Scheduler to schedule automated snapshots at regular intervals. You will use it to set up regular snapshots of your Amazon EBS volumes. EBS Snapshot Scheduler is a prebuilt solution by AWS that you will deploy in your AWS account. With Amazon EBS snapshots, you pay only for the actual data you store. Snapshots save only the data that has changed since the previous snapshot, which minimizes your cost.

You will use Amazon Inspector to run security assessments on your EC2 for Windows Server instance. In this post, I show how to assess if your EC2 for Windows Server instance is vulnerable to any of the more than 50,000 CVEs registered with Amazon Inspector.

In today’s and tomorrow’s posts, I show you how to:

  1. Launch an EC2 instance with an IAM role, Amazon EBS volume, and tags that Systems Manager and Amazon Inspector will use.
  2. Configure Systems Manager to install the Amazon Inspector agent and patch your EC2 instances.
  3. Take EBS snapshots by using EBS Snapshot Scheduler to automate snapshots based on instance tags.
  4. Use Amazon Inspector to check if your EC2 instances running Microsoft Windows contain any common vulnerabilities and exposures (CVEs).

Step 1: Launch an EC2 instance

In this section, I show you how to launch your EC2 instances so that you can use Systems Manager with the instances and use instance tags with EBS Snapshot Scheduler to automate snapshots. This requires three things:

  • Create an IAM role for Systems Manager before launching your EC2 instance.
  • Launch your EC2 instance with Amazon EBS and the IAM role for Systems Manager.
  • Add tags to instances so that you can automate policies for which instances you take snapshots of and when.

Create an IAM role for Systems Manager

Before launching your EC2 instance, I recommend that you first create an IAM role for Systems Manager, which you will use to update the EC2 instance you will launch. AWS already provides a preconfigured managed policy that you can attach to your new role: AmazonEC2RoleforSSM.

  1. Sign in to the IAM console and choose Roles in the navigation pane. Choose Create new role.
    Screenshot of choosing "Create role"
  2. In the role-creation workflow, choose AWS service > EC2 > EC2 to create a role for an EC2 instance.
    Screenshot of creating a role for an EC2 instance
  3. Choose the AmazonEC2RoleforSSM policy to attach it to the new role you are creating.
    Screenshot of attaching the AmazonEC2RoleforSSM policy to the new role you are creating
  4. Give the role a meaningful name (I chose EC2SSM) and description, and choose Create role.
    Screenshot of giving the role a name and description

Launch your EC2 instance

To follow along, you need an EC2 instance that is running Microsoft Windows Server 2012 R2 and that has an Amazon EBS volume attached. You can use any existing instance you may have or create a new instance.

When launching your new EC2 instance, be sure that:

  • The operating system is Microsoft Windows Server 2012 R2.
  • You attach at least one Amazon EBS volume to the EC2 instance.
  • You attach the newly created IAM role (EC2SSM).
  • The EC2 instance can connect to the internet through a network address translation (NAT) gateway or a NAT instance.
  • You create the tags shown in the following screenshot (you will use them later).

If you are using an already launched EC2 instance, you can attach the newly created role as described in Easily Replace or Attach an IAM Role to an Existing EC2 Instance by Using the EC2 Console.

Add tags

The final step of configuring your EC2 instances is to add tags. You will use these tags to configure Systems Manager in Step 2 of this blog post and to configure Amazon Inspector in Part 2. For this example, I add a tag key, Patch Group, and set the value to Windows Servers. I could have other groups of EC2 instances that I treat differently by having the same tag key but a different tag value. For example, I might have a collection of other servers with the Patch Group tag key with a value of IAS Servers.

Screenshot of adding tags

Note: You must wait a few minutes until the EC2 instance becomes available before you can proceed to the next section.

At this point, you now have at least one EC2 instance you can use to configure Systems Manager, use EBS Snapshot Scheduler, and use Amazon Inspector.

Note: If you have a large number of EC2 instances to tag, you may want to use the EC2 CreateTags API rather than manually apply tags to each instance.
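As a hedged example of that approach, the following AWS CLI command (which calls the CreateTags API) applies the Patch Group tag to two instances at once; the instance IDs are placeholders:

    aws ec2 create-tags --resources i-1234567890abcdef0 i-0598c7d356eba48d7 --tags Key="Patch Group",Value="Windows Servers"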

Step 2: Configure Systems Manager

In this section, I show you how to use Systems Manager to apply operating system patches to your EC2 instances, and how to manage patch compliance.

To start, I will provide some background information about Systems Manager. Then, I will cover how to:

  • Create the Systems Manager IAM role so that Systems Manager is able to perform patch operations.
  • Associate a Systems Manager patch baseline with your instance to define which patches Systems Manager should apply.
  • Define a maintenance window to make sure Systems Manager patches your instance when you tell it to.
  • Monitor patch compliance to verify the patch state of your instances.

Systems Manager is a collection of capabilities that helps you automate management tasks for AWS-hosted instances on EC2 and your on-premises servers. In this post, I use Systems Manager for two purposes: to run remote commands and apply operating system patches. To learn about the full capabilities of Systems Manager, see What Is Amazon EC2 Systems Manager?

Patch management is an important measure to prevent malware from infecting your systems. Most malware attacks look for vulnerabilities that are publicly known and, in most cases, already patched by the maker of the operating system. These publicly known vulnerabilities are well documented, which makes them easier for an attacker to exploit than discovering a new vulnerability.

Patches for these new vulnerabilities are available through Systems Manager within hours after Microsoft releases them. There are two prerequisites to use Systems Manager to apply operating system patches. First, you must attach the IAM role you created in the previous section, EC2SSM, to your EC2 instance. Second, you must install the Systems Manager agent on your EC2 instance. If you have used a recent Microsoft Windows Server 2012 R2 AMI published by AWS, Amazon has already installed the Systems Manager agent on your EC2 instance. You can confirm this by logging in to an EC2 instance and looking for Amazon SSM Agent under Programs and Features in Windows. To install the Systems Manager agent on an instance that does not have the agent preinstalled or if you want to use the Systems Manager agent on your on-premises servers, see the documentation about installing the Systems Manager agent. If you forgot to attach the newly created role when launching your EC2 instance or if you want to attach the role to already running EC2 instances, see Attach an AWS IAM Role to an Existing Amazon EC2 Instance by Using the AWS CLI or use the AWS Management Console.
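If you want to verify the agent from the instance itself, it typically runs as the AmazonSSMAgent Windows service, so a quick PowerShell check (a sketch that assumes a default agent installation) is:

    PS C:\> Get-Service AmazonSSMAgent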

To make sure your EC2 instance receives operating system patches from Systems Manager, you will use the default patch baseline provided and maintained by AWS, and you will define a maintenance window so that you control when your EC2 instances should receive patches. For the maintenance window to be able to run any tasks, you also must create a new role for Systems Manager. This role is a different kind of role than the one you created earlier: Systems Manager, not EC2, will assume this role. Earlier we created the EC2SSM role with the AmazonEC2RoleforSSM policy, which allows the Systems Manager agent on our instance to communicate with the Systems Manager service. Here we need a new role with the AmazonSSMMaintenanceWindowRole policy so that the Systems Manager service is able to execute commands on our instance.

Create the Systems Manager IAM role

To create the new IAM role for Systems Manager, follow the same procedure as in the previous section, but in Step 3, choose the AmazonSSMMaintenanceWindowRole policy instead of the previously selected AmazonEC2RoleforSSM policy.

Screenshot of creating the new IAM role for Systems Manager

Finish the wizard and give your new role a recognizable name. For example, I named my role MaintenanceWindowRole.

Screenshot of finishing the wizard and giving your new role a recognizable name

By default, only EC2 instances can assume this new role. You must update the trust policy to enable Systems Manager to assume this role.

To update the trust policy associated with this new role:

  1. Navigate to the IAM console and choose Roles in the navigation pane.
  2. Choose MaintenanceWindowRole and choose the Trust relationships tab. Then choose Edit trust relationship.
  3. Update the policy document by copying the following policy and pasting it in the Policy Document box. As you can see, I have added the ssm.amazonaws.com service to the list of allowed Principals that can assume this role. Choose Update Trust Policy.
    {
       "Version":"2012-10-17",
       "Statement":[
          {
             "Sid":"",
             "Effect":"Allow",
             "Principal":{
                "Service":[
                   "ec2.amazonaws.com",
                   "ssm.amazonaws.com"
               ]
             },
             "Action":"sts:AssumeRole"
          }
       ]
    }
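If you prefer the AWS CLI, a hedged equivalent of this console step (assuming you saved the preceding document as trust.json) is:

    aws iam update-assume-role-policy --role-name MaintenanceWindowRole --policy-document file://trust.json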

Associate a Systems Manager patch baseline with your instance

Next, you are going to associate a Systems Manager patch baseline with your EC2 instance. A patch baseline defines which patches Systems Manager should apply. You will use the default patch baseline that AWS manages and maintains. Before you can associate the patch baseline with your instance, though, you must determine if Systems Manager recognizes your EC2 instance.

Navigate to the EC2 console, scroll down to Systems Manager Shared Resources in the navigation pane, and choose Managed Instances. Your new EC2 instance should be available there. If your instance is missing from the list, verify the following:

  1. Go to the EC2 console and verify your instance is running.
  2. Select your instance and confirm you attached the Systems Manager IAM role, EC2SSM.
  3. Make sure that you deployed a NAT gateway in your public subnet to ensure your VPC reflects the diagram at the start of this post so that the Systems Manager agent can connect to the Systems Manager internet endpoint.
  4. Check the Systems Manager Agent logs for any errors.
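You can also confirm registration from the AWS CLI; as a hedged sketch, the following call lists the instances Systems Manager manages, including their ping status and agent version:

    aws ssm describe-instance-information --output table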

Now that you have confirmed that Systems Manager can manage your EC2 instance, it is time to associate the AWS maintained patch baseline with your EC2 instance:

  1. Choose Patch Baselines under Systems Manager Services in the navigation pane of the EC2 console.
  2. Choose the default patch baseline as highlighted in the following screenshot, and choose Modify Patch Groups in the Actions drop-down.
    Screenshot of choosing Modify Patch Groups in the Actions drop-down
  3. In the Patch group box, enter the same value you entered under the Patch Group tag of your EC2 instance in “Step 1: Launch an EC2 instance.” In this example, the value I enter is Windows Servers. Choose the check mark icon next to the patch group and choose Close.
    Screenshot of modifying the patch group

Define a maintenance window

Now that you have successfully set up a role and have associated a patch baseline with your EC2 instance, you will define a maintenance window so that you can control when your EC2 instances should receive patches. By creating multiple maintenance windows and assigning them to different patch groups, you can make sure your EC2 instances do not all reboot at the same time. The Patch Group resource tag you defined earlier will determine to which patch group an instance belongs.

To define a maintenance window (a CLI sketch of these console steps follows the procedure):

  1. Navigate to the EC2 console, scroll down to Systems Manager Shared Resources in the navigation pane, and choose Maintenance Windows. Choose Create a Maintenance Window.
    Screenshot of starting to create a maintenance window in the Systems Manager console
  2. Select the Cron schedule builder to define the schedule for the maintenance window. In the example in the following screenshot, the maintenance window will start every Saturday at 10:00 P.M. UTC.
  3. To specify when your maintenance window will end, specify the duration. In this example, the four-hour maintenance window will end on the following Sunday morning at 2:00 A.M. UTC (in other words, four hours after it started).
  4. Systems Manager completes all tasks that are in progress, even if the maintenance window ends. In my example, I am choosing to prevent new tasks from starting within one hour of the end of my maintenance window because I estimated my patch operations might take longer than one hour to complete. Confirm the creation of the maintenance window by choosing Create maintenance window.
    Screenshot of completing all boxes in the maintenance window creation process
  5. After creating the maintenance window, you must register the EC2 instance to the maintenance window so that Systems Manager knows which EC2 instance it should patch in this maintenance window. To do so, choose Register new targets on the Targets tab of your newly created maintenance window. You can register your targets by using the same Patch Group tag you used before to associate the EC2 instance with the AWS-provided patch baseline.
    Screenshot of registering new targets
  6. Assign a task to the maintenance window that will install the operating system patches on your EC2 instance:
    1. Open Maintenance Windows in the EC2 console, select your previously created maintenance window, choose the Tasks tab, and choose Register run command task from the Register new task drop-down.
    2. Choose the AWS-RunPatchBaseline document from the list of available documents.
    3. For Parameters:
      1. For Role, choose the role you created previously (called MaintenanceWindowRole).
      2. For Execute on, specify how many EC2 instances Systems Manager should patch at the same time. If you have a large number of EC2 instances and want to patch all EC2 instances within the defined time, make sure this number is not too low. For example, if you have 1,000 EC2 instances, a maintenance window of 4 hours, and 2 hours’ time for patching, make this number at least 500.
      3. For Stop after, specify after how many errors Systems Manager should stop.
      4. For Operation, choose Install to make sure to install the patches.
        Screenshot of stipulating maintenance window parameters
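For reference, the console steps above map roughly to the following AWS CLI calls. This is only a sketch: the window ID, window target ID, account ID, and role ARN are placeholders, and you should verify the parameters against the current CLI documentation before using them.

    # Create a four-hour window that starts every Saturday at 10:00 P.M. UTC, with a one-hour cutoff
    aws ssm create-maintenance-window --name "Windows-Servers-Patching" --schedule "cron(0 22 ? * SAT *)" --duration 4 --cutoff 1 --no-allow-unassociated-targets

    # Register targets by the same Patch Group tag used earlier
    aws ssm register-target-with-maintenance-window --window-id mw-0123456789abcdef0 --resource-type INSTANCE --targets "Key=tag:Patch Group,Values=Windows Servers"

    # Register the AWS-RunPatchBaseline Run Command task with the MaintenanceWindowRole service role
    aws ssm register-task-with-maintenance-window --window-id mw-0123456789abcdef0 --targets "Key=WindowTargetIds,Values=<window-target-id>" --task-arn AWS-RunPatchBaseline --service-role-arn arn:aws:iam::111122223333:role/MaintenanceWindowRole --task-type RUN_COMMAND --max-concurrency 500 --max-errors 10 --task-parameters '{"Operation":{"Values":["Install"]}}'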

Now, you must wait for the maintenance window to run at least once according to the schedule you defined earlier. Note that if you don’t want to wait, you can adjust the schedule to run sooner by choosing Edit maintenance window on the Maintenance Windows page of Systems Manager. If your maintenance window has expired, you can check the status of any maintenance tasks Systems Manager has performed on the Maintenance Windows page of Systems Manager and select your maintenance window.

Screenshot of the maintenance window successfully created

Monitor patch compliance

You also can see the overall patch compliance of all EC2 instances that are part of defined patch groups by choosing Patch Compliance under Systems Manager Services in the navigation pane of the EC2 console. You can filter by Patch Group to see how many EC2 instances within the selected patch group are up to date, how many EC2 instances are missing updates, and how many EC2 instances are in an error state.

Screenshot of monitoring patch compliance
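If you prefer the CLI, a hedged way to pull patch state details for a specific instance is:

    aws ssm describe-instance-patch-states --instance-ids i-1234567890abcdef0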

In this section, you have set everything up for patch management on your instance. Now you know how to patch your EC2 instance in a controlled manner and how to check if your EC2 instance is compliant with the patch baseline you have defined. Of course, I recommend that you apply these steps to all EC2 instances you manage.

Summary

In Part 1 of this blog post, I have shown how to configure EC2 instances for use with Systems Manager, EBS Snapshot Scheduler, and Amazon Inspector. I also have shown how to use Systems Manager to keep your Microsoft Windows–based EC2 instances up to date. In Part 2 of this blog post tomorrow, I will show how to take regular snapshots of your data by using EBS Snapshot Scheduler and how to use Amazon Inspector to check if your EC2 instances running Microsoft Windows contain any CVEs.

If you have comments about this post, submit them in the “Comments” section below. If you have questions about or issues implementing this solution, start a new thread on the EC2 forum or the Amazon Inspector forum, or contact AWS Support.

– Koen

How AWS Managed Microsoft AD Helps to Simplify the Deployment and Improve the Security of Active Directory–Integrated .NET Applications

Post Syndicated from Peter Pereira original https://aws.amazon.com/blogs/security/how-aws-managed-microsoft-ad-helps-to-simplify-the-deployment-and-improve-the-security-of-active-directory-integrated-net-applications/

Companies using .NET applications to access sensitive user information, such as employee salary, Social Security Number, and credit card information, need an easy and secure way to manage access for users and applications.

For example, let’s say that your company has a .NET payroll application. You want your Human Resources (HR) team to manage and update the payroll data for all the employees in your company. You also want your employees to be able to see their own payroll information in the application. To meet these requirements in a user-friendly and secure way, you want to manage access to the .NET application by using your existing Microsoft Active Directory identities. This enables you to provide users with single sign-on (SSO) access to the .NET application and to manage permissions using Active Directory groups. You also want the .NET application to authenticate itself to access the database, and to limit access to the data in the database based on the identity of the application user.

Microsoft Active Directory supports these requirements through group Managed Service Accounts (gMSAs) and Kerberos constrained delegation (KCD). AWS Directory Service for Microsoft Active Directory, also known as AWS Managed Microsoft AD, enables you to manage gMSAs and KCD through your administrative account, helping you to migrate and develop .NET applications that need these native Active Directory features.

In this blog post, I give an overview of how to use AWS Managed Microsoft AD to manage gMSAs and KCD and demonstrate how you can configure a gMSA and KCD in six steps for a .NET application:

  1. Create your AWS Managed Microsoft AD.
  2. Create your Amazon RDS for SQL Server database.
  3. Create a gMSA for your .NET application.
  4. Deploy your .NET application.
  5. Configure your .NET application to use the gMSA.
  6. Configure KCD for your .NET application.

Solution overview

The following diagram shows the components of a .NET application that uses Amazon RDS for SQL Server with a gMSA and KCD. The diagram also illustrates authentication and access and is numbered to show the six key steps required to use a gMSA and KCD. To deploy this solution, the AWS Managed Microsoft AD directory must be in the same Amazon Virtual Private Cloud (VPC) as RDS for SQL Server. For this example, my company name is Example Corp., and my directory uses the domain name, example.com.

Diagram showing the components of a .NET application that uses Amazon RDS for SQL Server with a gMSA and KCD

Deploy the solution

The following six steps (numbered to correlate with the preceding diagram) walk you through configuring and using a gMSA and KCD.

1. Create your AWS Managed Microsoft AD directory

Using the Directory Service console, create your AWS Managed Microsoft AD directory in your Amazon VPC. In my example, my domain name is example.com.

Image of creating an AWS Managed Microsoft AD directory in an Amazon VPC

2. Create your Amazon RDS for SQL Server database

Using the RDS console, create your Amazon RDS for SQL Server database instance in the same Amazon VPC where your directory is running, and enable Windows Authentication. To enable Windows Authentication, select your directory in the Microsoft SQL Server Windows Authentication section in the Configure Advanced Settings step of the database creation workflow (see the following screenshot).

In my example, I create my Amazon RDS for SQL Server db-example database, and enable Windows Authentication to allow my db-example database to authenticate against my example.com directory.

Screenshot of configuring advanced settings
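If you script the database creation instead of using the console, a rough AWS CLI sketch follows. The directory ID, instance class, storage size, and IAM role name are placeholders; the role must grant Amazon RDS permission to join the instance to your directory.

    aws rds create-db-instance --db-instance-identifier db-example --engine sqlserver-se --db-instance-class db.m4.large --allocated-storage 200 --master-username admin --master-user-password '<password>' --domain d-1234567890 --domain-iam-role-name rds-directoryservice-access-role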

3. Create a gMSA for your .NET application

Now that you have deployed your directory, database, and application, you can create a gMSA for your .NET application.

To perform the next steps, you must install the Active Directory administration tools on a Windows server that is joined to your AWS Managed Microsoft AD directory domain. If you do not have a Windows server joined to your directory domain, you can deploy a new Amazon EC2 for Microsoft Windows Server instance and join it to your directory domain.

To create a gMSA for your .NET application:

  1. Log on to the instance on which you installed the Active Directory administration tools by using a user that is a member of the Admins security group or the Managed Service Accounts Admins security group in your organizational unit (OU). For my example, I use the Admin user in the example OU.

Screenshot of logging on to the instance on which you installed the Active Directory administration tools

  2. Identify which .NET application servers (hosts) will run your .NET application. Create a new security group in your OU and add your .NET application servers as members of this new group. This allows a group of application servers to use a single gMSA, instead of creating one gMSA for each server. In my example, I create a group, App_server_grp, in my example OU. I also add Appserver1, which is my .NET application server computer name, as a member of this new group.

Screenshot of creating a new security group

  3. Create a gMSA in your directory by running Windows PowerShell from the Start menu. The basic syntax to create the gMSA at the Windows PowerShell command prompt follows.
    PS C:\Users\admin> New-ADServiceAccount -name [gMSAname] -DNSHostName [domainname] -PrincipalsAllowedToRetrieveManagedPassword [AppServersSecurityGroup] -TrustedForDelegation $true <Enter>

    In my example, the gMSAname is gMSAexample, the DNSHostName is example.com, and the PrincipalsAllowedToRetrieveManagedPassword is the recently created security group, App_server_grp.

    PS C:\Users\admin> New-ADServiceAccount -name gMSAexample -DNSHostName example.com -PrincipalsAllowedToRetrieveManagedPassword App_server_grp -TrustedForDelegation $true <Enter>

    To confirm you created the gMSA, you can run the Get-ADServiceAccount command from the PowerShell command prompt.

    PS C:\Users\admin> Get-ADServiceAccount gMSAexample <Enter>
    
    DistinguishedName : CN=gMSAexample,CN=Managed Service Accounts,DC=example,DC=com
    Enabled           : True
    Name              : gMSAexample
    ObjectClass       : msDS-GroupManagedServiceAccount
    ObjectGUID        : 24d8b68d-36d5-4dc3-b0a9-edbbb5dc8a5b
    SamAccountName    : gMSAexample$
    SID               : S-1-5-21-2100421304-991410377-951759617-1603
    UserPrincipalName :

    You also can confirm you created the gMSA by opening the Active Directory Users and Computers utility located in your Administrative Tools folder, expanding the domain (example.com in my case), and expanding the Managed Service Accounts folder.
    Screenshot of confirming the creation of the gMSA

4. Deploy your .NET application

Deploy your .NET application on IIS on Amazon EC2 for Windows Server instances. For this step, I assume you are the application’s expert and already know how to deploy it. Make sure that all of your instances are joined to your directory.

5. Configure your .NET application to use the gMSA

You can configure your .NET application to use the gMSA to enforce strong password security policy and ensure password rotation of your service account. This helps to improve the security and simplify the management of your .NET application. Configure your .NET application in two steps:

  1. Grant the gMSA the required permissions to run your .NET application in the respective application folders. This is a critical step because when you change the application pool identity account to use the gMSA, downtime can occur if the gMSA does not have the application’s required permissions. Therefore, make sure you first test the configurations in your development and test environments.
  2. Configure your application pool identity on IIS to use the gMSA as the service account. When you configure a gMSA as the service account, you include the $ at the end of the gMSA name. You do not need to provide a password because AWS Managed Microsoft AD automatically creates and rotates the password. In my example, my service account is gMSAexample$, as shown in the following screenshot. (A PowerShell sketch of both steps follows the screenshot.)

Screenshot of configuring application pool identity
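One way to script both steps is shown in the following sketch. It assumes the WebAdministration PowerShell module on the application server, an application pool named ExampleAppPool, and application content under C:\inetpub\example; all three names are placeholders for this example.

    # Step 1: grant the gMSA read and execute rights on the application folder
    icacls "C:\inetpub\example" /grant 'example\gMSAexample$:(OI)(CI)RX'

    # Step 2: point the application pool identity at the gMSA (note the trailing $ and the empty password)
    Import-Module WebAdministration
    Set-ItemProperty IIS:\AppPools\ExampleAppPool -Name processModel.identityType -Value SpecificUser
    Set-ItemProperty IIS:\AppPools\ExampleAppPool -Name processModel.userName -Value 'example\gMSAexample$'
    Set-ItemProperty IIS:\AppPools\ExampleAppPool -Name processModel.password -Value ''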

You have completed all the steps to use gMSA to create and rotate your .NET application service account password! Now, you will configure KCD for your .NET application.

6. Configure KCD for your .NET application

You now are ready to allow your .NET application to have access to other services by using the user identity’s permissions instead of the application service account’s permissions. Note that KCD and gMSA are independent features, which means you do not have to create a gMSA to use KCD. For this example, I am using both features to show how you can use them together. To configure a regular service account such as a user or local built-in account, see the Kerberos constrained delegation with ASP.NET blog post on MSDN.

In my example, my goal is to delegate to the gMSAexample account the ability to enforce the user’s permissions to my db-example SQL Server database, instead of the gMSAexample account’s permissions. For this, I have to update the msDS-AllowedToDelegateTo gMSA attribute. The value for this attribute is the service principal name (SPN) of the service instance that you are targeting, which in this case is the db-example Amazon RDS for SQL Server database.

The SPN format for the msDS-AllowedToDelegateTo attribute is a combination of the service class, the Kerberos authentication endpoint, and the port number. The Amazon RDS for SQL Server Kerberos authentication endpoint format is [database_name].[domain_name]. The value for my msDS-AllowedToDelegateTo attribute is MSSQLSvc/db-example.example.com:1433, where MSSQLSvc and 1433 are the SQL Server Database service class and port number standards, respectively.

Follow these steps to perform the msDS-AllowedToDelegateTo gMSA attribute configuration:

  1. Log on to your Active Directory management instance with a user identity that is a member of the Kerberos Delegation Admins security group. In this case, I will use admin.
  2. Open the Active Directory Users and Computers utility located in your Administrative Tools folder, choose View, and then choose Advanced Features.
  3. Expand your domain name (example.com in this example), and then choose the Managed Service Accounts security group. Right-click the gMSA account for the application pool you want to enable for Kerberos delegation, choose Properties, and choose the Attribute Editor tab.
  4. Search for the msDS-AllowedToDelegateTo attribute on the Attribute Editor tab and choose Edit.
  5. Enter the MSSQLSvc/db-example.example.com:1433 value and choose Add.
    Screenshot of entering the value of the multi-valued string
  6. Choose OK and Apply, and your KCD configuration is complete.
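If you prefer PowerShell over the GUI steps above, a hedged sketch of the same attribute update, run from the same administrative session, is:

    PS C:\Users\admin> Set-ADServiceAccount gMSAexample -Add @{'msDS-AllowedToDelegateTo'='MSSQLSvc/db-example.example.com:1433'}
    PS C:\Users\admin> Get-ADServiceAccount gMSAexample -Properties msDS-AllowedToDelegateTo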

Congratulations! At this point, your application is using a gMSA rather than an embedded static user identity and password, and the application is able to access SQL Server using the identity of the application user. The gMSA eliminates the need for you to rotate the application’s password manually, and it allows you to better scope permissions for the application. When you use KCD, you can enforce access to your database consistently based on user identities at the database level, which prevents improper access that might otherwise occur because of an application error.

Summary

In this blog post, I demonstrated how to simplify the deployment and improve the security of your .NET application by using a group Managed Service Account and Kerberos constrained delegation with your AWS Managed Microsoft AD directory. I also outlined the main steps to get your .NET environment up and running on a managed Active Directory and SQL Server infrastructure. This approach will make it easier for you to build new .NET applications in the AWS Cloud or migrate existing ones in a more secure way.

For additional information about using group Managed Service Accounts and Kerberos constrained delegation with your AWS Managed Microsoft AD directory, see the AWS Directory Service documentation.

To learn more about AWS Directory Service, see the AWS Directory Service home page. If you have questions about this post or its solution, start a new thread on the Directory Service forum.

– Peter

Use the New Visual Editor to Create and Modify Your AWS IAM Policies

Post Syndicated from Joy Chatterjee original https://aws.amazon.com/blogs/security/use-the-new-visual-editor-to-create-and-modify-your-aws-iam-policies/

Today, AWS Identity and Access Management (IAM) made it easier for you to create and modify your IAM policies by using a point-and-click visual editor in the IAM console. The new visual editor guides you through granting permissions for IAM policies without requiring you to write policies in JSON (although you can still author and edit policies in JSON, if you prefer). This update to the IAM console makes it easier to grant least privilege for the AWS service actions you select by listing all the supported resource types and request conditions you can specify. Policy summaries identify unrecognized services and actions and permissions errors when you import existing policies, and now you can use the visual editor to correct them. In this blog post, I give a brief overview of policy concepts and show you how to create a new policy by using the visual editor.

IAM policy concepts

You use IAM policies to define permissions for your IAM entities (groups, users, and roles). Policies are composed of one or more statements that include the following elements:

  • Effect: Determines if a policy statement allows or explicitly denies access.
  • Action: Defines AWS service actions in a policy (these typically map to individual AWS APIs.)
  • Resource: Defines the AWS resources to which actions can apply. The defined resources must be supported by the actions defined in the Action element for permissions to be granted.
  • Condition: Defines when a permission is allowed or denied. The conditions defined in a policy must be supported by the actions defined in the Action element for the permission to be granted.

To grant permissions, you attach policies to groups, users, or roles. Now that I have reviewed the elements of a policy, I will demonstrate how to create an IAM policy with the visual editor.

How to create an IAM policy with the visual editor

Let’s say my human resources (HR) recruiter, Casey, needs to review files located in an Amazon S3 bucket for all the product manager (PM) candidates our HR team has interviewed in 2017. To grant this access, I will create and attach a policy to Casey that grants list and limited read access to all folders that begin with PM_Candidate in the pmrecruiting2017 S3 bucket. To create this new policy, I navigate to the Policies page in the IAM console and choose Create policy. Note that I could also use the visual editor to modify existing policies by choosing Import existing policy; however, for Casey, I will create a new policy.

Image of the "Create policy" button

On the Visual editor tab, I see a section that includes Service, Actions, Resources, and Request Conditions.

Image of the "Visual editor" tab

Select a service

To grant S3 permissions, I choose Select a service, type S3 in the search box, and choose S3 from the list.

Image of choosing "S3"

Select actions

After selecting S3, I can define actions for Casey by using one of four options:

  1. Filter actions in the service by using the search box.
  2. Type actions by choosing Add action next to Manual actions. For example, I can type List* to grant all S3 actions that begin with List*.
  3. Choose access levels from List, Read, Write, Permissions management, and Tagging.
  4. Select individual actions by expanding each access level.

In the following screenshot, I choose options 3 and 4, and choose List and s3:GetObject from the Read access level.

Screenshot of options in the "Select actions" section

We introduced access levels when we launched policy summaries earlier in 2017. Access levels give you a way to categorize actions and help you understand the permissions in a policy. The following table gives you a quick overview of access levels.

Access level              Description                                                           Example actions
List                      Actions that allow you to see a list of resources                    s3:ListBucket, s3:ListAllMyBuckets
Read                      Actions that allow you to read the content in resources              s3:GetObject, s3:GetBucketTagging
Write                     Actions that allow you to create, delete, or modify resources        s3:PutObject, s3:DeleteBucket
Permissions management    Actions that allow you to grant or modify permissions to resources   s3:PutBucketPolicy
Tagging                   Actions that allow you to create, delete, or modify tags             s3:PutBucketTagging, s3:DeleteObjectVersionTagging

Note: Some services support authorization based on tags.

Note: By default, all actions you choose will be allowed. To deny actions, choose Switch to deny permissions in the upper right corner of the Actions section.

As shown in the preceding screenshot, if I choose the question mark icon next to GetObject, I can see the description and supported resources and conditions for this action, which can help me scope permissions.

Screenshot of GetObject

The visual editor makes it easy to decide which actions I should select by providing in an integrated documentation panel the action description, supported resources or conditions, and any required actions for every AWS service action. Some AWS service actions have required actions, which are other AWS service actions that need to be granted in a policy for an action to run. For example, the AWS Directory Service action, ds:CreateDirectory, requires seven Amazon EC2 actions to be able to create a Directory Service directory.

Choose resources

In the Resources section, I can choose the resources on which actions can be taken. I choose Resources and see two ways that I can define or select resources:

  1. Define specific resources
  2. Select all resources

Specific is the default option, and only the applicable resources are presented based on the service and actions I chose previously. Because I want to grant Casey access to some objects in a specific bucket, I choose Specific and choose Add ARN under bucket.

Screenshot of Resources section

In the pop-up, I type the bucket name, pmrecruiting2017, and choose Add to specify the S3 bucket resource.

Screenshot of specifying the S3 bucket resource

To specify the objects, I choose Add ARN under object and grant Casey access to all objects starting with PM_Candidate in the pmrecruiting2017 bucket. The visual editor helps you build your Amazon Resource Name (ARN) and validates that it is structured correctly. For AWS services that are AWS Region specific, the visual editor prompts for AWS Region and account number.

The visual editor displays all applicable resources in the Resources section based on the actions I choose. For Casey, I defined an S3 bucket and object in the Resources section. In this example, when the visual editor creates the policy, it creates three statements. The first statement includes all actions that require a wildcard (*) for the Resource element because this action does not support resource-level permissions. The second statement includes all S3 actions that support an S3 bucket. The third statement includes all actions that support an S3 object resource. The visual editor generates policy syntax for you based on supported permissions in AWS services.

Specify request conditions

For additional security, I specify a condition to restrict access to the S3 bucket from inside our internal network. To do this, I choose Specify request conditions in the Request Conditions section, and choose the Source IP check box. A condition is composed of a condition key, an operator, and a value. I choose aws:SourceIp for my Key so that I can control from where the S3 files can be accessed. By default, IpAddress is the Operator, and I set the Value to my internal network.

Screenshot of "Request conditions" section

To add other conditions, choose Add condition and choose Save changes after choosing the key, operator, and value.

After specifying my request condition, I am now able to review all the elements of these S3 permissions.

Screenshot of S3 permissions
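For reference, the JSON that the visual editor generates will resemble the following sketch; the statement grouping matches the three statements described earlier, but your exact action list and source IP range (203.0.113.0/24 is a placeholder) will differ:

    {
      "Version": "2012-10-17",
      "Statement": [
        {
          "Effect": "Allow",
          "Action": "s3:ListAllMyBuckets",
          "Resource": "*",
          "Condition": {"IpAddress": {"aws:SourceIp": "203.0.113.0/24"}}
        },
        {
          "Effect": "Allow",
          "Action": "s3:ListBucket",
          "Resource": "arn:aws:s3:::pmrecruiting2017",
          "Condition": {"IpAddress": {"aws:SourceIp": "203.0.113.0/24"}}
        },
        {
          "Effect": "Allow",
          "Action": "s3:GetObject",
          "Resource": "arn:aws:s3:::pmrecruiting2017/PM_Candidate*",
          "Condition": {"IpAddress": {"aws:SourceIp": "203.0.113.0/24"}}
        }
      ]
    }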

Next, I can choose to grant permissions for another service by choosing Add new permissions (bottom left of preceding screenshot), or I can review and create this new policy. Because I have granted all the permissions Casey needs, I choose Review policy. I type a name and a description, and I review the policy summary before choosing Create policy. 

Now that I have created the policy, I attach it to Casey by choosing the Attached entities tab of the policy I just created. I choose Attach and choose Casey. I then choose Attach policy. Casey should now be able to access the interview files she needs to review.

Summary

The visual editor makes it easier to create and modify your IAM policies by guiding you through each element of the policy. The visual editor helps you define resources and request conditions so that you can grant least privilege and generate policies. To start using the visual editor, sign in to the IAM console, navigate to the Policies page, and choose Create policy.

If you have comments about this post, submit them in the “Comments” section below. If you have questions about or suggestions for this solution, start a new thread on the IAM forum.

– Joy

Updated AWS SOC Reports Are Now Available with 19 Additional Services in Scope

Post Syndicated from Chad Woolf original https://aws.amazon.com/blogs/security/updated-aws-soc-reports-are-now-available-with-19-additional-services-in-scope/

AICPA SOC logo

Newly updated reports are available for AWS System and Organization Control Report 1 (SOC 1), formerly called AWS Service Organization Control Report 1, and AWS SOC 2: Security, Availability, & Confidentiality Report. You can download both reports for free and on demand in the AWS Management Console through AWS Artifact. The updated AWS SOC 3: Security, Availability, & Confidentiality Report also was just released. All three reports cover April 1, 2017, through September 30, 2017.

With the addition of the following 19 services, AWS now supports 51 SOC-compliant AWS services and is committed to increasing the number:

  • Amazon API Gateway
  • Amazon Cloud Directory
  • Amazon CloudFront
  • Amazon Cognito
  • Amazon Connect
  • AWS Directory Service for Microsoft Active Directory
  • Amazon EC2 Container Registry
  • Amazon EC2 Container Service
  • Amazon EC2 Systems Manager
  • Amazon Inspector
  • AWS IoT Platform
  • Amazon Kinesis Streams
  • AWS Lambda
  • AWS Lambda@Edge
  • AWS Managed Services
  • Amazon S3 Transfer Acceleration
  • AWS Shield
  • AWS Step Functions
  • AWS WAF

With this release, we also are introducing a separate spreadsheet, eliminating the need to extract the information from multiple PDFs.

If you are not yet an AWS customer, contact AWS Compliance to access the SOC Reports.

– Chad

Now Better Together! Register for and Attend this November 15 Tech Talk: “How to Integrate AWS Directory Service with Office 365”

Post Syndicated from Craig Liebendorfer original https://aws.amazon.com/blogs/security/now-better-together-register-for-and-attend-this-november-15-tech-talk-how-to-integrate-aws-directory-service-with-office-365/

AWS Online Tech Talks banner

As part of the AWS Online Tech Talks series, AWS will present How to Integrate AWS Directory Service with Office 365 on Wednesday, November 15. This tech talk will start at 9:00 A.M. Pacific Time and end at 9:40 A.M. Pacific Time.

If you want to support Active Directory–aware workloads in AWS and Office 365 simultaneously using a managed Active Directory in the cloud, you need a nonintuitive integration to synchronize identities between deployments. AWS has recently introduced the ability for you to authenticate your Office 365 permissions using AWS Directory Service for Microsoft Active Directory (AWS Managed Microsoft AD) by using a custom configuration of Active Directory Federation Services (AD FS). In this webinar, AWS Directory Service Product Manager Ron Cully shows how to configure your AWS Managed Microsoft AD environment to synchronize with Office 365. He will provide detailed configuration settings, architectural considerations, and deployment steps for a highly available, secure, and easy-to-manage solution in the AWS Cloud.

You also will learn how to:

  • Deploy AWS Managed Microsoft AD.
  • Deploy Microsoft Azure AD Connect and AD FS with AWS Managed Microsoft AD.
  • Authenticate user access to Office 365 by using AWS Managed Microsoft AD.

This tech talk is free. Register today.

– Craig

AWS Online Tech Talks – November 2017

Post Syndicated from Sara Rodas original https://aws.amazon.com/blogs/aws/aws-online-tech-talks-november-2017/

Leaves are crunching under my boots, Halloween is tomorrow, and pumpkin is having its annual moment in the sun – it’s fall everybody! And just in time to celebrate, we have whipped up a fresh batch of pumpkin spice Tech Talks. Grab your planner (Outlook calendar) and pencil these puppies in. This month we are covering re:Invent, serverless, and everything in between.

November 2017 – Schedule

Noted below are the upcoming scheduled live, online technical sessions being held during the month of November. Make sure to register ahead of time so you won’t miss out on these free talks conducted by AWS subject matter experts.

Webinars featured this month are:

Monday, November 6

Compute

9:00 – 9:40 AM PDT: Set it and Forget it: Auto Scaling Target Tracking Policies

Tuesday, November 7

Big Data

9:00 – 9:40 AM PDT: Real-time Application Monitoring with Amazon Kinesis and Amazon CloudWatch

Compute

10:30 – 11:10 AM PDT: Simplify Microsoft Windows Server Management with Amazon Lightsail

Mobile

12:00 – 12:40 PM PDT: Deep Dive on Amazon SES What’s New

Wednesday, November 8

Databases

10:30 – 11:10 AM PDT: Migrating Your Oracle Database to PostgreSQL

Compute

12:00 – 12:40 PM PDT: Run Your CI/CD Pipeline at Scale for a Fraction of the Cost

Thursday, November 9

Databases

10:30 – 11:10 AM PDT: Migrating Your Oracle Database to PostgreSQL

Containers

9:00 – 9:40 AM PDT: Managing Container Images with Amazon ECR

Big Data

12:00 – 12:40 PM PDT: Amazon Elasticsearch Service Security Deep Dive

Monday, November 13

re:Invent

10:30 – 11:10 AM PDT: AWS re:Invent 2017: Know Before You Go

5:00 – 5:40 PM PDT: AWS re:Invent 2017: Know Before You Go

Tuesday, November 14

AI

9:00 – 9:40 AM PDT: Sentiment Analysis Using Apache MXNet and Gluon

10:30 – 11:10 AM PDT: Bringing Characters to Life with Amazon Polly Text-to-Speech

IoT

12:00 – 12:40 PM PDT: Essential Capabilities of an IoT Cloud Platform

Enterprise

2:00 – 2:40 PM PDT: Everything you wanted to know about licensing Windows workloads on AWS, but were afraid to ask

Wednesday, November 15

Security & Identity

9:00 – 9:40 AM PDT: How to Integrate AWS Directory Service with Office365

Storage

10:30 – 11:10 AM PDT: Disaster Recovery Options with AWS

Hands on Lab

12:30 – 2:00 PM PDT: Hands on Lab: Windows Workloads

Thursday, November 16

Serverless

9:00 – 9:40 AM PDT: Building Serverless Websites with Lambda@Edge

Hands on Lab

12:30 – 2:00 PM PDT: Hands on Lab: Deploy .NET Code to AWS from Visual Studio

– Sara

Introducing AWS Directory Service for Microsoft Active Directory (Standard Edition)

Post Syndicated from Peter Pereira original https://aws.amazon.com/blogs/security/introducing-aws-directory-service-for-microsoft-active-directory-standard-edition/

Today, AWS introduced AWS Directory Service for Microsoft Active Directory (Standard Edition), also known as AWS Microsoft AD (Standard Edition), which is managed Microsoft Active Directory (AD) that is performance optimized for small and midsize businesses. AWS Microsoft AD (Standard Edition) offers you a highly available and cost-effective primary directory in the AWS Cloud that you can use to manage users, groups, and computers. It enables you to join Amazon EC2 instances to your domain easily and supports many AWS and third-party applications and services. It also can support most of the common use cases of small and midsize businesses. When you use AWS Microsoft AD (Standard Edition) as your primary directory, you can manage access and provide single sign-on (SSO) to cloud applications such as Microsoft Office 365. If you have an existing Microsoft AD directory, you can also use AWS Microsoft AD (Standard Edition) as a resource forest that contains primarily computers and groups, allowing you to migrate your AD-aware applications to the AWS Cloud while using existing on-premises AD credentials.

In this blog post, I help you get started by answering three main questions about AWS Microsoft AD (Standard Edition):

  1. What do I get?
  2. How can I use it?
  3. What are the key features?

After answering these questions, I show how you can get started with creating and using your own AWS Microsoft AD (Standard Edition) directory.

1. What do I get?

When you create an AWS Microsoft AD (Standard Edition) directory, AWS deploys two Microsoft AD domain controllers powered by Microsoft Windows Server 2012 R2 in your Amazon Virtual Private Cloud (VPC). To help deliver high availability, the domain controllers run in different Availability Zones in the AWS Region of your choice.

As a managed service, AWS Microsoft AD (Standard Edition) configures directory replication, automates daily snapshots, and handles all patching and software updates. In addition, AWS Microsoft AD (Standard Edition) monitors and automatically recovers domain controllers in the event of a failure.

AWS Microsoft AD (Standard Edition) has been optimized as a primary directory for small and midsize businesses with the capacity to support approximately 5,000 employees. With 1 GB of directory object storage, AWS Microsoft AD (Standard Edition) has the capacity to store 30,000 or more total directory objects (users, groups, and computers). AWS Microsoft AD (Standard Edition) also gives you the option to add domain controllers to meet the specific performance demands of your applications. You also can use AWS Microsoft AD (Standard Edition) as a resource forest with a trust relationship to your on-premises directory.

2. How can I use it?

With AWS Microsoft AD (Standard Edition), you can share a single directory for multiple use cases. For example, you can share a directory to authenticate and authorize access for .NET applications, Amazon RDS for SQL Server with Windows Authentication enabled, and Amazon Chime for messaging and video conferencing.

The following diagram shows some of the use cases for your AWS Microsoft AD (Standard Edition) directory, including the ability to grant your users access to external cloud applications and allow your on-premises AD users to manage and have access to resources in the AWS Cloud. Click the diagram to see a larger version.

Diagram showing some ways you can use AWS Microsoft AD (Standard Edition)--click the diagram to see a larger version

Use case 1: Sign in to AWS applications and services with AD credentials

You can enable multiple AWS applications and services such as the AWS Management Console, Amazon WorkSpaces, and Amazon RDS for SQL Server to use your AWS Microsoft AD (Standard Edition) directory. When you enable an AWS application or service in your directory, your users can access the application or service with their AD credentials.

For example, you can enable your users to sign in to the AWS Management Console with their AD credentials. To do this, you enable the AWS Management Console as an application in your directory, and then assign your AD users and groups to IAM roles. When your users sign in to the AWS Management Console, they assume an IAM role to manage AWS resources. This makes it easy for you to grant your users access to the AWS Management Console without needing to configure and manage a separate SAML infrastructure.

Use case 2: Manage Amazon EC2 instances

Using familiar AD administration tools, you can apply AD Group Policy objects (GPOs) to centrally manage your Amazon EC2 for Windows or Linux instances by joining your instances to your AWS Microsoft AD (Standard Edition) domain.

In addition, your users can sign in to your instances with their AD credentials. This eliminates the need to use individual instance credentials or distribute private key (PEM) files. This makes it easier for you to instantly grant or revoke access to users by using AD user administration tools you already use.

Use case 3: Provide directory services to your AD-aware workloads

AWS Microsoft AD (Standard Edition) is an actual Microsoft AD that enables you to run traditional AD-aware workloads such as Remote Desktop Licensing Manager, Microsoft SharePoint, and Microsoft SQL Server Always On in the AWS Cloud. AWS Microsoft AD (Standard Edition) also helps you to simplify and improve the security of AD-integrated .NET applications by using group Managed Service Accounts (gMSAs) and Kerberos constrained delegation (KCD).

Use case 4: SSO to Office 365 and other cloud applications

You can use AWS Microsoft AD (Standard Edition) to provide SSO for cloud applications. You can use Azure AD Connect to synchronize your users into Azure AD, and then use Active Directory Federation Services (AD FS) so that your users can access Microsoft Office 365 and other SAML 2.0 cloud applications by using their AD credentials.

Use case 5: Extend your on-premises AD to the AWS Cloud

If you already have an AD infrastructure and want to use it when migrating AD-aware workloads to the AWS Cloud, AWS Microsoft AD (Standard Edition) can help. You can use AD trusts to connect AWS Microsoft AD (Standard Edition) to your existing AD. This means your users can access AD-aware and AWS applications with their on-premises AD credentials, without needing you to synchronize users, groups, or passwords.

For example, your users can sign in to the AWS Management Console and Amazon WorkSpaces by using their existing AD user names and passwords. Also, when you use AD-aware applications such as SharePoint with AWS Microsoft AD (Standard Edition), your logged-in Windows users can access these applications without needing to enter credentials again.

3. What are the key features?

AWS Microsoft AD (Standard Edition) includes the features detailed in this section.

Extend your AD schema

With AWS Microsoft AD, you can run customized AD-integrated applications that require changes to your directory schema, which defines the structures of your directory. The schema is composed of object classes such as user objects, which contain attributes such as user names. AWS Microsoft AD lets you extend the schema by adding new AD attributes or object classes that are not present in the core AD attributes and classes.

For example, if you have a human resources application that uses employee badge color to assign specific benefits, you can extend the schema to include a badge color attribute in the user object class of your directory. To learn more, see How to Move More Custom Applications to the AWS Cloud with AWS Directory Service.

Create user-specific password policies

With user-specific password policies, you can apply specific restrictions and account lockout policies to different types of users in your AWS Microsoft AD (Standard Edition) domain. For example, you can enforce strong passwords and frequent password change policies for administrators, and use less-restrictive policies with moderate account lockout policies for general users.

Add domain controllers

You can increase the performance and redundancy of your directory by adding domain controllers. This can help improve application performance by enabling directory clients to load-balance their requests across a larger number of domain controllers.

Encrypt directory traffic

You can use AWS Microsoft AD (Standard Edition) to encrypt Lightweight Directory Access Protocol (LDAP) communication between your applications and your directory. By enabling LDAP over Secure Sockets Layer (SSL)/Transport Layer Security (TLS), also called LDAPS, you encrypt your LDAP communications end to end. This helps you to protect sensitive information you keep in your directory when it is accessed over untrusted networks.

Improve the security of signing in to AWS services by using multi-factor authentication (MFA)

You can improve the security of signing in to AWS services, such as Amazon WorkSpaces and Amazon QuickSight, by enabling MFA in your AWS Microsoft AD (Standard Edition) directory. With MFA, your users must enter a one-time passcode (OTP) in addition to their AD user names and passwords to access AWS applications and services you enable in AWS Microsoft AD (Standard Edition).

Get started

To get started, use the Directory Service console to create your first directory with just a few clicks. If you have not used Directory Service before, you may be eligible for a 30-day limited free trial.
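If you prefer to script directory creation, a rough AWS CLI equivalent is shown below; the domain name, password, VPC ID, and subnet IDs are placeholders, and the two subnets must be in different Availability Zones:

    aws ds create-microsoft-ad --name corp.example.com --short-name CORP --password '<StrongPassword>' --edition Standard --description "Example directory" --vpc-settings VpcId=vpc-0123456789abcdef0,SubnetIds=subnet-0123456789abcdef0,subnet-0fedcba9876543210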

Summary

In this blog post, I explained what AWS Microsoft AD (Standard Edition) is and how you can use it. With a single directory, you can address many use cases for your business, making it easier to migrate and run your AD-aware workloads in the AWS Cloud, provide access to AWS applications and services, and connect to other cloud applications. To learn more about AWS Microsoft AD, see the Directory Service home page.

If you have comments about this post, submit them in the “Comments” section below. If you have questions about this blog post, start a new thread on the Directory Service forum.

– Peter