All posts by Mahmoud Matouk

Simplify DNS management in a multi-account environment with Route 53 Resolver

Post Syndicated from Mahmoud Matouk original https://aws.amazon.com/blogs/security/simplify-dns-management-in-a-multiaccount-environment-with-route-53-resolver/

In a previous post, I showed you a solution to implement central DNS in a multi-account environment that simplified DNS management by reducing the number of servers and forwarders you needed when implementing cross-account and AWS-to-on-premises domain resolution. With the release of the Amazon Route 53 Resolver service, you now have access to a native conditional forwarder that will simplify hybrid DNS resolution even more.

In this post, I’ll show you a modernized solution to centralize DNS management in a multi-account environment by using Route 53 Resolver. This solution allows you to resolve domains across multiple accounts and between workloads running on AWS and on-premises without the need to run a domain controller in AWS.

Solution overview

This solution addresses three primary use cases for domain resolution:

  • Resolving on-premises domains from workloads running in your VPCs.
  • Resolving private domains in your AWS environment from workloads running on-premises.
  • Resolving private domains between workloads running in different AWS accounts.

The following diagram shows the high-level architecture.

Figure 1: Solution architecture diagram

In this architecture:

  1. This is the Amazon-provided default DNS server for the central DNS VPC, which we’ll refer to as the DNS-VPC. This is the second IP address in the VPC CIDR range (as illustrated, this is 172.27.0.2). This default DNS server will be the primary domain resolver for all workloads running in participating AWS accounts.
  2. This shows the Route 53 Resolver endpoints. The inbound endpoint will receive queries forwarded from on-premises DNS servers and from workloads running in participating AWS accounts. The outbound endpoint will be used to forward domain queries from AWS to on-premises DNS.
  3. This shows conditional forwarding rules. For this architecture, we need two rules: one to forward domain queries for the onprem.private zone to the on-premises DNS server through the outbound endpoint, and a second rule to forward domain queries for awscloud.private to the Resolver inbound endpoint in DNS-VPC.
  4. This indicates that these two forwarding rules are shared with all other AWS accounts through AWS Resource Access Manager and are associated with all VPCs in these accounts.
  5. This shows the private hosted zone created in each account with a unique subdomain of awscloud.private.
  6. This shows the on-premises DNS server with conditional forwarders configured to forward queries for the awscloud.private zone to the IP addresses of the Resolver inbound endpoint.

Note: This solution doesn’t require VPC-peering or connectivity between the source/destination VPCs and the DNS-VPC.

How it works

Now, I’m going to show how the domain resolution flow of this architecture works for the three use cases I’m focusing on.

First use case

 

Figure 2: Use case for resolving on-premises domains from workloads running in AWS

First, I’ll look at resolving on-premises domains from workloads running in AWS. If the server with private domain host1.acc1.awscloud.private attempts to resolve the address host1.onprem.private, here’s what happens:

  1. The DNS query will route to the default DNS server of the VPC that hosts host1.acc1.awscloud.private.
  2. Because the VPC is associated with the forwarding rules shared from the central DNS account, these rules will be evaluated by the default Amazon-provided DNS in the VPC.
  3. In this example, one of the rules indicates that queries for onprem.private should be forwarded to an on-premises DNS server.
  4. Because that forwarding rule is associated with the Resolver outbound endpoint, the query will be forwarded through this endpoint to the on-premises DNS server.

In this flow, the DNS query that was initiated in one of the participating accounts has been forwarded to the centralized DNS server which, in turn, forwarded this to the on-premises DNS.
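If you want to quickly verify this flow, you could run a lookup from an EC2 instance in one of the participating VPCs. This is only an illustrative check; host1.onprem.private is the placeholder record name used in this example:

# Run from an EC2 instance in a VPC that's associated with the shared forwarding rules
dig +short host1.onprem.private

# nslookup works as well if dig isn't installed
nslookup host1.onprem.private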

Second use case

Next, here’s how on-premises workloads will be able to resolve private domains in your AWS environment:
 

Figure 3: Use case for how on-premises workloads will be able to resolve private domains in your AWS environment

In this case, the query for host1.acc1.awscloud.private is initiated from an on-premises host. Here’s what happens next:

  1. The domain query is forwarded to the on-premises DNS server.
  2. The query is then forwarded to the Resolver inbound endpoint via a conditional forwarder rule on the on-premises DNS server.
  3. The query reaches the default DNS server for DNS-VPC.
  4. Because DNS-VPC is associated with the private hosted zone acc1.awscloud.private, the default DNS server will be able to resolve this domain.

In this case, the DNS query has been initiated on-premises and forwarded to centralized DNS on the AWS side through the inbound endpoint.

Third use case

Finally, you might need to resolve domains across multiple AWS accounts. Here’s how you could achieve this:
 

Figure 4: Use case for how to resolve domains across multiple AWS accounts

Let’s say that host1 (host1.acc1.awscloud.private) attempts to resolve the domain host2.acc2.awscloud.private. Here’s what happens:

  1. The domain query is sent to the default DNS server of the VPC hosting the source machine (host1).
  2. Because the VPC is associated with the shared forwarding rules, these rules will be evaluated.
  3. A rule indicates that queries for the awscloud.private zone should be forwarded to the IP addresses of the Resolver inbound endpoint in DNS-VPC, which will then use the Amazon-provided default DNS to resolve the query.
  4. Because DNS-VPC is associated with the acc2.awscloud.private hosted zone, the default DNS will use auto-defined rules to resolve this domain.

This is the AWS-to-AWS use case: the DNS query was initiated in one participating account and forwarded to central DNS to resolve a domain in another AWS account. Now, I’ll look at what it takes to build this solution in your environment.

How to deploy the solution

I’ll show you how to configure this solution in four steps:

  1. Set up a centralized DNS account.
  2. Set up each participating account.
  3. Create private hosted zones and Route 53 associations.
  4. Configure on-premises DNS forwarders.

Step 1: Set up a centralized DNS account

In this step, you’ll set up resources in the centralized DNS account. Primarily, this includes the DNS-VPC, the Resolver endpoints, and the forwarding rules (a CLI example follows the list below).

  1. Create a VPC to act as DNS-VPC according to your business scenario, either using the web console or from an AWS Quick Start. You can review common scenarios in the Amazon VPC user guide; one very common scenario is a VPC with public and private subnets.
  2. Create resolver endpoints. You need to create an outbound endpoint to forward DNS queries to on-premises DNS and an inbound endpoint to receive DNS queries forwarded from on-premises workloads and other AWS accounts.
  3. Create two forwarding rules. The first rule is to forward DNS queries for zone onprem.private to your on-premises DNS server IP addresses, and the second rule is to forward DNS queries for zone awscloud.private to the IP addresses of the resolver inbound endpoint.
  4. After creating the rules, associate them with DNS-VPC that was created in step #1. This will allow the Route 53 Resolver to start forwarding domain queries accordingly.
  5. Finally, you need to share the two forwarding rules with all participating accounts. To do that, you’ll use AWS Resource Access Manager and you can share the rules with your entire AWS Organization or with specific accounts.
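For illustration, here’s a rough AWS CLI sketch of steps 2 through 5. All of the names, security group, subnets, IP addresses, IDs, and ARNs below are placeholders that you’d replace with your own values:

# 2. Create the Resolver endpoints in DNS-VPC (two subnets each for availability)
aws route53resolver create-resolver-endpoint --name central-outbound \
    --creator-request-id central-outbound-001 --direction OUTBOUND \
    --security-group-ids sg-0123456789abcdef0 \
    --ip-addresses SubnetId=subnet-aaaa1111 SubnetId=subnet-bbbb2222

aws route53resolver create-resolver-endpoint --name central-inbound \
    --creator-request-id central-inbound-001 --direction INBOUND \
    --security-group-ids sg-0123456789abcdef0 \
    --ip-addresses SubnetId=subnet-aaaa1111 SubnetId=subnet-bbbb2222

# 3. Create the two forwarding rules (both use the outbound endpoint to send queries out)
aws route53resolver create-resolver-rule --name to-onprem \
    --creator-request-id onprem-rule-001 --rule-type FORWARD \
    --domain-name onprem.private \
    --resolver-endpoint-id <outbound-endpoint-id> \
    --target-ips Ip=10.10.10.10,Port=53 Ip=10.10.10.11,Port=53

# (the target IPs for this rule are the inbound endpoint's IP addresses in DNS-VPC)
aws route53resolver create-resolver-rule --name to-awscloud \
    --creator-request-id awscloud-rule-001 --rule-type FORWARD \
    --domain-name awscloud.private \
    --resolver-endpoint-id <outbound-endpoint-id> \
    --target-ips Ip=172.27.0.100,Port=53 Ip=172.27.1.100,Port=53

# 4. Associate each rule with DNS-VPC (repeat for the second rule)
aws route53resolver associate-resolver-rule --resolver-rule-id <rule-id> --vpc-id <dns-vpc-id>

# 5. Share both rules through AWS Resource Access Manager
aws ram create-resource-share --name central-dns-rules \
    --resource-arns <onprem-rule-arn> <awscloud-rule-arn> \
    --principals <organization-arn-or-account-id>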

Note: To be able to forward domain queries to your on-premises DNS server, you need connectivity between your data center and DNS-VPC, which could be established either using site-to-site VPN or AWS Direct Connect.

Step 2: Set up participating accounts

For each participating account, you need to configure your VPCs to use the shared forwarding rules, and you need to create a private hosted zone for each account.

  • Accept the shared rules from AWS Resource Access Manager. This step is not required if the rules were shared with your AWS Organization. Then, associate the forwarding rules with the VPCs that host your workloads in each account, as sketched below. Once associated, the Resolver will start forwarding DNS queries according to the rules.
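If you prefer the CLI, here’s a rough sketch of what accepting and associating the shared rules could look like in a participating account (ARNs and IDs are placeholders):

# Accept the share invitation (not needed if the rules were shared with your AWS Organization)
aws ram get-resource-share-invitations
aws ram accept-resource-share-invitation --resource-share-invitation-arn <invitation-arn>

# List the shared rules, then associate each one with your workload VPC(s)
aws route53resolver list-resolver-rules
aws route53resolver associate-resolver-rule --resolver-rule-id <shared-rule-id> --vpc-id <workload-vpc-id>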

At this point, you should be able to resolve on-premises domains from workloads running in any VPC associated with the shared forwarding rules. To create private domains in AWS, you need to create Private Hosted Zones.

Step 3: Create private hosted zones

In this step, you need to create a private hosted zone in each account with a subdomain of awscloud.private. Use unique names for each private hosted zone to avoid domain conflicts in your environment (for example, acc1.awscloud.private or dev.awscloud.private).

  1. Create a private hosted zone in each participating account with a subdomain of awscloud.private and associate it with the VPCs running in that account (see the CLI example after this list).
  2. Associate the private hosted zone with DNS-VPC. This allows the centralized DNS-VPC to resolve domains in the private hosted zone and act as a DNS resolver between AWS accounts.
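As a rough sketch, creating the private hosted zone for one participating account with the CLI could look like the following (the zone name, caller reference, Region, and VPC ID are placeholders; passing --vpc at creation time makes the zone private):

# In participating account acc1: create the private hosted zone and associate it with a local VPC
aws route53 create-hosted-zone --name acc1.awscloud.private \
    --caller-reference acc1-zone-001 \
    --vpc VPCRegion=us-east-1,VPCId=<local-vpc-id>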

Because the private hosted zone and DNS-VPC are in different accounts, you need to associate the private hosted zone with DNS-VPC. To do that, you need to create authorization from the account that owns the private hosted zone and accept this authorization from the account that owns DNS-VPC. You can do that using AWS CLI:

  1. In each participating account, create the authorization using the private hosted zone ID, the region, and the VPC ID that you want to associate (DNS-VPC).
    
        aws route53 create-vpc-association-authorization --hosted-zone-id <hosted-zone-id> --vpc VPCRegion=<region>,VPCId=<vpc-id>
    

  2. In the centralized DNS account, associate the DNS-VPC with the hosted zone in each participating account.
    
        aws route53 associate-vpc-with-hosted-zone --hosted-zone-id <hosted-zone-id> --vpc VPCRegion=<region>,VPCId=<vpc-id>    
    

Step 4: Configure on-premises DNS forwarders

To be able to resolve subdomains within the awscloud.private domain from workloads running on-premises, you need to configure conditional forwarding rules that forward domain queries to the two IP addresses of the Resolver inbound endpoint that was created in the central DNS account. Note that this requires connectivity between your data center and DNS-VPC, which could be established using either site-to-site VPN or AWS Direct Connect.
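The exact steps vary by DNS product. As one example, if your on-premises servers run BIND, a conditional forward zone could look something like the following named.conf fragment (the IP addresses are placeholders for your Resolver inbound endpoint IPs):

// Forward all queries for awscloud.private to the Route 53 Resolver inbound endpoint in DNS-VPC
zone "awscloud.private" {
    type forward;
    forward only;
    forwarders { 172.27.0.100; 172.27.1.100; };
};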

Additional considerations and limitations

Thanks to the flexibility of Route 53 Resolver and conditional forwarding rules, you can control which queries to send to central DNS and which ones to resolve locally in the same account. This is particularly important when you plan to use some AWS services, such as AWS PrivateLink or Amazon Elastic File System (EFS), because domain names associated with these services need to be resolved locally in the account that owns them. In this section, I’ll describe two use cases that require additional consideration.

  1. Interface VPC Endpoints (AWS PrivateLink)

    When you create an AWS PrivateLink interface endpoint, AWS generates endpoint-specific DNS hostnames that you can use to communicate with the service. For AWS services and AWS Marketplace partner services, you can optionally enable private DNS for the endpoint. This option associates a private hosted zone with your VPC. The hosted zone contains a record set for the default DNS name for the service (for example, ec2.us-east-1.amazonaws.com) that resolves to the private IP addresses of the endpoint network interfaces in your VPC. This enables you to make requests to the service using its default DNS hostname instead of the endpoint-specific DNS hostnames.

    If you use private DNS for your endpoint, DNS queries for the endpoint have to be resolved locally in the account, using the default DNS provided by AWS. So, in this case, I recommend that you resolve queries for amazonaws.com domains locally and not forward these queries to central DNS.

  2. Mounting EFS with a DNS name

    You can mount an Amazon EFS file system on an Amazon EC2 instance using DNS names. The file system DNS name automatically resolves to the mount target’s IP address in the Availability Zone of the connecting Amazon EC2 instance. To be able to do that, the VPC must use the default DNS provided by Amazon to resolve EFS DNS names.

    If you plan to use EFS in your environment, I recommend that you resolve EFS DNS names locally and avoid sending these queries to central DNS, because clients in that case would not receive answers optimized for their Availability Zone, which might result in higher operation latencies and less durability.

Summary

In this post, I introduced a simplified solution to implement central DNS resolution in a multi-account and hybrid environment. This solution uses Amazon Route 53 Resolver, AWS Resource Access Manager, and native Route 53 capabilities, and it reduces complexity and operational effort by removing the need for custom DNS servers or forwarders in your AWS environment.

If you have feedback about this blog post, submit comments in the Comments section below. If you have questions about this blog post, start a new thread in the AWS forums.

Want more AWS Security how-to content, news, and feature announcements? Follow us on Twitter.

Author

Mahmoud Matouk

Mahmoud is part of our worldwide public sector Solutions Architecture team, helping higher education customers build innovative, secure, and highly available solutions using various AWS services.

Guidelines for protecting your AWS account while using programmatic access

Post Syndicated from Mahmoud Matouk original https://aws.amazon.com/blogs/security/guidelines-for-protecting-your-aws-account-while-using-programmatic-access/

One of the most important things you can do as a customer to ensure the security of your resources is to maintain careful control over who has access to them. This is especially true if any of your AWS users have programmatic access. Programmatic access allows you to invoke actions on your AWS resources either through an application that you write or through a third-party tool. You use an access key ID and a secret access key to sign your requests for authorization to AWS. Programmatic access can be quite powerful, so implementing best practices to protect access key IDs and secret access keys is important in order to prevent accidental or malicious account activity. In this post, I’ll highlight some general guidelines to help you protect your account, as well as some of the options you have when you need to provide programmatic access to your AWS resources.

Protect your root account

Your AWS root account—the account that’s created when you initially sign up with AWS—has unrestricted access to all your AWS resources. There’s no way to limit permissions on a root account. For this reason, AWS always recommends that you do not generate access keys for your root account. Root access keys would give whoever holds them the power to do things like close the entire account—an ability that they probably don’t need. Instead, you should create individual AWS Identity and Access Management (IAM) users, then grant each user permissions based on the principle of least privilege: Grant them only the permissions required to perform their assigned tasks. To more easily manage the permissions of multiple IAM users, you should assign users with the same permissions to an IAM group.
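As a rough illustration (the names are placeholders, and the attached managed policy is just an example), creating a group with scoped-down permissions and adding a user to it could look like this with the CLI:

# Create a group and grant it only the permissions its members need
aws iam create-group --group-name data-readers
aws iam attach-group-policy --group-name data-readers \
    --policy-arn arn:aws:iam::aws:policy/AmazonS3ReadOnlyAccess

# Create an individual IAM user and add them to the group
aws iam create-user --user-name jane
aws iam add-user-to-group --user-name jane --group-name data-readers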

Your root account should always be protected by Multi-Factor Authentication (MFA). This additional layer of security helps protect against unauthorized logins to your account by requiring two factors: something you know (a password) and something you have (for example, an MFA device). AWS supports virtual and hardware MFA devices, U2F security keys, and SMS text message-based MFA.

Decide how to grant access to your AWS account

To allow users access to the AWS Management Console and AWS Command Line Interface (AWS CLI), you have two options. The first one is to create identities and allow users to log in using a username and password managed by the IAM service. The second approach is to use federation to allow your users to use their existing corporate credentials to log into the AWS console and CLI.

Each approach has its use cases. Federation is generally better for enterprises that have an existing central directory or plan to need more than the current limit of 5,000 IAM users.

Note: Access to all AWS accounts is managed by AWS IAM. Regardless of the approach you choose, make sure to familiarize yourself with and follow IAM best practices.

Decide when to use access keys

Applications running outside of an AWS environment will need access keys for programmatic access to AWS resources. For example, monitoring tools running on-premises and third-party automation tools will need access keys.

However, if the resources that need programmatic access are running inside AWS, the best practice is to use IAM roles instead. An IAM role is a defined set of permissions—it’s not associated with a specific user or group. Instead, any trusted entity can assume the role to perform a specific business task.

By using roles, you can grant a resource access without hardcoding an access key ID and secret access key into its configuration. For example, you can grant an Amazon Elastic Compute Cloud (EC2) instance access to an Amazon Simple Storage Service (Amazon S3) bucket by attaching a role with a policy that defines this access to the EC2 instance. This approach improves your security, as IAM will dynamically manage the credentials for you with temporary credentials that are rotated automatically.
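For example, granting an EC2 instance read access to S3 through a role could look roughly like this with the CLI (the role name, instance ID, and trust-policy.json file, which would contain a trust policy for ec2.amazonaws.com, are placeholders):

# Create a role that EC2 instances are allowed to assume
aws iam create-role --role-name app-s3-read \
    --assume-role-policy-document file://trust-policy.json

# Grant the role only the S3 read access the application needs
aws iam attach-role-policy --role-name app-s3-read \
    --policy-arn arn:aws:iam::aws:policy/AmazonS3ReadOnlyAccess

# Wrap the role in an instance profile and attach it to the instance
aws iam create-instance-profile --instance-profile-name app-s3-read-profile
aws iam add-role-to-instance-profile --instance-profile-name app-s3-read-profile \
    --role-name app-s3-read
aws ec2 associate-iam-instance-profile --instance-id i-0123456789abcdef0 \
    --iam-instance-profile Name=app-s3-read-profile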

Grant least privileges to service accounts

If you decided to create service accounts (that is, accounts used for programmatic access by applications running outside of the AWS environment) and generate access keys for them, you should create a dedicated service account for each use case. This will allow you to restrict the associated policy to only the permissions needed for the particular use case, limiting the blast radius if the credentials are compromised. For example, if a monitoring tool and a release management tool both require access to your AWS environment, create two separate service accounts with two separate policies that define the minimum set of permissions for each tool.

In addition to this, it’s also a best practice to add conditions to the policy that further restrict access—such as restricting access to only the source IP address range of your clients.

Below is an example policy that represents least privilege. It grants the needed permission (s3:PutObject) on a specific resource (an S3 bucket named “examplebucket”) while adding a further condition (the client must come from the IP range 203.0.113.0/24). Because it includes a Principal element, this example is written as an S3 bucket policy.


{
    "Version": "2012-10-17",
    "Id": "S3PolicyRestrictPut",
    "Statement": [
        {
            "Sid": "IPAllow",
            "Effect": "Allow",
            "Principal": "*",
            "Action": "s3:PutObject",
            "Resource": "arn:aws:s3:::examplebucket/*",
            "Condition": {
                "IpAddress": {"aws:SourceIp": "203.0.113.0/24"}
            }
        }
    ]
}

Use temporary credentials from AWS STS

AWS Security Token Service (AWS STS) is a web service that enables you to request temporary credentials for use in your code, CLI, or third-party tools. It allows you to assume an IAM role with which you have a trusted relationship and then generate temporary, time-limited credentials based on the permissions associated with the role. These credentials can only be used during the validity period, which reduces your risk.

There are two ways to generate temporary credentials. You can generate them from the CLI, which is helpful when you need credentials for testing from your local machine or from an on-premises or third-party tool. You can also generate them from code using one of the AWS SDKs. This approach is helpful if you need credentials in your application, or if you have multiple user types that require different permission levels.

Create temporary credentials using the CLI

If you have access to the AWS CLI, you can use it to generate temporary credentials with limited permissions to use in your local testing or with third-party tools. To be able to use this approach, here’s what you need:

  • Access to the AWS CLI through your primary user account or through federation. To learn how to configure CLI access using your IAM credentials, follow this link. If you use federation, you still can use the CLI by following the instructions in this blog post.
  • An IAM role that represents the permissions needed for your test client. In the example below, I use “s3-read”. This role should have a policy attached that grants the least privileges needed for the use case.
  • A trusted relationship between the service role (“s3-read”) and your user account, to allow you to assume the service role and generate temporary credentials. Visit this link for the steps to create this trust relationship.

The example command below will generate a temporary access key ID and secret access key that are valid for 15 minutes, based on permissions associated with the role named “s3-read”. You can replace the values below with your own account number, service role, and duration, then use the secret access key and access key ID in your local clients.


aws sts assume-role --role-arn <arn:aws:iam::AWS-ACCOUNT-NUMBER:role/s3-read> --role-session-name <s3-access> --duration-seconds <900>

Here are my results from running the command:


{
    "AssumedRoleUser": {
        "AssumedRoleId": "AROAIEGLQIIQUSJ2I5XRM:s3-access",
        "Arn": "arn:aws:sts::AWS-ACCOUNT-NUMBER:assumed-role/s3-read/s3-access"
    },
    "Credentials": {
        "SecretAccessKey": "wZJph6PX3sn0ZU4g6yfXdkyXp5m+nwkEtdUHwC3w",
        "SessionToken": "FQoGZXIvYXdzENr//////////<<REST-OF-TOKEN>>",
        "Expiration": "2018-11-02T16:46:23Z",
        "AccessKeyId": "ASIAXQZXUENECYQBAAQG"
    }
}

Create temporary credentials from your code

If you have an application that already uses the AWS SDK, you can use AWS STS to generate temporary credentials right from the code instead of hard-coding credentials into your configurations. This approach is recommended if you have client-side code that requires credentials, or if you have multiple types of users (for example, admins, power-users, and regular users) since it allows you to avoid hardcoding multiple sets of credentials for each user type.

For more information about using temporary credentials from the AWS SDK, visit this link.

Utilize Access Advisor

The IAM console provides information about when an AWS service was last accessed by different principals. This information is called service last accessed data.

Using this tool, you can view when an IAM user, group, role, or policy last attempted to access services to which they have permissions. Based on this information, you can decide if certain permissions need to be revoked or restricted further.

Make this tool part of your periodic security check. Use it to evaluate the permissions of all your IAM entities and to revoke unused permissions until they’re needed. You can also automate the process of periodic permissions evaluation using Access Advisor APIs. If you want to learn how, this blog post is a good starting point.
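As a rough sketch, you could pull service last accessed data for a single entity from the CLI like this (the ARN and job ID are placeholders):

# Start a report job for one IAM entity (user, group, role, or policy)
aws iam generate-service-last-accessed-details \
    --arn arn:aws:iam::111122223333:user/jane

# Retrieve the report using the JobId returned by the previous call
aws iam get-service-last-accessed-details --job-id <job-id>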

Other tools for credentials management

While least privilege access and temporary credentials are important, it’s equally important that your users are managing their credentials properly—from rotation to storage. Below is a set of services and features that can help to securely store, retrieve, and rotate credentials.

AWS Systems Manager Parameter Store

AWS Systems Manager offers a capability called Parameter Store that provides secure, centralized storage for configuration parameters and secrets across your AWS account. You can store plain text or encrypted data like configuration parameters, credentials, and license keys. Once stored, you can configure granular access to specify who can obtain these parameters in your application, adding another layer of security to protect your data.

Parameter Store is a good choice for use cases in which you need hierarchical storage for configuration data management across your account. For example, you can store database access credentials (username and password) in Parameter Store, encrypt them with an encryption key managed by AWS Key Management Service, and grant EC2 instances running your application permissions to read and decrypt those credentials.
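For instance, storing and then retrieving an encrypted parameter could look roughly like this (the parameter name, value, and KMS key alias are placeholders):

# Store a database password as an encrypted SecureString parameter
aws ssm put-parameter --name /myapp/prod/db-password \
    --value "example-password" --type SecureString --key-id alias/myapp-key

# Read and decrypt it from an identity allowed to call ssm:GetParameter and kms:Decrypt
aws ssm get-parameter --name /myapp/prod/db-password --with-decryption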

For more information on using AWS Systems Manager Parameter Store, visit this link.

AWS Secrets Manager

AWS Secrets Manager is a service that allows you to centrally manage the lifecycle of secrets used in your organization, including rotation, audits, and access control. By enabling you to rotate secrets automatically, Secrets Manager can help you meet your security and compliance requirements. Secrets Manager also offers built-in integration for MySQL, PostgreSQL, and Amazon Aurora on Amazon RDS and can be extended to other services.
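As a sketch, creating a secret, retrieving it, and turning on automatic rotation could look like this (the secret name, value, and rotation Lambda ARN are placeholders):

# Create a secret
aws secretsmanager create-secret --name prod/myapp/db \
    --secret-string '{"username":"admin","password":"example-password"}'

# Retrieve it at runtime from an identity allowed to call GetSecretValue
aws secretsmanager get-secret-value --secret-id prod/myapp/db

# Enable automatic rotation with a rotation Lambda function
aws secretsmanager rotate-secret --secret-id prod/myapp/db \
    --rotation-lambda-arn arn:aws:lambda:us-east-1:111122223333:function:my-rotation-fn \
    --rotation-rules AutomaticallyAfterDays=30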

For more information about using AWS Secrets Manager to store and retrieve secrets, visit this link.

Amazon Cognito

Amazon Cognito lets you add user registration, sign-in, and access management features to your web and mobile applications.

Cognito can be used as an Identity Provider (IdP), where it stores and maintains users and credentials securely for your applications, or it can be integrated through OpenID Connect and SAML with other identity providers, including popular web identity providers like Amazon.com.

Using Amazon Cognito, you can generate temporary access credentials for your clients to access AWS services, eliminating the need to store long-term credentials in client applications.

To learn more about using Amazon Cognito as an IdP, visit our developer guide to Amazon Cognito User Pools. If you’re interested in information about using Amazon Cognito with a third party IdP, review our guide to Amazon Cognito Identity Pools (Federated Identities).

AWS Trusted Advisor

AWS Trusted Advisor is a service that provides a real-time review of your AWS account and offers guidance on how to optimize your resources to reduce cost, increase performance, expand reliability, and improve security.

The “Security” section of AWS Trusted Advisor should be reviewed on a regular basis to evaluate the health of your AWS account. Currently, there are multiple security-specific checks, ranging from IAM access keys that haven’t been rotated to insecure security groups. Trusted Advisor is a tool to help you more easily perform a daily or weekly review of your AWS account.

git-secrets

git-secrets, available from the AWS Labs GitHub account, helps you avoid committing passwords and other sensitive credentials to a git repository. It scans commits, commit messages, and --no-ff merges to prevent your users from inadvertently adding secrets to your repositories.
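A minimal setup in a repository could look like this (assuming git-secrets is already installed on the machine):

# Install the git hooks into the current repository
git secrets --install

# Add the AWS credential patterns to the list of prohibited patterns
git secrets --register-aws

# Scan the repository for anything that has already slipped in
git secrets --scan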

Conclusion

In this blog post, I’ve introduced some options to replace long-term credentials in your applications with temporary access credentials that can be generated using various tools and services on the AWS platform. Using temporary credentials can reduce the risk of falling victim to a compromised environment, further protecting your business.

I also discussed the concept of least privilege and provided some helpful services and procedures to maintain and audit the permissions given to various identities in your environment.

If you have questions or feedback about this blog post, submit comments in the Comments section below, or contact AWS Support.

Want more AWS Security how-to content, news, and feature announcements? Follow us on Twitter.

Author

Mahmoud Matouk

Mahmoud is part of our worldwide public sector Solutions Architecture team, helping higher education customers build innovative, secure, and highly available solutions using various AWS services.

Author

Joe Chapman

Joe is a Solutions Architect with Amazon Web Services. He primarily serves AWS EdTech customers, providing architectural guidance and best practice recommendations for new and existing workloads. Outside of work, he enjoys spending time with his wife and dog, and finding new adventures while traveling the world.

How to centralize DNS management in a multi-account environment

Post Syndicated from Mahmoud Matouk original https://aws.amazon.com/blogs/security/how-to-centralize-dns-management-in-a-multi-account-environment/

In a multi-account environment where you require connectivity between accounts, and perhaps connectivity between cloud and on-premises workloads, the demand for a robust Domain Name Service (DNS) that’s capable of name resolution across all connected environments will be high.

The most common solution is to implement local DNS in each account and use conditional forwarders for DNS resolutions outside of this account. While this solution might be efficient for a single-account environment, it becomes complex in a multi-account environment.

In this post, I will provide a solution to implement central DNS for multiple accounts. This solution reduces the number of DNS servers and forwarders needed to implement cross-account domain resolution. I will show you how to configure this solution in four steps:

  1. Set up your Central DNS account.
  2. Set up each participating account.
  3. Create Route53 associations.
  4. Configure on-premises DNS (if applicable).

Solution overview

In this solution, you use AWS Directory Service for Microsoft Active Directory (AWS Managed Microsoft AD) as a DNS service in a dedicated account in a Virtual Private Cloud (DNS-VPC).

The DNS service included in AWS Managed Microsoft AD uses conditional forwarders to forward domain resolution to either Amazon Route 53 (for domains in the awscloud.com zone) or to on-premises DNS servers (for domains in the example.com zone). You’ll use AWS Managed Microsoft AD as the primary DNS server for other application accounts in the multi-account environment (participating accounts).

A participating account is any application account that hosts a VPC and uses the centralized AWS Managed Microsoft AD as the primary DNS server for that VPC. Each participating account has a private hosted zone with a unique zone name to represent this account (for example, business_unit.awscloud.com).

You associate DNS-VPC with the unique hosted zone in each of the participating accounts. This allows AWS Managed Microsoft AD to use Route 53 to resolve all registered domains in the private hosted zones of participating accounts.

The following diagram shows how the various services work together:
 

Figure 1: Diagram showing the relationship between all the various services

 

In this diagram, all VPCs in participating accounts use Dynamic Host Configuration Protocol (DHCP) option sets. The option sets configure EC2 instances to use the centralized AWS Managed Microsoft AD in DNS-VPC as their default DNS Server. You also configure AWS Managed Microsoft AD to use conditional forwarders to send domain queries to Route53 or on-premises DNS servers based on query zone. For domain resolution across accounts to work, we associate DNS-VPC with each hosted zone in participating accounts.

If, for example, server.pa1.awscloud.com needs to resolve addresses in the pa3.awscloud.com domain, the sequence shown in the following diagram happens:
 

Figure 2: How domain resolution across accounts works

 

  • 1.1: server.pa1.awscloud.com sends a domain name lookup for server.pa3.awscloud.com to its default DNS server. The request is forwarded to the DNS server defined in the DHCP options set (AWS Managed Microsoft AD in DNS-VPC).
  • 1.2: AWS Managed Microsoft AD forwards name resolution to Route53 because it’s in the awscloud.com zone.
  • 1.3: Route53 resolves the name to the IP address of server.pa3.awscloud.com because DNS-VPC is associated with the private hosted zone pa3.awscloud.com.

Similarly, if server.example.com needs to resolve server.pa3.awscloud.com, the following happens:

  • 2.1: server.example.com sends a domain name lookup for server.pa3.awscloud.com to the on-premises DNS server.
  • 2.2: Using a conditional forwarder, the on-premises DNS server forwards the domain lookup to AWS Managed Microsoft AD in DNS-VPC.
  • 1.2: AWS Managed Microsoft AD forwards name resolution to Route53 because it’s in the awscloud.com zone.
  • 1.3: Route53 resolves the name to the IP address of server.pa3.awscloud.com because DNS-VPC is associated with the private hosted zone pa3.awscloud.com.

Step 1: Set up a centralized DNS account

In previous AWS Security Blog posts, Drew Dennis covered a couple of options for establishing DNS resolution between on-premises networks and Amazon VPC. In one of those posts, he showed how you can use AWS Managed Microsoft AD (provisioned with AWS Directory Service) to provide DNS resolution with forwarding capabilities.

To set up a centralized DNS account, you can follow the same steps in Drew’s post to create AWS Managed Microsoft AD and configure the forwarders to send DNS queries for awscloud.com to the default, VPC-provided DNS and to forward example.com queries to the on-premises DNS server.

Here are a few considerations while setting up central DNS:

  • The VPC that hosts AWS Managed Microsoft AD (DNS-VPC) will be associated with all private hosted zones in participating accounts.
  • To be able to resolve domain names across AWS and on-premises, connectivity through Direct Connect or VPN must be in place.

Step 2: Set up participating accounts

The steps I suggest in this section should be applied individually in each application account that’s participating in central DNS resolution.

  1. Create the VPC(s) that will host your resources in each participating account.
  2. Create VPC Peering between local VPC(s) in each participating account and DNS-VPC.
  3. Create a private hosted zone in Route 53. Hosted zone domain names must be unique across all accounts. In the diagram above, we used pa1.awscloud.com / pa2.awscloud.com / pa3.awscloud.com. You could also use a combination of environment and business unit: for example, you could use pa1.dev.awscloud.com to achieve uniqueness.
  4. Associate VPC(s) in each participating account with the local private hosted zone.

The next step is to change the default DNS servers on each VPC by using a DHCP options set (a CLI sketch follows the steps below):

  1. Follow these steps to create a new DHCP options set. In the DNS Servers field, make sure to enter the private IP addresses of the two AWS Managed Microsoft AD servers that were created in DNS-VPC:

    Figure 3: The “Create DHCP options set” dialog box

  2. Follow these steps to assign the DHCP options set to your VPC(s) in each participating account.
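If you’d rather script this, the equivalent CLI calls could look roughly like the following (the DNS server IPs, options set ID, and VPC ID are placeholders):

# Create a DHCP options set pointing at the two AWS Managed Microsoft AD DNS servers in DNS-VPC
aws ec2 create-dhcp-options \
    --dhcp-configurations "Key=domain-name-servers,Values=10.0.0.10,10.0.1.10"

# Associate the new options set with a VPC in the participating account
aws ec2 associate-dhcp-options --dhcp-options-id dopt-0123456789abcdef0 --vpc-id vpc-0123456789abcdef0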

Step 3: Associate DNS-VPC with private hosted zones in each participating account

The next steps will associate DNS-VPC with the private hosted zone in each participating account. This allows instances in DNS-VPC to resolve domain records created in these hosted zones. If you need them, here are more details on associating a private hosted zone with a VPC in a different account.

  1. In each participating account, create the authorization using the private hosted zone ID from the previous step, the region, and the VPC ID that you want to associate (DNS-VPC).
     
    aws route53 create-vpc-association-authorization --hosted-zone-id <hosted-zone-id> --vpc VPCRegion=<region>,VPCId=<vpc-id>
     
  2. In the centralized DNS account, associate DNS-VPC with the hosted zone in each participating account.
     
    aws route53 associate-vpc-with-hosted-zone --hosted-zone-id <hosted-zone-id> --vpc VPCRegion=<region>,VPCId=<vpc-id>
     

After completing these steps, AWS Managed Microsoft AD in the centralized DNS account should be able to resolve domain records in the private hosted zone in each participating account.

Step 4: Set up on-premises DNS servers

This step is necessary if you would like to resolve AWS private domains from on-premises servers. The task comes down to configuring on-premises forwarders to send DNS queries for all domains in the awscloud.com zone to AWS Managed Microsoft AD in DNS-VPC.

The steps to implement conditional forwarders vary by DNS product. Follow your product’s documentation to complete this configuration.

Summary

I introduced a simplified solution to implement central DNS resolution in a multi-account environment that could also be extended to support DNS resolution between on-premises resources and AWS. This can help reduce operational effort and the number of resources needed to implement cross-account domain resolution.

If you have feedback about this post, submit comments in the Comments section below. If you have questions about this post, start a new thread on the AWS Directory Service forum or contact AWS Support.

Want more AWS Security news? Follow us on Twitter.