Tag Archives: RDS for SQL Server

Automate ETL jobs between Amazon RDS for SQL Server and Azure Managed SQL using AWS Glue Studio

Post Syndicated from Daniel Maldonado original https://aws.amazon.com/blogs/big-data/automate-etl-jobs-between-amazon-rds-for-sql-server-and-azure-managed-sql-using-aws-glue-studio/

Nowadays many customers are following a multi-cloud strategy. They might choose to use various cloud-managed services, such as Amazon Relational Database Service (Amazon RDS) for SQL Server and Azure SQL Managed Instances, to perform data analytics tasks, but still use traditional extract, transform, and load (ETL) tools to integrate and process the data. However, traditional ETL tools may require you to develop custom scripts, which makes ETL automation difficult.

In this post, I show you how to automate ETL jobs between Amazon RDS for SQL Server and Azure SQL Managed Instances using AWS Glue Studio, which is part of AWS Glue, a fully managed, serverless data integration service. AWS Glue Studio has a graphical interface that makes it easy to create, run, and monitor ETL jobs, and you can create a schedule to run your jobs at specific times.

Solution overview

To move data from one database to another, different services are available, either on premises or in the cloud, which vary in bandwidth limits, support for ongoing changes (change data capture), handling of schema and table modifications, and other features. Beyond moving the data, we need to apply advanced data transformations and monitor and automate the ETL jobs. This is where AWS Glue Studio helps us facilitate these activities.

As shown in the following diagram, we use AWS Glue Studio as the middleware to pull data from the source database (in this case an Azure SQL Managed Instance), then create and automate the ETL job using one of the pre-built transformations in AWS Glue Studio. Finally, we load the data to the target database (in this case an RDS for SQL Server instance).

The solution workflow consists of the following steps:

  1. Create connections for the source and target databases.
  2. Create and run AWS Glue crawlers.
  3. Create and run an ETL job that transforms the data and loads it from source to target.
  4. Schedule the ETL job to run automatically.
  5. Monitor the ETL job.

Prerequisites

Complete the following prerequisite steps:

  1. Install SQL Server Management Studio (SSMS) or an equivalent client tool.
  2. Set up a VPN connection between Amazon Virtual Private Cloud (Amazon VPC) and the Azure private subnet.
  3. Create a security group for AWS Glue ENI in your VPC.
  4. Create an AWS Identity and Access Management (IAM) role for AWS Glue. For instructions, refer to Setting up IAM permissions for AWS Glue.
  5. Open the appropriate firewall ports in the Azure private subnet.
  6. Create a source database table (Azure SQL Managed Instance). You can deploy the Azure database instance using the following QuickStart. For testing purposes, I import the public AdventureWorks sample database and use the dbo.Employee table. See the following code:
    --Query table
    SELECT * FROM [AdventureWorksLT2019].[dbo].[Employee]

  7. Create the target database table (Amazon RDS for SQL Server). To deploy the RDS instance, refer to Create and Connect to a Microsoft SQL Server Database with Amazon RDS. You can create an empty database and table with the following statements. This is the table where the data coming from Azure will be stored.
--Create database
CREATE DATABASE AdventureWorksonRDS;
GO
--Create table
USE AdventureWorksonRDS;
GO
CREATE TABLE Employee
(EmpID INT NOT NULL,
EmpName VARCHAR(50) NOT NULL,
Designation VARCHAR(50) NULL,
Department VARCHAR(50) NULL,
JoiningDate DATETIME NULL,
CONSTRAINT [PK_Employee] PRIMARY KEY CLUSTERED (EmpID));

--Query table
SELECT * FROM [AdventureWorksonRDS].[dbo].[Employee]

Create connections

The first step is to populate our AWS Glue Data Catalog with the schema information coming from our source and target data sources.

To do that, we first create connections. A connection is a Data Catalog object that stores connection information for a particular data store. Connections store login credentials, URI strings, VPC information, and more. Creating connections in the Data Catalog saves the effort of having to specify the connection details every time you create a crawler or job.

Create a connection for Azure SQL Managed Instance

To create the connection to our source database, complete the following steps:

  1. On the AWS Glue console, choose AWS Glue Studio.
  2. In the navigation pane of the AWS Glue Studio console, choose Connectors.
  3. Choose Create connection.
  4. For Name, enter AzureSQLManaged.
  5. For Connection type, choose JDBC.
  6. For JDBC URL, use the SQL Server syntax jdbc:protocol://host:port;database=db_name.

You can find the host and database name on the Azure SQL Managed Instance service console, on the Overview page. For this specific example, we use the following information for our Azure SQL instance:

    • Protocol – sqlserver
    • Host – adi-qa-sql-managed-instance-test.public.xxxxxxxxxxxx.database.windows.net
    • Port – 3342
    • Database name – AdventureWorksLT2019

  7. Enter your user name and password.
  8. Choose Create connection.

Create a connection for Amazon RDS for SQL Server

To create a connection for our target data source, complete the following steps:

  1. On the AWS Glue Studio console, choose Connectors in the navigation pane.
  2. Choose Create connection.
  3. For Name, enter AWSRDSforSQL.
  4. For Connection type, choose Amazon RDS.
  5. For Database engine, choose Microsoft SQL Server.
  6. For Database instances, choose your RDS DB instance.
  7. For Database name, enter AdventureWorksonRDS.
  8. Enter your user name and password.
  9. Choose Create connection.

You can now see the two connections created in the Connections section.
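
If you prefer to script this step, connections can also be created with the AWS CLI. The following is a minimal sketch for the Azure-side JDBC connection, built from the values listed earlier; the subnet, security group, Availability Zone, and credentials are placeholders you would replace with your own:

# Sketch: create the Azure SQL Managed Instance JDBC connection (placeholders for network and credentials)
$ aws glue create-connection --connection-input '{
    "Name": "AzureSQLManaged",
    "ConnectionType": "JDBC",
    "ConnectionProperties": {
        "JDBC_CONNECTION_URL": "jdbc:sqlserver://adi-qa-sql-managed-instance-test.public.xxxxxxxxxxxx.database.windows.net:3342;database=AdventureWorksLT2019",
        "USERNAME": "your-user",
        "PASSWORD": "your-password"
    },
    "PhysicalConnectionRequirements": {
        "SubnetId": "subnet-0123456789abcdef0",
        "SecurityGroupIdList": ["sg-0123456789abcdef0"],
        "AvailabilityZone": "us-east-1a"
    }
}'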

Create and run AWS Glue crawlers

You can use a crawler to populate the AWS Glue Data Catalog with tables. This is the most common method used by most AWS Glue users. A crawler can crawl multiple data stores in a single run. Upon completion, it updates the Data Catalog with the tables it found. The ETL jobs that you define in AWS Glue use these Data Catalog tables as sources and targets.

Create a crawler for Azure SQL Managed Instance

To create a crawler for our source database, complete the following steps:

  1. On the AWS Glue console, choose Crawlers in the navigation pane.
  2. Choose Create crawler.
  3. If the data hasn’t been mapped into an AWS Glue table, select Not yet and choose Add a data source.
  4. For Data source, choose JDBC.
  5. For Connection, choose AzureSQLManaged.
  6. For Include path, specify the path of the database including the schema: AdventureWorksLT2019/dbo/%.
  7. Choose Add a JDBC data source.
  8. Choose Next.
  9. Choose the IAM role created as part of the prerequisites and choose Next.
  10. Choose Add database to create the target database in the AWS Glue Data Catalog.
  11. For Name, enter azuresqlmanaged_db.
  12. Choose Create database.
  13. For Target database, choose azuresqlmanaged_db.
  14. Choose Next.
  15. Review if everything looks correct and choose Create crawler.

Create a crawler for Amazon RDS for SQL Server

Repeat the crawler creation steps to create the crawler for the target RDS for SQL Server database, using the following information (an equivalent AWS CLI sketch follows the list):

  • Crawler name – AmazonRDSSQL_Crawler
  • Data source – JDBC
  • Connection – AWSRDSforSQL
  • Include path – AdventureWorksonRDS/dbo/%
  • IAM role – AWSGlueServiceRoleDefault
  • Database name – amazonrdssql_db
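
If you script your setup, the same target-side crawler can be defined with the AWS CLI. This is a sketch using the values above and assumes the IAM role created in the prerequisites:

# Sketch: create the crawler for the RDS for SQL Server target
$ aws glue create-crawler \
    --name AmazonRDSSQL_Crawler \
    --role AWSGlueServiceRoleDefault \
    --database-name amazonrdssql_db \
    --targets '{"JdbcTargets":[{"ConnectionName":"AWSRDSforSQL","Path":"AdventureWorksonRDS/dbo/%"}]}'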

Run the crawlers

Now it’s time to run the crawlers.

  1. On the AWS Glue console, choose Crawlers in the navigation pane.
  2. Select the crawlers you created and choose Run.
  3. When the crawlers are complete, choose Databases in the navigation pane. Here you can find the databases discovered by the crawlers.
  4. Choose Tables in the navigation pane and explore the tables discovered by the crawlers; note that the data type was correctly identified as SQL Server.
  5. Choose the table adventureworkslt2019_dbo_employee and review the schema created for the data source.
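
The crawlers can also be started and checked from the AWS CLI; a minimal sketch for one of them:

# Start a crawler and poll its state (READY means the run has finished)
$ aws glue start-crawler --name AmazonRDSSQL_Crawler
$ aws glue get-crawler --name AmazonRDSSQL_Crawler --query 'Crawler.State'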

Create and run an ETL job

Now that we have crawled our source and target databases, and we have the data in the AWS Glue Data Catalog, we can create an ETL job to load and transform this data.

  1. On the AWS Glue Studio console, choose Jobs in the navigation pane.
  2. Select Visual with a blank canvas to use a visual interface to create our ETL jobs.
  3. Choose Create.
  4. On the Source menu, choose AWS Glue Data Catalog.
  5. On the Data source properties tab, specify the database and table (for this post, azuresqlmanaged_db and adventureworkslt2019_dbo_employee).
  6. On the Transform menu, choose Apply mapping to map the source fields to the target database.
  7. On the Transform tab, you can see the data fields to be loaded, and you even can drop some of them if needed.
  8. On the Target menu, choose AWS Glue Data Catalog.
  9. On the Data target properties tab, choose the database and table where you want to load the transformed data (for this post, amazonrdssql_db and adventureworksrds_dbo_employee).
  10. On the Job details tab, for Name, enter ETL_Azure_to_AWS.
  11. For IAM Role, choose the appropriate role.
  12. Choose Save.
  13. Choose Run to run the job.
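
Besides the Run button, you can also start the job and check its status from the AWS CLI; a minimal sketch:

# Start the job; the command returns a JobRunId you can use to poll the run status
$ aws glue start-job-run --job-name ETL_Azure_to_AWS
# Replace jr_exampleid with the JobRunId returned by the previous command
$ aws glue get-job-run --job-name ETL_Azure_to_AWS --run-id jr_exampleid --query 'JobRun.JobRunState'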

If the ETL job ran successfully, the data from the source database (Azure SQL Managed Instance) has been loaded into the target database (Amazon RDS for SQL Server). To confirm, connect to the target database using SQL Server Management Studio (SSMS) and query the previously empty AdventureWorksonRDS database and dbo.Employee table. It should now contain the data coming from the Azure SQL Managed Instance.

--Query table
SELECT * FROM [AdventureWorksonRDS].[dbo].[Employee]

Schedule your ETL job

In AWS Glue Studio, you can create a schedule to have your jobs run at specific times. By default, each scheduled run reimports the full dataset; you can use job bookmarks to perform incremental loads instead. You can schedule your ETL jobs on an hourly, daily, weekly, monthly, or custom basis, depending on your needs. To schedule a job, complete the following steps:

  1. On the AWS Glue Studio console, navigate to the job you created.
  2. On the Schedules tab, choose Create schedule.
  3. For Name, enter a name (for example, dbo_employee_daily_load).
  4. Choose your preferred frequency, start hour, and minute of the hour. For this post, we schedule it daily at 3:00 UTC.
  5. For Description, enter an optional description.
  6. Choose Create schedule.

Confirm on the Schedules tab that the schedule was created and activated successfully.
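
If you manage your jobs as code, an equivalent schedule can be created as a scheduled trigger with the AWS CLI. This is a sketch; the cron expression matches the daily 3:00 UTC schedule above:

# Sketch: daily trigger at 3:00 UTC for the job created earlier
$ aws glue create-trigger \
    --name dbo_employee_daily_load \
    --type SCHEDULED \
    --schedule "cron(0 3 * * ? *)" \
    --actions '[{"JobName":"ETL_Azure_to_AWS"}]' \
    --start-on-creation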

You have now automated your ETL job to run at your desired frequency.

Monitor your ETL job

The job monitoring dashboard provides an overall summary of the job runs, with totals for the jobs with a status of Running, Canceled, Success, or Failed.

The Runs tab shows the jobs for the specified date range and filters. You can filter the jobs on additional criteria, such as status, worker type, job type, and job name.

Conclusion

In this post, I went through the steps to automate ETL jobs using AWS Glue Studio, which is a user-friendly graphical interface to perform data integration tasks such as discovering and extracting data from various sources; enriching, cleaning, normalizing, and combining data; and loading and organizing data in databases, data warehouses, and data lakes. You can easily find and access this data using the AWS Glue Data Catalog. Data engineers and ETL developers can visually create, run, and monitor ETL workflows with a few clicks in AWS Glue Studio.


About the author

Daniel Maldonado is an AWS Solutions Architect, specializing in Microsoft workloads and big data technologies, and focused on helping customers migrate their applications and data to AWS. Daniel has over 13 years of experience working with information technologies and enjoys helping clients reap the benefits of running their workloads in the cloud.

New – Amazon RDS Custom for SQL Server Is Generally Available

Post Syndicated from Channy Yun original https://aws.amazon.com/blogs/aws/new-amazon-rds-custom-for-sql-server-is-generally-available/

On October 26, 2021, we launched Amazon RDS Custom for Oracle, a managed database service for applications that require customization of the underlying operating system and database environment. RDS Custom lets you access and customize your database server host and operating system, for example, by applying special patches and changing the database software settings to support third-party applications that require privileged access.

Today, I am happy to announce the general availability of Amazon RDS Custom for SQL Server to support applications that have dependencies on specific configurations and third-party applications that require customizations in corporate, e-commerce, and content management systems, such as Microsoft SharePoint.

With RDS Custom for SQL Server, you can enable features that require elevated privileges like SQL Common Language Runtime (CLR), install specific drivers to enable heterogeneous linked servers, or have more than 100 databases per instance.
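
For example, enabling CLR integration is a standard sp_configure change; the statements below are generic SQL Server syntax shown as an illustration, run from a login with the required privileges:

--Illustration: enable SQL CLR integration
EXEC sp_configure 'clr enabled', 1;
RECONFIGURE;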

Through the time-saving benefits of a managed service, RDS Custom for SQL Server frees you up to focus on more business-impacting, strategic activities. Automated backups and other operational tasks let you rest easy, knowing your data is safe and ready to be recovered if needed.

Getting Started with RDS Custom for SQL Server
Get started by creating a DB instance of RDS Custom for SQL Server from an orderable engine version offered by RDS Custom. You can optionally access the server host to customize your software via AWS Systems Manager or a remote desktop client. Your application connects to the RDS Custom DB instance endpoint.

Before creating and connecting your custom DB instance for SQL Server, make sure that you meet some prerequisites, such as configuring the AWS Identity and Access Management (IAM) role and Amazon Virtual Private Cloud  (Amazon VPC).

Choose Create database in the Databases menu to create your custom DB instance for SQL Server in the RDS console. For the database creation method, select Standard create. Set Engine options to Microsoft SQL Server and choose Amazon RDS Custom for the database management type.

For Edition, choose the DB engine edition that you want to use: Enterprise, Standard, or Web. For Version, the default of SQL Server 2019 is selected.

For Settings, enter your favorite unique name for the DB instance identifier and your master username and password. By default, the new instance uses an automatically generated password for the master user.

In DB instance size, choose a DB instance class; the supported classes depend on the DB engine edition:

  • Enterprise Edition – db.r5.xlarge to db.r5.24xlarge, db.m5.xlarge to db.m5.24xlarge
  • Standard Edition – db.r5.large to db.r5.24xlarge, db.m5.large to db.m5.24xlarge
  • Web Edition – db.r5.large to db.r5.4xlarge, db.m5.large to db.m5.4xlarge

See Settings for DB instances in the Amazon RDS User Guide to learn more about the remaining settings. Choose Create database. After creating the DB instance, the details for the new RDS Custom DB instance appear on the RDS console.

Alternatively, you can create an RDS Custom DB instance by using the create-db-instance command in the AWS Command Line Interface (AWS CLI).

$ aws rds create-db-instance \
	--engine custom-sqlserver-se \
	--engine-version 15.00.4073.23.v1 \
	--db-instance-identifier channy-custom-db \
	--db-instance-class db.m5.xlarge \
	--allocated-storage 20 \
	--db-subnet-group mydbsubnetgroup \
	--master-username myuser \
	--master-user-password mypassword \
	--backup-retention-period 3 \
	--no-multi-az \
	--port 8200 \
	--kms-key-id mykmskey \
	--custom-iam-instance-profile AWSRDSCustomInstanceProfile

After you create your RDS Custom DB instance, you can connect to it using AWS Systems Manager Session Manager or an RDP client. Make sure that the Amazon VPC security group associated with your DB instance permits inbound connections on port 3389 for TCP to allow RDP connections.

You need the key pair associated with the instance to connect to the custom DB instance via RDP. RDS Custom creates the key pair for you. The pair name uses the prefix do-not-delete-rds-custom-DBInstanceIdentifier. AWS Secrets Manager stores your private key as a secret. Choose the secret that has the same name as your key pair and retrieve the secret value to decrypt the password later.
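
If you prefer the CLI, you could retrieve that secret with AWS Secrets Manager directly; this is a sketch, and the secret ID is a placeholder for the name that matches your key pair:

# Sketch: retrieve the private key stored by RDS Custom (replace the secret ID with yours)
$ aws secretsmanager get-secret-value \
    --secret-id 'do-not-delete-rds-custom-channy-custom-db-<suffix>' \
    --query SecretString --output text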

In the EC2 console, look for the name of your EC2 instance, and then choose the instance ID associated with your DB instance ID, for example, channy-custom-db-*. Select your custom DB instance, and then choose Connect. On the Connect to instance page, choose the RDP client tab, and then choose Get password with your private key as a secret.

When you connect an RDP client with a downloaded remote desktop file and decrypted password, you can log in to the Windows Server and customize your SQL Server.

You can use AWS Systems Manager Session Manager to start a session with an instance in your account. After the session is started, you can run PowerShell commands as you would for any other connection type. See Connect to your Windows instance in the Amazon EC2 User Guide for more information.

Things to Know
Here are a couple of things to keep in mind about managing your DB instance:

Pausing RDS Custom Automation: RDS Custom for SQL Server automatically provides monitoring and instance recovery for your RDS Custom DB instance. If you need to customize the instance, then pause RDS Custom automation for a specified period. The pause makes sure that your customizations don’t interfere with RDS Custom automation. To pause or resume RDS Custom automation, you can set RDS Custom automation mode to Paused with the pause duration that you want (in minutes, default 60 minutes to 1,440 minutes maximum).
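
As a sketch, pausing automation from the AWS CLI would look like the following for the example instance created earlier; the pause duration is specified in minutes:

# Sketch: pause RDS Custom automation for 2 hours on the example instance
$ aws rds modify-db-instance \
    --db-instance-identifier channy-custom-db \
    --automation-mode all-paused \
    --resume-full-automation-mode-minutes 120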

High Availability (HA): To support replication between RDS Custom for SQL Server instances, you can configure HA with Always On Availability Groups (AGs). We recommend that you set up the primary DB instance to synchronously replicate data to the standby instances in different Availability Zones (AZs) to be resilient to AZ failures. Moreover, you can migrate data by configuring HA for your on-premises instance and then failing over or switching over to the RDS Custom standby database.

Custom DB Management: Just like Amazon RDS, RDS Custom for SQL Server creates automated backups by taking snapshots of the DB instance. Incremental snapshots are used to restore DB instances to a specific point in time. Furthermore, all changes and customizations to the underlying operating system are automatically logged for audit purposes using Systems Manager and AWS CloudTrail. See Troubleshooting an Amazon RDS Custom DB instance in the Amazon RDS User Guide to learn more.

Available Now
Amazon RDS Custom for SQL Server is now available in the US East (Ohio), US East (N. Virginia), US West (Oregon), Asia Pacific (Singapore), Asia Pacific (Sydney), Asia Pacific (Tokyo), EU (Frankfurt), EU (Ireland), and EU (Stockholm) Regions.

Look at the product page and documentation of Amazon RDS Custom to learn more. Please send us feedback either in the AWS forum for Amazon RDS or through your usual AWS support contacts.

Channy

Better performance for less: AWS continues to beat Azure on SQL Server price/performance

Post Syndicated from Fred Wurden original https://aws.amazon.com/blogs/compute/sql-server-runs-better-on-aws/

By Fred Wurden, General Manager, AWS Enterprise Engineering (Windows, VMware, RedHat, SAP, Benchmarking)

AWS R5b.8xlarge delivers better performance at lower cost than Azure E64_32s_v4 for a SQL Server workload

In this blog, we will review a recent benchmark that Principled Technologies published on 2/25. The benchmark found that an Amazon Elastic Compute Cloud (Amazon EC2) R5b.8xlarge instance delivered better performance for a SQL Server workload at a lower cost when directly tested against an Azure E64_32s_v4 VM.

Behind the study: Understanding how SQL Server performed better, at a lower cost, on an AWS EC2 R5b instance

Principled Technologies tested an online transaction processing (OLTP) workload for SQL Server 2019 on both an R5b instance on Amazon EC2 with Amazon Elastic Block Store (EBS) as storage and Azure E64_32s_v4. This particular Azure VM was chosen as an equivalent to the R5b instance, as both instances have comparable specifications for input/output operations per second (IOPS) performance, use Intel Xeon processors from the same generation (Cascade Lake), and offer the same number of cores (32). For storage, Principled Technologies mirrored storage configurations across the Azure VM and the EC2 instance (which used Amazon Elastic Block Store (EBS)), maxing out the IOPS specs on each while offering a direct comparison between instances.

Test Configurations

Source: Principled Technologies

When benchmarking, Principled Technologies ran a TPC-C-like OLTP workload from HammerDB v3.3 on both instances, testing against new orders per minute (NOPM) performance. NOPM shows the number of new-order transactions completed in one minute as part of a serialized business workload. HammerDB claims that because NOPM is “independent of any particular database implementation [it] is the recommended primary metric to use.”

The results: SQL Server on AWS EC2 R5b delivered 2x the performance of the Azure VM and was 62% less expensive

Graphs that show AWS instance outperformed the Azure instance

Source: Principled Technologies

These test results from the Principled Technologies report show the performance and price/performance comparisons. The performance metric is new orders per minute (NOPM); higher is better. The price/performance calculations are based on the cost of on-demand, License Included SQL Server instances and the storage needed to achieve 1,000 NOPM; lower is better.

An EC2 r5b.8xlarge instance powered by an Intel Xeon Scalable processor delivered better SQL Server NOPM performance on the HammerDB benchmark and a lower price per 1,000 NOPM than an Azure E64_32s_v4 VM powered by similar Intel Xeon Scalable processors.

On top of that, AWS’s storage price/performance exceeded Azure’s. The Azure managed disks offered 53 percent more storage than the EBS volumes, but the EC2 instance with EBS storage cost 24 percent less than the Azure VM with managed disks. Even after adjusting for the difference in storage capacity (something customers cannot actually do), EBS would have cost 13 percent less per GB than the Azure managed disks.

Why AWS is the best cloud to run your Windows and SQL Server workloads

To us, these results aren’t surprising. In fact, they’re in line with the success that customers have found running Windows on AWS for over 12 years. Customers like Pearson and Expedia have found better performance and enhanced cost savings by moving their Windows, SQL Server, and .NET workloads to AWS. In fact, RepricerExpress migrated its Windows and SQL Server environments from Azure to AWS to slash outbound bandwidth costs while gaining agility and performance.

Not only do we offer better price-performance for your Windows workloads, but we also offer better ways to run Windows in the cloud. Whether you want to rehost your databases to EC2, move to managed with Amazon Relational Database Service (Amazon RDS) for SQL Server, or even modernize to cloud-native databases, AWS stands ready to help you get the most out of the cloud.

To learn more on migrating Windows Server or SQL Server, visit Windows on AWS. For more stories about customers who have successfully migrated and modernized SQL Server workloads with AWS, visit our Customer Success page. Contact us to start your migration journey today.

Use a single AWS Managed Microsoft AD for Amazon RDS for SQL Server instances in multiple Regions

Post Syndicated from Jeremy Girven original https://aws.amazon.com/blogs/security/use-a-single-aws-managed-microsoft-ad-for-amazon-rds-for-sql-server-instances-in-multiple-regions/

Many Amazon Web Services (AWS) customers use Active Directory to centralize user authentication and authorization for a variety of applications and services. For these customers, Active Directory is a critical piece of their IT infrastructure.

AWS offers AWS Directory Service for Microsoft Active Directory, also known as AWS Managed Microsoft AD, to provide a highly available and resilient Active Directory service that is built on Microsoft Active Directory.

AWS also offers Amazon Relational Database Service (Amazon RDS) for SQL Server. Amazon RDS enables you to prioritize application development by managing time-consuming database administration tasks including provisioning, backups, software patching, monitoring, and hardware scaling. If you require Windows authentication with Amazon RDS for SQL Server, Amazon RDS for SQL Server instances need to be integrated with AWS Managed Microsoft AD.

With the release of AWS Managed Microsoft AD cross-Region support, you only need one distinct AWS Managed Microsoft AD that spans multiple AWS Regions; this simplifies directory management and configuration. Additionally, it simplifies trusts between the AWS Managed Microsoft AD domain and your on-premises domain. Now, only a single trust between your on-premises domain and AWS Managed Microsoft AD domain is required, as compared to the previous pattern of only one AWS Managed Microsoft AD per Region—each of which would require a trust if you wanted to allow on-premises objects access to your AWS Managed Microsoft AD domain. Further, AWS Managed Microsoft AD cross-Region support provides an additional benefit when using your on-premises users and groups with Amazon RDS for SQL Server: You only need a single, one-way, outgoing trust between your multi-Region AWS Managed Microsoft AD and your on-premises domain.

As detailed in this post, to enable AWS Managed Microsoft AD cross-Region support, you create a new AWS Managed Microsoft AD and extend it to multiple Regions (as shown in Figure 1 below). Once you’ve extended your directory, you deploy an Amazon RDS SQL Server instance in each Region, integrating it to the same directory. Finally, you install SQL Server Management Studio (SSMS) on an instance joined to the AWS Managed Microsoft AD directory. You use that instance to connect to the RDS SQL Server instances using the same domain user account.

Figure 1: High level diagram of resources deployed in this post

The architecture in Figure 1 includes a network connection between the Regions. That connection isn’t required for the AWS Managed Microsoft AD to function. If you don’t require network connectivity between your Regions, you can disregard the network link in the diagram. Because this post uses a single Amazon Elastic Compute Cloud (Amazon EC2) instance in one Region, the network connection is needed between the Amazon VPCs in the two Regions to allow that instance to connect to a domain controller in each Region.

Prerequisites for AWS Managed Microsoft AD cross-Region Support

  1. An AWS Managed Microsoft AD deployed in a Region of your choice. If you don’t have one already deployed, you can follow the instructions in Create Your AWS Managed Microsoft AD directory to create one. For this post, I recommend that you use us-east-1.
  2. The VPCs in the two Regions must be peered in order to complete the steps in this blog. Creating and accepting a VPC peering connection has information on how to create a peering connection between Regions. Be aware of unsupported VPC peering configurations.
  3. A Windows Server instance joined to your managed Active Directory domain. Join an EC2 Instance to Your AWS Managed Microsoft AD Directory has instructions if you need assistance.
  4. Install the Active Directory administration tools onto your domain-joined instance. Installing the Active Directory Administration Tools has detailed instructions.

Extend your AWS Managed Microsoft AD to another Region

We’ve made the process to extend your directory to another Region straightforward. There is no cost to add another Region; you only pay for the resources for your directory running in the new Region. See the AWS Directory Service pricing page for additional information on pricing when you add Regions. For example, in this post you will be extending your directory into the us-east-2 Region. There will be an additional cost for the two new domain controllers. Figure 3 shows the additional cost to extend the directory.

Let’s walk through the steps of setting up Windows Authentication with Amazon RDS for SQL Server instances in multiple Regions using a single cross-Region AWS Managed Microsoft AD.

To extend your directory to another Region:

  1. In the AWS Directory Service console navigation pane, choose Directories.

    Note: You should see a list of your AWS Managed Microsoft AD directories.

  2. Choose the Directory ID of the directory you want to expand to another Region.
  3. Go to the Directory details page. In the Multi-region replication section, select Add Region.

    Figure 2: Directory details and new multi-Region replication pane

  4. On the Add region page:
    1. For Region to add, select the Region you want to extend your directory to.
    2. For VPC, select the Amazon Virtual Private Cloud (Amazon VPC) for the new domain controllers to use.
    3. For Subnets, select two unique subnets in the Amazon VPC that you selected in the preceding step.
    4. Once you have everything to your liking, choose Add.
      Figure 3: Add a Region

      In the background, AWS is provisioning two new AWS managed domain controllers in the Region you selected. It could take up to 2 hours for your directory to become available in the Region.

Note: Your managed domain controllers in the home Region are fully functional during this process.

  • On the Directory details page, in Multi-Region replication, the status should be Active when the process has completed. Now you’re ready to deploy your Amazon RDS SQL Server instances.
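
As an aside, if you script your environment, the same Region expansion can be requested with the AWS Directory Service add-region command. This is a sketch; the directory, VPC, and subnet IDs are placeholders:

# Sketch: extend an existing directory into us-east-2 (replace the IDs with yours)
$ aws ds add-region \
    --directory-id d-1234567890 \
    --region-name us-east-2 \
    --vpc-settings VpcId=vpc-0123456789abcdef0,SubnetIds=subnet-0aaa1111,subnet-0bbb2222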

Enable Amazon RDS for SQL Server

Integrating Amazon RDS into AWS Managed Microsoft AD is exactly the same process as it was before the cross-Region feature was released. This post goes through that original process with only one change, which is that you select the same directory ID for both Regions.

Create an Amazon RDS SQL Server instance in each Region using the same directory

The steps for creating an Amazon RDS SQL Server instance in each Region are the same. The following process will create the first instance. Once you’ve completed the process, you change the AWS Management Console Region to the Region you extended your directory to and repeat the process.

To create an Amazon RDS SQL Server instance:

  1. In the AWS Managed Microsoft AD directory primary Region, go to the Amazon RDS console navigation pane and choose Create database.
  2. Choose Microsoft SQL Server.
  3. You can leave the default values, except for the following settings:
    1. Under Settings select Master and Confirm password.
    2. Under Connectivity, expand Additional connectivity configuration:
      1. Choose Create new to create a new VPC security group.
      2. Enter a name in New VPC security group name.
      3. Select No preference for Availability Zone.
      4. Enter 1433 for Database port.
      Figure 4: Connectivity settings

  4. Select the Enable Microsoft SQL Server Windows authentication check box and then choose Browse Directory.

    Figure 5: Enable Microsoft SQL Server Windows authentication selected

  5. Select your directory and select Choose.

    Figure 6: Select a directory

  6. Choose Create database.
  7. Repeat these steps in your expanded Region. Note that the Directory ID will be the same for both Regions. You can complete the next section while your Amazon RDS SQL instances are provisioning.
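
For reference, if you create the instances from the AWS CLI instead of the console, the Windows authentication integration maps to the --domain and --domain-iam-role-name parameters. This is a rough sketch with placeholder values, not a complete set of options:

# Sketch: create an RDS for SQL Server instance joined to the managed directory (placeholders throughout)
$ aws rds create-db-instance \
    --db-instance-identifier rds-sql-multiregion-1 \
    --db-instance-class db.m5.large \
    --engine sqlserver-se \
    --license-model license-included \
    --allocated-storage 100 \
    --master-username admin \
    --master-user-password '<password>' \
    --domain d-1234567890 \
    --domain-iam-role-name rds-directoryservice-access-role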

Create an Active Directory user and group to delegate SQL administrative rights

The following steps walk you through the process of creating an Active Directory user and group for delegation. Following this process, you add the user to the group you just created and to the AWS Delegated Server Administrators group.

To create a user and group:

  1. Log in to the domain-joined instance with a domain user account that has permissions to create Active Directory users and groups.
  2. Choose Start, enter dsa.msc, and press Enter.
  3. In Active Directory Users and Computers, right-click on the Users OU, select New, and then Group. The New Object – Group window pops up.
    1. Fill in the Group name boxes with your choice of name.
    2. For Group Scope, select Domain local.
    3. For Group type, select Security.
    4. Choose OK.
  4. In Active Directory Users and Computers, right-click on your Users OU and select New and then User. The New Object – User window pops up.
    1. Fill in the boxes with your choice of information, and then choose Next.
    2. Enter your choice of password and clear User must change password at next logon, then choose Next.
    3. On the confirmation page, choose Finish.
  5. Double-click on the user you just created. The user account properties window appears.
    1. Select the Member of tab.
    2. Choose Add.
    3. Enter the name of the group that you previously created and choose Check Names. Next, enter AWS Delegated Server Administrators and choose Check Names again. If you do not receive any error, choose OK, and then OK again.
  6. The Member of tab for the user should include the two groups you just added. Choose OK to close the properties page.

Delegate SQL Server permissions in each Region using the Active Directory group you just created

The following steps guide you through the process of modifying the Amazon RDS SQL security group, installing SQL Server Management Studio (SSMS), and delegating permission in SQL to your Active Directory group.

Modify the Amazon RDS SQL security group

In these next steps, you modify the security group you created with your Amazon RDS instances, allowing your Windows Server instance to connect to the Amazon RDS SQL Server instances over port 1433.

To modify the security group:

  1. From the Amazon Elastic Compute Cloud (Amazon EC2) console, select Security Groups under the Network & Security navigation section.
  2. Select the new Amazon RDS SQL security group that was created with your Amazon RDS SQL instance and select Edit inbound rules.
  3. Choose Add rule and enter the following:
    1. Type – Select Custom TCP.
    2. Protocol – Select TCP.
    3. Port range – Enter 1433.
    4. Source – Select Custom.
    5. Enter the private IP of your instance with a /32. An example would be 10.0.0.10/32.
  4. Choose Save rules.

    Figure 7: Create a security group rule

  5. Repeat these steps on the security group of your other Amazon RDS SQL instance in the other Region.
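
The same inbound rule can also be added with the AWS CLI; a sketch using the example IP from step 3 (the security group ID is a placeholder):

# Sketch: allow the Windows Server instance to reach SQL Server on port 1433
$ aws ec2 authorize-security-group-ingress \
    --group-id sg-0123456789abcdef0 \
    --protocol tcp \
    --port 1433 \
    --cidr 10.0.0.10/32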

Install SQL Server Management Studio

All of the steps after the first are done on the Windows Server instance from Prerequisite 3.

To install SSMS:

  1. On your local computer, download SQL Server Management Studio (SSMS).

    Note: If desired, you can disable IE Enhanced Security Configuration and download directly to the Windows Server instance using IE or any other browser, and skip to step 3.

  2. RDP into your Windows Server instance and copy SSMS-Setup-ENU.exe to your RDP session.
  3. Run the file on your Windows Server instance.
  4. Choose Install.

    Figure 8: Install SSMS

  5. It might take a few minutes to install. When complete, choose Close.

Delegate permissions in SSMS

All of the following steps are performed on the Windows Server instance from Prerequisite 3. Log in to the Amazon RDS SQL instance using the SQL master user account. Next, create a SQL login for the Active Directory group you created previously and give it elevated permission to the Amazon RDS SQL instance.

To delegate permissions:

  1. Start SSMS.
  2. On the Connect to Server window, enter or select:
    1. Server name – Your Amazon RDS SQL Server endpoint.
    2. Authentication – Select SQL Server Authentication.
    3. Login – Enter the master user name you used when you launched your Amazon RDS SQL instance. The default is admin.
    4. Password – Enter the password for the master user name.
    5. Choose Connect.
    Figure 9: Connect to server

  3. In SSMS, choose New Query at the top of the window.

    Figure 10: New query

  4. In the query window, enter the following query. Replace <CORP\SQL-Admins> with the name of the group you created earlier.
    CREATE LOGIN [<CORP\SQL-Admins>] FROM WINDOWS WITH DEFAULT_DATABASE = [master],
       DEFAULT_LANGUAGE = [us_english];
    

    Figure 11: Query SQL database

  5. Choose Execute on the menu bar. You should see a Commands completed successfully message.

    Figure 12: Commands completed successfully

  6. Next, in Object Explorer, expand Security and choose Logins. Right-click on the group you added with the CREATE LOGIN command and select Properties.

    Figure 13: Open group properties

  7. Select Server Roles and select the processadmin and setupadmin checkboxes. Then choose OK.

    Figure 14: Configure server roles

  8. You can log off from the instance. For the next steps, you log in to the instance using the user account you created previously.
  9. Repeat these steps on the Amazon RDS SQL instance in the other Region.
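
As an alternative to assigning the server roles through the Properties dialog in step 7, you can grant them with T-SQL in the same query window; a sketch using the placeholder group name from earlier:

--Alternative to the GUI steps: grant the server roles with T-SQL
--Replace <CORP\SQL-Admins> with the name of your group
ALTER SERVER ROLE [processadmin] ADD MEMBER [<CORP\SQL-Admins>];
ALTER SERVER ROLE [setupadmin] ADD MEMBER [<CORP\SQL-Admins>];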

Connect to the Amazon RDS SQL Server with the same Active Directory user in both Regions

All of the steps are performed on the Windows Server instance from Prerequisite 3. You must log in to the instance using the account you created earlier. You then log in to the Amazon RDS SQL instance using Windows authentication with that account.

  1. Log in to the instance with the user account you created earlier.
  2. Start SSMS.
  3. On the Connect to Server window, enter or select:
    1. Server name: Your Amazon RDS SQL Server endpoint.
    2. Authentication: Select Windows Authentication.
    3. Choose Connect.
    Figure 15: Connect to server

  4. You should be logged in to SSMS. If you aren’t logged in, make sure you added your user account to the group you created earlier and try again.
  5. Repeat these steps using the other Amazon RDS SQL instance endpoint for the server name. You should be able to connect to both Amazon RDS SQL instances using the same user account.

Summary

In this post, you extended your AWS Managed Microsoft AD into a new Region. You then deployed Amazon RDS for SQL Server in multiple Regions attached to the same AWS Managed Microsoft AD directory. You then tested authentication to both Amazon RDS SQL instances using the same Active Directory user.

To learn more about using AWS Managed Microsoft AD or AD Connector, visit the AWS Directory Service documentation. For general information and pricing, see the AWS Directory Service home page. If you have comments about this blog post, submit a comment in the Comments section below. If you have implementation or troubleshooting questions, start a new thread on the AWS Directory Service forum or contact AWS Support.

Author

Jeremy Girven

Jeremy is a Solutions Architect specializing in Microsoft workloads on AWS. He has over 15 years of experience with Microsoft Active Directory and over 23 years of industry experience. One of his fun projects is using SSM to automate the Active Directory build processes in AWS. To see more, please check out the Active Directory AWS QuickStart.