Amazon Managed Streaming for Apache Kafka (Amazon MSK) is a fully managed service that enables you to build and run applications that use Apache Kafka to process streaming data. It runs open-source versions of Apache Kafka, which means existing applications, tooling, and plugins from partners and the Apache Kafka community are supported without requiring changes to application code.
Customers use Amazon MSK for real-time data sharing with their end customers, who could be internal teams or third parties. These end customers manage Kafka clients, which are deployed in AWS, in other managed cloud providers, or on premises. When migrating from self-managed Apache Kafka to Amazon MSK, or when moving clients between MSK clusters, customers want to avoid reconfiguring their Kafka clients to use a different Domain Name System (DNS) name. Therefore, it's important to have a custom domain name for the MSK cluster that the clients can connect to. A custom domain name also makes the disaster recovery (DR) process less complicated, because clients don't need to change the MSK bootstrap address when a new cluster is created or when a client connection needs to be redirected to a DR AWS Region.
MSK clusters use AWS-generated DNS names that are unique for each cluster. Each broker name contains the broker ID, the MSK cluster name, two service-generated subdomains, and the AWS Region, and ends with amazonaws.com. The following figure illustrates this naming format.
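For example, a broker DNS name typically looks like the following (a hypothetical cluster name, subdomains, and Region, following the format described above):

b-1.mycluster.abc123.c2.kafka.eu-west-1.amazonaws.com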
MSK brokers use the same DNS names in the certificates used for Transport Layer Security (TLS) connections. For TLS-encrypted authentication mechanisms, the DNS name used by clients must match the Common Name (CN) or a Subject Alternative Name (SAN) of the certificate presented by the MSK broker to avoid hostname validation errors.
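To see which names a broker certificate covers, you can inspect it with openssl. The following is a sketch that assumes a hypothetical broker hostname and the SASL/SCRAM port 9096; the TLS handshake completes before SASL authentication, so no credentials are needed to view the certificate:

# Replace the hostname with one of your broker endpoints
openssl s_client -connect b-1.mycluster.abc123.c2.kafka.eu-west-1.amazonaws.com:9096 </dev/null 2>/dev/null \
  | openssl x509 -noout -text | grep -E 'Subject:|DNS:'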
The solution discussed in this post provides a way for clients to connect to their MSK clusters using a custom domain name when using SASL/SCRAM (Simple Authentication and Security Layer/Salted Challenge Response Authentication Mechanism) authentication only.
Solution overview
Network Load Balancers (NLBs) are a popular addition to the Amazon MSK architecture, along with AWS PrivateLink as a way to expose connectivity to an MSK cluster from other virtual private clouds (VPCs). For more details, see How Goldman Sachs builds cross-account connectivity to their Amazon MSK clusters with AWS PrivateLink. In this post, we run through how to use an NLB to enable the use of a custom domain name with Amazon MSK when using SASL/SCRAM authentication.
The following diagram shows all components used by the solution.
SASL/SCRAM uses TLS to encrypt the Kafka protocol traffic between the client and Kafka broker. To use a custom domain name, the client needs to be presented with a server certificate matching that custom domain name. As of this writing, it isn’t possible to modify the certificate used by the MSK brokers, so this solution uses an NLB to sit between the client and MSK brokers.
An NLB works at the connection layer (Layer 4) and routes TCP or UDP protocol traffic. It doesn't validate the application data being sent and simply forwards the Kafka protocol traffic. The NLB supports TLS listeners: a certificate is imported into AWS Certificate Manager (ACM) and associated with the listener, enabling TLS negotiation between the client and the NLB. The NLB then performs a separate TLS negotiation between itself and the MSK brokers. This TLS negotiation to the target works exactly the same irrespective of whether the certificate is signed by a public or private certificate authority (CA).
For the client to resolve DNS queries for the custom domain, an Amazon Route 53 private hosted zone is used to host the DNS records, and is associated with the client’s VPC to enable DNS resolution from the Route 53 VPC resolver.
Kafka listeners and advertised listeners
Kafka listeners (listeners) are the lists of addresses that Kafka binds to for listening. A Kafka listener is composed of a hostname or IP, port, and protocol: <protocol>://<hostname>:<port>.
The Kafka client uses the bootstrap address to connect to one of the brokers in the cluster and issues a metadata request. The broker returns a metadata response containing the address information of each broker that the client needs to connect to. Advertised listeners (advertised.listeners) is a broker configuration option that overrides the addresses returned to clients. By default, advertised listeners are not set. After they are set, Kafka clients use the advertised listeners instead of listeners to obtain the connection information for the brokers.

When Amazon MSK multi-VPC private connectivity is enabled, AWS sets the advertised.listeners configuration option to include the Amazon MSK multi-VPC DNS alias.
MSK brokers use the listener configuration to tell clients which DNS names to use to connect to the individual brokers for each enabled authentication type. Therefore, when clients are directed to use the custom domain name, you need to set a custom advertised listener for the SASL/SCRAM authentication protocol. Advertised listeners are unique to each broker; the cluster won't start if multiple brokers have the same advertised listener address.
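To illustrate the relationship, the following is a minimal sketch of the broker-side settings involved, using hypothetical hostnames. On Amazon MSK you don't edit these properties directly; the listeners value is managed by the service, and advertised.listeners is changed dynamically with kafka-configs.sh, as shown later in this post.

# Addresses the broker binds to (managed by Amazon MSK)
listeners=CLIENT_SASL_SCRAM://b-1.mycluster.abc123.c2.kafka.eu-west-1.amazonaws.com:9096
# Addresses returned to clients in metadata responses (set per broker)
advertised.listeners=CLIENT_SASL_SCRAM://b-1.example.com:9001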
Kafka bootstrap process and setup options
A Kafka client uses the bootstrap addresses to get the metadata from the MSK cluster, which in response provides the broker hostname and port (the listeners information by default or the advertised listener if it’s configured) that the client needs to connect to for subsequent requests. Using this information, the client connects to the appropriate broker for the topic or partition that it needs to send to or fetch from. The following diagram shows the default bootstrap and topic or partition connectivity between a Kafka client and MSK broker.
You have two options when using a custom domain name with Amazon MSK.
Option 1: Only a bootstrap connection through an NLB
You can use a custom domain name only for the bootstrap connection, where the advertised listeners are not set, so the client is directed to the default AWS cluster DNS name. This option is beneficial when the Kafka client has direct network connectivity to both the NLB and the MSK broker’s Elastic Network Interface (ENI). The following diagram illustrates this setup.
No changes are required to the MSK brokers, and the Kafka client has the custom domain set as the bootstrap address. The Kafka client uses the custom domain bootstrap address to send a get metadata request to the NLB. The NLB forwards the Kafka protocol traffic received from the Kafka client to a healthy MSK broker's ENI. That broker responds with metadata in which only listeners is set, containing the default MSK cluster DNS name for each broker. The Kafka client then uses the default MSK cluster DNS name for the appropriate broker and connects to that broker's ENI.
Option 2: All connections through an NLB
Alternatively, you can use a custom domain name for both the bootstrap and the brokers, where the custom domain name for each broker is set in the advertised listeners configuration. You need to use this option when Kafka clients don't have direct network connectivity to the MSK brokers' ENIs, for example when Kafka clients connect to the MSK cluster through an NLB, AWS PrivateLink, or Amazon MSK multi-VPC endpoints. The following diagram illustrates this setup.

The advertised listeners are set to use the custom domain name, and the Kafka client has the custom domain set as the bootstrap address. The Kafka client uses the custom domain bootstrap address to send a get metadata request, which is sent to the NLB. The NLB forwards the Kafka protocol traffic received from the Kafka client to a healthy MSK broker's ENI. That broker responds with metadata in which the advertised listeners are set. The Kafka client uses the custom domain name for the appropriate broker, which directs the connection to the NLB on the port set for that broker. The NLB then forwards the Kafka protocol traffic to that broker.
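In both options, the only client-side change is the bootstrap address. A client properties file for this solution might look like the following sketch; the truststore path and the SASL/SCRAM user name and password are placeholders:

bootstrap.servers=bootstrap.example.com:9000
security.protocol=SASL_SSL
sasl.mechanism=SCRAM-SHA-512
sasl.jaas.config=org.apache.kafka.common.security.scram.ScramLoginModule required username="<user>" password="<password>";
ssl.truststore.location=/home/ec2-user/kafka.client.truststore.jks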
Network Load Balancer
The following diagram illustrates the NLB port and target configuration. A TLS listener on port 9000 is used for bootstrap connections, with all MSK brokers registered as targets on target port 9096. An additional TLS listener port represents each broker in the MSK cluster. In this post, there are three brokers in the MSK cluster, with TLS port 9001 representing broker 1 through TLS port 9003 representing broker 3.
For all TLS listeners on the NLB, a single imported certificate with the domain name bootstrap.example.com is attached to the NLB. bootstrap.example.com is used as the Common Name (CN) so that the certificate is valid for the bootstrap address, and Subject Alternative Names (SANs) are set for all broker DNS names. If the certificate is issued by a private CA, clients need to import the root and intermediate CA certificates into their trust store. If the certificate is issued by a public CA, the root and intermediate CA certificates will already be in the default trust store.
The following table shows the required NLB configuration.
| NLB Listener Type | NLB Listener Port | Certificate | NLB Target Type | NLB Targets |
|---|---|---|---|---|
| TLS | 9000 | bootstrap.example.com | TLS | All broker ENIs |
| TLS | 9001 | bootstrap.example.com | TLS | Broker 1 |
| TLS | 9002 | bootstrap.example.com | TLS | Broker 2 |
| TLS | 9003 | bootstrap.example.com | TLS | Broker 3 |
Domain Name System
For this post, a Route 53 private hosted zone is used to host the DNS records for the custom domain, in this case example.com. The private hosted zone is associated with the Amazon MSK VPC to enable DNS resolution for the client that is launched in the same VPC. If your client is in a different VPC than the MSK cluster, you need to associate the private hosted zone with that client's VPC.

The Route 53 private hosted zone is not a required part of the solution. The most crucial part is that the client can perform DNS resolution against the custom domain and get the required responses. You can instead use your organization's existing DNS, a Route 53 public hosted zone, a Route 53 inbound resolver to resolve Route 53 private hosted zones from outside of AWS, or an alternative DNS solution.
The following figure shows the DNS records used by the client to resolve to the NLB. We use bootstrap for the initial client connection, and use b-1, b-2, and b-3 to reference each broker's name.
The following table lists the DNS records required for a three-broker MSK cluster when using a Route 53 private or public hosted zone.
| Record | Record Type | Value |
|---|---|---|
| bootstrap | A | NLB Alias |
| b-1 | A | NLB Alias |
| b-2 | A | NLB Alias |
| b-3 | A | NLB Alias |
The following table lists the DNS records required for a three-broker MSK cluster when using other DNS solutions.
| Record | Record Type | Value |
|---|---|---|
| bootstrap | CNAME | NLB DNS A record (for example, name-id.elb.region.amazonaws.com) |
| b-1 | CNAME | NLB DNS A record |
| b-2 | CNAME | NLB DNS A record |
| b-3 | CNAME | NLB DNS A record |
In the following sections, we go through the steps to configure a custom domain name for your MSK cluster and clients connecting with the custom domain.
Prerequisites
To deploy the solution, you need the following prerequisites:
Launch the CloudFormation template
Complete the following steps to deploy the CloudFormation template:
- Choose Launch Stack.
- Provide the stack name as msk-custom-domain.
- For MSKClientUserName, enter the user name of the secret used for SASL/SCRAM authentication with Amazon MSK.
- For MSKClientUserPassword, enter the password of the secret used for SASL/SCRAM authentication with Amazon MSK.
The CloudFormation template will deploy the following resources:
Set up the EC2 instance
Complete the following steps to configure your EC2 instance:
- On the Amazon EC2 console, connect to the instance msk-custom-domain-KafkaClientInstance1 using Session Manager, a capability of AWS Systems Manager.
- Switch to ec2-user:
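For example, you can switch to the ec2-user account with the following command:

sudo su - ec2-user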
- Run the following commands to configure the SASL/SCRAM client properties, create Kafka access control lists (ACLs), and create a topic named customer:
. ./cloudformation_outputs.sh
aws configure set region $REGION
export BS=$(aws kafka get-bootstrap-brokers --cluster-arn ${MSKClusterArn} | jq -r '.BootstrapBrokerStringSaslScram')
export ZOOKEEPER=$(aws kafka describe-cluster --cluster-arn $MSKClusterArn | jq -r '.ClusterInfo.ZookeeperConnectString')
./configure_sasl_scram_properties_and_kafka_acl.sh
Create a certificate
For this post, we use self-signed certificates. However, it’s recommended to use either a public certificate or a certificate signed by your organization’s private key infrastructure (PKI).
If you’re using an AWS private CA for the private key infrastructure, refer to Creating a private CA for instructions to create and install a private CA.
Use the openssl command to create a self-signed certificate. Modify the following command, adding the country code, state, city, and company:
SSLCONFIG="[req]
prompt = no
distinguished_name = req_distinguished_name
x509_extensions = v3_ca
[req_distinguished_name]
C = <<Country_Code>>
ST = <<State>>
L = <<City>>
O = <<Company>>
OU =
emailAddress =
CN = bootstrap.example.com
[v3_ca]
basicConstraints = CA:FALSE
keyUsage = digitalSignature, keyEncipherment
subjectAltName = @alternate_names
[alternate_names]
DNS.1 = bootstrap.example.com
DNS.2 = b-1.example.com
DNS.3 = b-2.example.com
DNS.4 = b-3.example.com
"
openssl req -x509 -newkey rsa:2048 -days 365 -nodes \
-config <(echo "$SSLCONFIG") \
-keyout msk-custom-domain-pvt-key.pem \
-out msk-custom-domain-certificate.pem
You can check the created certificate using the following command:
openssl x509 -text -noout -in msk-custom-domain-certificate.pem
Import the certificate to ACM
To use the self-signed certificate for the solution, you need to import the certificate to ACM:
export CertificateARN=$(aws acm import-certificate --certificate file://msk-custom-domain-certificate.pem --private-key file://msk-custom-domain-pvt-key.pem | jq -r '.CertificateArn')
echo $CertificateARN
After it’s imported, you can see the certificate in ACM.
Import the certificate to the Kafka client trust store
For the client to validate the server SSL certificate during the TLS handshake, you need to import the self-signed certificate to the client’s trust store.
- Run the following command to use the JVM trust store to create your client trust store:
cp /usr/lib/jvm/jre-1.8.0-openjdk/lib/security/cacerts /home/ec2-user/kafka.client.truststore.jks
chmod 700 kafka.client.truststore.jks
- Import the self-signed certificate to the trust store by using the following command. Provide the keystore password as changeit.
/usr/lib/jvm/jre-1.8.0-openjdk/bin/keytool -import \
-trustcacerts \
-noprompt \
-alias msk-cert \
-file msk-custom-domain-certificate.pem \
-keystore kafka.client.truststore.jks
- You need to include the trust store certificate location in the config properties used by Kafka clients to enable certificate validation:
echo 'ssl.truststore.location=/home/ec2-user/kafka.client.truststore.jks' >> /home/ec2-user/kafka/config/client_sasl.properties
Set up DNS resolution for clients within the VPC
To set up DNS resolution for clients, create a private hosted zone for the domain and associate the hosted zone with the VPC where the client is deployed:
aws route53 create-hosted-zone \
--name example.com \
--caller-reference "msk-custom-domain" \
--hosted-zone-config Comment="Private Hosted Zone for MSK",PrivateZone=true \
--vpc VPCRegion=$REGION,VPCId=$MSKVPCId
export HostedZoneId=$(aws route53 list-hosted-zones-by-vpc --vpc-id $MSKVPCId --vpc-region $REGION | jq -r '.HostedZoneSummaries[0].HostedZoneId')
Create EC2 target groups
Target groups route requests to individual registered targets, such as EC2 instances, using the protocol and port number that you specify. You can register a target with multiple target groups and you can register multiple targets to one target group.
For this post, you need four target groups: one for each broker instance and one that will point to all the brokers and will be used by clients for Amazon MSK connection bootstrapping.
Each target group will receive traffic on port 9096 (SASL/SCRAM authentication) and will be associated with the Amazon MSK VPC:
aws elbv2 create-target-group \
--name b-all-bootstrap \
--protocol TLS \
--port 9096 \
--target-type ip \
--vpc-id $MSKVPCId
aws elbv2 create-target-group \
--name b-1 \
--protocol TLS \
--port 9096 \
--target-type ip \
--vpc-id $MSKVPCId
aws elbv2 create-target-group \
--name b-2 \
--protocol TLS \
--port 9096 \
--target-type ip \
--vpc-id $MSKVPCId
aws elbv2 create-target-group \
--name b-3 \
--protocol TLS \
--port 9096 \
--target-type ip \
--vpc-id $MSKVPCId
Register target groups with MSK broker IPs
You need to associate each target group with the broker instance (target) in the MSK cluster so that the traffic going through the target group can be routed to the individual broker instance.
Complete the following steps:
- Get the MSK broker hostnames:
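One way to list the broker hostnames, assuming the BS variable exported earlier still contains the SASL/SCRAM bootstrap broker string, is to split it on commas:

echo $BS | tr ',' '\n'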
This should show the brokers, which are part of the bootstrap address. The hostname of broker 1 looks like the following code:
b-1.mskcustomdomaincluster.xxxxx.yy.kafka.region.amazonaws.com
To get the hostname of the other brokers in the cluster, replace b-1 with values like b-2, b-3, and so on. For example, if you have six brokers in the cluster, you will have six broker hostnames, from b-1 to b-6.
- To get the IP address of the individual brokers, use the nslookup command:
nslookup b-1.mskcustomdomaincluster.xxxxx.yy.kafka.region.amazonaws.com
Server:         172.16.0.2
Address:        172.16.0.2#53

Non-authoritative answer:
Name:   b-1.mskcustomdomaincluster.xxxxx.yy.kafka.region.amazonaws.com
Address: 172.16.1.225
- Modify the following commands with the IP addresses of each broker to create an environment variable that will be used later:
export B1=<<b-1_IP_Address>>
export B2=<<b-2_IP_Address>>
export B3=<<b-3_IP_Address>>
Next, you need to register the broker IP addresses with the target groups. For broker b-1, you register the IP address with target group b-1.
- Provide the target group name b-1 to get the target group ARN, then register the broker IP address with the target group:
export TARGET_GROUP_B_1_ARN=$(aws elbv2 describe-target-groups --names b-1 | jq -r '.TargetGroups[0].TargetGroupArn')
aws elbv2 register-targets \
--target-group-arn ${TARGET_GROUP_B_1_ARN} \
--targets Id=$B1
- Repeat the steps of obtaining the IP address from the other broker hostnames and register each IP address with the corresponding target group for brokers b-2 and b-3:
B-2
export TARGET_GROUP_B_2_ARN=$(aws elbv2 describe-target-groups --names b-2 | jq -r '.TargetGroups[0].TargetGroupArn')
aws elbv2 register-targets \
--target-group-arn ${TARGET_GROUP_B_2_ARN} \
--targets Id=$B2
B-3
export TARGET_GROUP_B_3_ARN=$(aws elbv2 describe-target-groups --names b-3 | jq -r '.TargetGroups[0].TargetGroupArn')
aws elbv2 register-targets \
--target-group-arn ${TARGET_GROUP_B_3_ARN} \
--targets Id=$B3
- Also, you need to register all three broker IP addresses with the target group b-all-bootstrap. This target group will be used for routing the traffic for the Amazon MSK client connection bootstrap process.
export TARGET_GROUP_B_ALL_ARN=$(aws elbv2 describe-target-groups --names b-all-bootstrap | jq -r '.TargetGroups[0].TargetGroupArn')
aws elbv2 register-targets \
--target-group-arn ${TARGET_GROUP_B_ALL_ARN} \
--targets Id=$B1 Id=$B2 Id=$B3
Set up NLB listeners
Now that you have the target groups created and certificate imported, you’re ready to create the NLB and listeners.
Create the NLB with the following code:
aws elbv2 create-load-balancer \
--name msk-nlb-internal \
--scheme internal \
--type network \
--subnets $MSKVPCPrivateSubnet1 $MSKVPCPrivateSubnet2 $MSKVPCPrivateSubnet3 \
--security-groups $NLBSecurityGroupId
export NLB_ARN=$(aws elbv2 describe-load-balancers --names msk-nlb-internal | jq -r '.LoadBalancers[0].LoadBalancerArn')
Next, you configure the listeners that will be used by the clients to communicate with the MSK cluster. You need to create four listeners, one for each target group for ports 9000–9003. The following table lists the listener configurations.
| Protocol | Port | Certificate | NLB Target Type | NLB Targets |
|---|---|---|---|---|
| TLS | 9000 | bootstrap.example.com | TLS | b-all-bootstrap |
| TLS | 9001 | bootstrap.example.com | TLS | b-1 |
| TLS | 9002 | bootstrap.example.com | TLS | b-2 |
| TLS | 9003 | bootstrap.example.com | TLS | b-3 |
Use the following code for port 9000:
aws elbv2 create-listener \
--load-balancer-arn $NLB_ARN \
--protocol TLS \
--port 9000 \
--certificates CertificateArn=$CertificateARN \
--ssl-policy ELBSecurityPolicy-TLS13-1-2-2021-06 \
--default-actions Type=forward,TargetGroupArn=$TARGET_GROUP_B_ALL_ARN
Use the following code for port 9001:
aws elbv2 create-listener \
--load-balancer-arn $NLB_ARN \
--protocol TLS \
--port 9001 \
--certificates CertificateArn=$CertificateARN \
--ssl-policy ELBSecurityPolicy-TLS13-1-2-2021-06 \
--default-actions Type=forward,TargetGroupArn=$TARGET_GROUP_B_1_ARN
Use the following code for port 9002:
aws elbv2 create-listener \
--load-balancer-arn $NLB_ARN \
--protocol TLS \
--port 9002 \
--certificates CertificateArn=$CertificateARN \
--ssl-policy ELBSecurityPolicy-TLS13-1-2-2021-06 \
--default-actions Type=forward,TargetGroupArn=$TARGET_GROUP_B_2_ARN
Use the following code for port 9003:
aws elbv2 create-listener \
--load-balancer-arn $NLB_ARN \
--protocol TLS \
--port 9003 \
--certificates CertificateArn=$CertificateARN \
--ssl-policy ELBSecurityPolicy-TLS13-1-2-2021-06 \
--default-actions Type=forward,TargetGroupArn=$TARGET_GROUP_B_3_ARN
Enable cross-zone load balancing
By default, cross-zone load balancing is disabled on NLBs. When disabled, each load balancer node distributes traffic only to healthy targets in the same Availability Zone. For example, requests that come into the load balancer node in Availability Zone A will only be forwarded to a healthy target in Availability Zone A. If the only healthy target or the only registered target associated with an NLB listener is in a different Availability Zone from the load balancer node receiving the traffic, the traffic is dropped.
Because the NLB's bootstrap listener is associated with a target group that has all brokers registered across multiple Availability Zones, Route 53 responds to DNS queries for the NLB DNS name with the IP addresses of NLB ENIs in the Availability Zones that have healthy targets. Each broker-specific listener, however, has only one registered target, which is in a single Availability Zone. Without cross-zone load balancing, when the Kafka client tries to connect to a broker through that broker's listener on the NLB, there is a noticeable delay in receiving a response, because the client tries each IP address returned by Route 53 until it reaches the NLB node in the broker's Availability Zone.

Enabling cross-zone load balancing distributes the traffic across the registered targets in all Availability Zones.
aws elbv2 modify-load-balancer-attributes --load-balancer-arn $NLB_ARN --attributes Key=load_balancing.cross_zone.enabled,Value=true
Create DNS A records in a private hosted zone
Create DNS A records to route the traffic to the Network Load Balancer. The following table lists the records.
| Record | Record Type | Value |
|---|---|---|
| bootstrap | A | NLB Alias |
| b-1 | A | NLB Alias |
| b-2 | A | NLB Alias |
| b-3 | A | NLB Alias |
Alias record types will be used, so you need the NLB’s DNS name and hosted zone ID:
export NLB_DNS=$(aws elbv2 describe-load-balancers --names msk-nlb-internal | jq -r '.LoadBalancers[0].DNSName')
export NLB_ZoneId=$(aws elbv2 describe-load-balancers --names msk-nlb-internal | jq -r '.LoadBalancers[0].CanonicalHostedZoneId')
Create the bootstrap record, and then repeat this command to create the b-1, b-2, and b-3 records, modifying the Name field:
aws route53 change-resource-record-sets \
--hosted-zone-id $HostedZoneId \
--change-batch file://<(cat << EOF
{
"Comment": "Create bootstrap record",
"Changes": [{
"Action": "CREATE",
"ResourceRecordSet": {
"Name": "bootstrap.example.com",
"Type": "A",
"AliasTarget": {
"HostedZoneId": "$NLB_ZoneId",
"DNSName": "$NLB_DNS",
"EvaluateTargetHealth": true
}
}
}]
}
EOF)
Optionally, to optimize cross-zone data charges, you can set b-1, b-2, and b-3 to the IP address of the NLB ENI that is in the same Availability Zone as each broker. For example, if broker b-2 is using an IP address in subnet 172.16.2.0/24, which is in Availability Zone A, you should use the IP address of the NLB ENI in that same Availability Zone as the value for the b-2 DNS record.
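To find the NLB ENI IP addresses per Availability Zone, you could query the network interfaces that the NLB creates. The following is a sketch that assumes the NLB name msk-nlb-internal used in this post; NLB ENIs carry a description of the form ELB net/<nlb-name>/<id>:

aws ec2 describe-network-interfaces \
  --filters "Name=description,Values=ELB net/msk-nlb-internal/*" \
  --query 'NetworkInterfaces[*].{AZ:AvailabilityZone,IP:PrivateIpAddress}' \
  --output table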
The setup so far supports using the custom domain name for bootstrap connectivity only (option 1). If all Kafka traffic needs to go through the NLB (option 2), as discussed earlier, proceed to the next section to set up the advertised listeners.
Configure the advertised listener in the MSK cluster
To get the listener details for broker 1, you provide entity-type as brokers and entity-name as 1 for the broker ID:
/home/ec2-user/kafka/bin/kafka-configs.sh --bootstrap-server $BS \
--entity-type brokers \
--entity-name 1 \
--command-config ~/kafka/config/client_sasl.properties \
--all \
--describe | grep 'listeners=CLIENT_SASL_SCRAM'
You will get an output like the following:
listeners=CLIENT_SASL_SCRAM://b-1.mskcustomdomaincluster.XXXX.yy.kafka.region.amazonaws.com:9096,CLIENT_SECURE://b-1.mskcustomdomaincluster.XXXX.yy.kafka.region.amazonaws.com:9094,REPLICATION://b-1.mskcustomdomaincluster.XXXX.yy.kafka.region.amazonaws.com:9093,REPLICATION_SECURE://b-1.mskcustomdomaincluster.XXXX.yy.kafka.region.amazonaws.com:9095 sensitive=false synonyms={STATIC_BROKER_CONFIG:listeners=CLIENT_SASL_SCRAM://b-1.mskcustomdomaincluster.XXXX.yy.kafka.region.amazonaws.com:9096,CLIENT_SECURE://b-1.mskcustomdomaincluster.XXXX.yy.kafka.region.amazonaws.com:9094,REPLICATION://b-1.mskcustomdomaincluster.XXXX.yy.kafka.region.amazonaws.com:9093,REPLICATION_SECURE://b-1.mskcustomdomaincluster.XXXX.yy.kafka.region.amazonaws.com:9095}
Going forward, clients will connect through the custom domain name. Therefore, you need to configure the advertised listeners with the custom domain hostname and port. For this, copy the listener details and change the CLIENT_SASL_SCRAM listener to b-1.example.com:9001.
While you’re configuring the advertised listener, you also need to preserve the information about other listener types in the advertised listener because inter-broker communications also use the addresses in the advertised listener.
Based on our configuration, the advertised listener for broker 1 will look like the following code, with everything after sensitive=false removed:
CLIENT_SASL_SCRAM://b-1.example.com:9001,REPLICATION://b-1-internal.mskcustomdomaincluster.xxxxxx.yy.kafka.region.amazonaws.com:9093,REPLICATION_SECURE://b-1-internal.mskcustomdomaincluster.xxxxxx.yy.kafka.region.amazonaws.com:9095
Modify the following command as follows:
- <<BROKER_NUMBER>> – Set to the broker ID being changed (for example, 1 for broker 1)
- <<PORT_NUMBER>> – Set to the port number corresponding to broker ID (for example, 9001 for broker 1)
- <<REPLICATION_DNS_NAME>> – Set to the DNS name for REPLICATION
- <<REPLICATION_SECURE_DNS_NAME>> – Set to the DNS name for REPLICATION_SECURE
/home/ec2-user/kafka/bin/kafka-configs.sh --alter \
--bootstrap-server $BS \
--entity-type brokers \
--entity-name <<BROKER_NUMBER>> \
--command-config ~/kafka/config/client_sasl.properties \
--add-config advertised.listeners=[CLIENT_SASL_SCRAM://b-<<BROKER_NUMBER>>.example.com:<<PORT_NUMBER>>,REPLICATION://<<REPLICATION_DNS_NAME>>:9093,REPLICATION_SECURE://<<REPLICATION_SECURE_DNS_NAME>>:9095]
The command should look something like the following example:
/home/ec2-user/kafka/bin/kafka-configs.sh --alter \
--bootstrap-server $BS \
--entity-type brokers \
--entity-name 1 \
--command-config ~/kafka/config/client_sasl.properties \
--add-config advertised.listeners=[CLIENT_SASL_SCRAM://b-1.example.com:9001,REPLICATION://b-1-internal.mskcustomdomaincluster.xxxxxx.yy.kafka.region.amazonaws.com:9093,REPLICATION_SECURE://b-1-internal.mskcustomdomaincluster.xxxxxx.yy.kafka.region.amazonaws.com:9095]
Run the command to add the advertised listener for broker 1.
You need to get the listener details for the other brokers and configure the advertised.listeners for each.
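For example, the following sketch prints the current CLIENT_SASL_SCRAM listener details for brokers 2 and 3; you can then build the corresponding --alter commands using ports 9002 and 9003 and each broker's REPLICATION and REPLICATION_SECURE DNS names:

for ID in 2 3; do
  /home/ec2-user/kafka/bin/kafka-configs.sh --bootstrap-server $BS \
    --entity-type brokers \
    --entity-name $ID \
    --command-config ~/kafka/config/client_sasl.properties \
    --all \
    --describe | grep 'listeners=CLIENT_SASL_SCRAM'
done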
Test the setup
Set the bootstrap address to the custom domain. This is the A record created in the private hosted zone.
export BS=bootstrap.example.com:9000
List the MSK topics using the custom domain bootstrap address:
/home/ec2-user/kafka/bin/kafka-topics.sh --list \
--bootstrap-server $BS \
--command-config=/home/ec2-user/kafka/config/client_sasl.properties
You should see the topic customer.
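To further verify end-to-end connectivity through the custom domain, you could also produce and consume a test message with the standard Kafka console tools. The following is a sketch using the same SASL/SCRAM client properties file; run the producer and consumer in separate sessions:

# Producer (type a few messages, then press Ctrl+C)
/home/ec2-user/kafka/bin/kafka-console-producer.sh \
  --bootstrap-server $BS \
  --topic customer \
  --producer.config /home/ec2-user/kafka/config/client_sasl.properties

# Consumer (should print the messages produced above)
/home/ec2-user/kafka/bin/kafka-console-consumer.sh \
  --bootstrap-server $BS \
  --topic customer \
  --from-beginning \
  --consumer.config /home/ec2-user/kafka/config/client_sasl.properties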
Clean up
To stop incurring costs, it’s recommended to manually delete the private hosted zone, NLB, target groups, and imported certificate in ACM. Also, delete the CloudFormation stack to remove any resources provisioned by CloudFormation.
Use the following code to manually delete the aforementioned resources:
aws route53 change-resource-record-sets \
--hosted-zone-id $HostedZoneId \
--change-batch file://<(cat << EOF
{
"Changes": [
{
"Action": "DELETE",
"ResourceRecordSet": {
"Name": "bootstrap.example.com",
"Type": "A",
"AliasTarget": {
"HostedZoneId": "$NLB_ZoneId",
"DNSName": "$NLB_DNS",
"EvaluateTargetHealth": true
}
}
}
]
}
EOF
)
aws route53 change-resource-record-sets \
--hosted-zone-id $HostedZoneId \
--change-batch file://<(cat << EOF
{
"Changes": [
{
"Action": "DELETE",
"ResourceRecordSet": {
"Name": "b-1.example.com",
"Type": "A",
"AliasTarget": {
"HostedZoneId": "$NLB_ZoneId",
"DNSName": "$NLB_DNS",
"EvaluateTargetHealth": true
}
}
}
]
}
EOF
)
aws route53 change-resource-record-sets \
--hosted-zone-id $HostedZoneId \
--change-batch file://<(cat << EOF
{
"Changes": [
{
"Action": "DELETE",
"ResourceRecordSet": {
"Name": "b-2.example.com",
"Type": "A",
"AliasTarget": {
"HostedZoneId": "$NLB_ZoneId",
"DNSName": "$NLB_DNS",
"EvaluateTargetHealth": true
}
}
}
]
}
EOF
)
aws route53 change-resource-record-sets \
--hosted-zone-id $HostedZoneId \
--change-batch file://<(cat << EOF
{
"Changes": [
{
"Action": "DELETE",
"ResourceRecordSet": {
"Name": "b-3.example.com",
"Type": "A",
"AliasTarget": {
"HostedZoneId": "$NLB_ZoneId",
"DNSName": "$NLB_DNS",
"EvaluateTargetHealth": true
}
}
}
]
}
EOF
)
aws route53 delete-hosted-zone --id $HostedZoneId
aws elbv2 delete-load-balancer --load-balancer-arn $NLB_ARN
aws elbv2 delete-target-group --target-group-arn $TARGET_GROUP_B_ALL_ARN
aws elbv2 delete-target-group --target-group-arn $TARGET_GROUP_B_1_ARN
aws elbv2 delete-target-group --target-group-arn $TARGET_GROUP_B_2_ARN
aws elbv2 delete-target-group --target-group-arn $TARGET_GROUP_B_3_ARN
You need to wait up to 5 minutes for the NLB deletion to complete before the certificate can be deleted.
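If you prefer to script the wait rather than pausing manually, the AWS CLI provides a waiter for this (a sketch, assuming $NLB_ARN is still exported):

aws elbv2 wait load-balancers-deleted --load-balancer-arns $NLB_ARN

After the NLB is deleted, delete the certificate: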
aws acm delete-certificate --certificate-arn $CertificateARN
Now you can delete the CloudFormation stack.
Summary
This post explains how you can use an NLB, Route 53, and the advertised listener configuration option in Amazon MSK to support custom domain names with MSK clusters when using SASL/SCRAM authentication. You can use this solution to keep your existing Kafka bootstrap DNS name and reduce or remove the need to change client applications because of a migration, recovery process, or multi-cluster high availability. You can also use this solution to have the MSK bootstrap and broker names under your custom domain, enabling you to bring the DNS name in line with your naming convention (for example, msk.prod.example.com).
Try the solution out for yourself, and leave your questions and feedback in the comments section.
About the Authors
Subham Rakshit is a Senior Streaming Solutions Architect for Analytics at AWS based in the UK. He works with customers to design and build streaming architectures so they can get value from analyzing their streaming data. His two little daughters keep him occupied most of the time outside work, and he loves solving jigsaw puzzles with them. Connect with him on LinkedIn.
Mark Taylor is a Senior Technical Account Manager at Amazon Web Services, working with enterprise customers to implement best practices, optimize AWS usage, and address business challenges. Prior to joining AWS, Mark spent over 16 years in networking roles across industries, including healthcare, government, education, and payments. Mark lives in Folkestone, England, with his wife and two dogs. Outside of work, he enjoys watching and playing football, watching movies, playing board games, and traveling.