This post is courtesy of Jeff Levine, Solutions Architect for Amazon Web Services.
Amazon Linux 2 is the next generation of Amazon Linux, a Linux server operating system from Amazon Web Services (AWS). Amazon Linux 2 offers a high-performance Linux environment suitable for organizations of all sizes. It supports applications ranging from small websites to enterprise-class, mission-critical platforms.
Amazon Linux 2 includes support for the LAMP (Linux/Apache/MariaDB/PHP) stack, one of the most popular platforms for deploying websites. To secure data in transit to such websites and prevent eavesdropping, organizations commonly use Secure Sockets Layer/Transport Layer Security (SSL/TLS) services, which rely on certificates to provide encryption. The LAMP stack provided by Amazon Linux 2 includes a self-signed SSL/TLS certificate. Such certificates may be fine for internal usage but are not acceptable when attestation by a certificate authority is required.
In this post, I discuss how to extend the capabilities of Amazon Linux 2 by installing Let’s Encrypt, a certificate authority provided by the Internet Security Research Group. Let’s Encrypt offers basic SSL/TLS certificates for DNS hosts at no charge that you can use to add encryption-in-transit to a single web server. For commercial or multi-server configurations, you should consider AWS Certificate Manager and Elastic Load Balancing.
Let’s Encrypt also requires the certbot package, which you install from EPEL, the Extra Packages for Enterprise Linux collection. Although EPEL is not included with Amazon Linux 2, I show how you can install it from the Fedora Project.
Walkthrough
At a high level, you perform the following tasks for this walkthrough:
Provision a VPC, Amazon Linux 2 instance, and LAMP stack.
Install and enable the EPEL repository.
Install and configure Let’s Encrypt.
Validate the installation.
Clean up.
Prerequisites and costs
To follow along with this walkthrough, you need the following:
Accept all other default values, including those for storage.
Create a new security group and accept the default rule that allows TCP port 22 (SSH) from everywhere (0.0.0.0/0 in IPv4). For the purposes of this walkthrough, permitting access from all IP addresses is reasonable. In a production environment, you should restrict access to a narrower range of addresses.
Allocate and associate an Elastic IP address to the server when it enters the running state.
Respond “Y” to all requests for approval to install the software.
Step 3: Install and configure Let’s Encrypt
If you are no longer connected to the Amazon Linux 2 instance, connect to it at the Elastic IP address that you just created.
Install certbot, the Let’s Encrypt client that you use to obtain an SSL/TLS certificate and install it into Apache.
sudo yum install python2-certbot-apache.noarch
Respond “Y” to all requests for approval to install the software. If you see a message appear about SELinux, you can safely ignore it. This is a known issue with the latest version of certbot.
Create a DNS “A record” that maps a host name to the Elastic IP address. For this post, assume that the name of the host is lamp.example.com. If you are hosting your DNS in Amazon Route 53, do this by creating the appropriate record set.
After the “A record” has propagated, browse to lamp.example.com. The Apache test page should appear. If the page does not appear, use a tool such as nslookup on your workstation to confirm that the DNS record has been properly configured.
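For example, a quick check from your workstation might look like the following (illustrative output; your Elastic IP address will differ):
$ nslookup lamp.example.com
Name:    lamp.example.com
Address: 203.0.113.10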
You are now ready to install Let’s Encrypt. Let’s Encrypt does the following:
Confirms that you have control over the DNS domain being used, by having you create a DNS TXT record using the value that it provides.
Obtains an SSL/TLS certificate.
Modifies the Apache-related scripts to use the SSL/TLS certificate and redirects users browsing the site in HTTP mode to HTTPS mode.
Use the following command to run certbot and request the certificate:
sudo certbot -i apache -a manual \
--preferred-challenges dns -d lamp.example.com
The options have the following meanings:
-i apache: Use the Apache installer.
-a manual: Authenticate domain ownership manually.
--preferred-challenges dns: Use DNS TXT records for the authentication challenge.
-d lamp.example.com: Specify the domain for the SSL/TLS certificate.
You are prompted for the following information:
E-mail address for renewals? Enter an email address for certificate renewals.
Accept the terms of service? Respond as appropriate.
Send your e-mail address to the EFF? Respond as appropriate.
Log your current IP address? Respond as appropriate.
You are prompted to deploy a DNS TXT record with the name “_acme-challenge.lamp.example.com” with the supplied value, as shown below.
After you enter the record, wait until the TXT record propagates. To confirm the deployment, look up the TXT record with the nslookup command in a separate command window; remember to run the set ty=txt command before entering the TXT record name (see the sample session after this paragraph). You are then prompted to select a virtual host. There is only one, so choose 1. The final prompt asks whether to redirect HTTP traffic to HTTPS. To perform the redirection, choose 2. That completes the configuration of Let’s Encrypt.
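Here is a minimal sketch of that confirmation session (the TXT value shown is a placeholder; use the value that certbot supplies):
$ nslookup
> set ty=txt
> _acme-challenge.lamp.example.com
_acme-challenge.lamp.example.com    text = "gfj9Xq...Rg85nM"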
Browse to http://lamp.example.com. You are redirected to the SSL/TLS page https://lamp.example.com.
To look at the encryption information, use the appropriate actions within your browser. For example, in Firefox, you can open the padlock and traverse the menus. In the encryption technical details, you can see from the “Connection Encrypted” line that traffic to the website is now encrypted using TLS 1.2.
Security note: As of the time of publication, this website also supports TLS 1.0. I recommend that you disable this protocol because of some known vulnerabilities associated with it. To do this:
Edit the file /etc/letsencrypt/options-ssl-apache.conf.
Look for the line beginning with SSLProtocol and change it to the following:
SSLProtocol all -SSLv2 -SSLv3 -TLSv1
Save the file. After you make changes to this file, Let’s Encrypt no longer automatically updates it. Periodically check your log files for recommended updates to this file.
Restart the httpd server with the following command:
sudo service httpd restart
Step 5: Cleanup
Use the following steps to avoid incurring any further costs.
Terminate the Amazon Linux 2 instance that you created.
Release the Elastic IP address that you allocated.
Revert any DNS changes that you made, including the A and TXT records.
Conclusion
Amazon Linux 2 is an excellent option for hosting websites through the LAMP stack provided by the Amazon-Linux-Extras feature. You can then enhance the security of the Apache web server by installing EPEL and Let’s Encrypt. Let’s Encrypt provisions an SSL/TLS certificate, optionally installs it for you on the Apache server, and enables data-in-transit encryption. You can get started with Amazon Linux 2 in just a few clicks.
Many enterprises use Microsoft Active Directory to manage users, groups, and computers in a network. A question we hear frequently is: How can Active Directory users access big data workloads running on Amazon EMR with the same single sign-on (SSO) experience they have when accessing other resources in the Active Directory network?
This post walks you through the process of using AWS CloudFormation to set up a cross-realm trust and extend authentication from an Active Directory network into an Amazon EMR cluster with Kerberos enabled. By establishing a cross-realm trust, Active Directory users can use their Active Directory credentials to access an Amazon EMR cluster and run jobs as themselves.
Walkthrough overview
In this example, you build a solution that allows Active Directory users to seamlessly access Amazon EMR clusters and run big data jobs. Here’s what you need before setting up this solution:
A possible limit increase for your account (Note: Usually a limit increase will not be necessary. See the AWS Service Limits documentation if you encounter a limit error while building the solution.)
To make it easier for you to get started, I created AWS CloudFormation templates that automatically configure and deploy the solution for you. The following steps and resources are involved in setting up the solution:
Note: If you want to manually create and configure the components for this solution without using AWS CloudFormation, refer to the Amazon EMR cross-realm documentation. IMPORTANT: The AWS CloudFormation templates used in this post are designed to work only in the us-east-1 (N. Virginia) Region. They are not intended for production use without modification.
Single-step solution deployment
If you don’t want to set up each component individually, you can use the single-step AWS CloudFormation template. The single-step template is a master template that uses nested stacks (additional templates) to launch and configure all the resources for the solution in one go.
To deploy the single-step template into your account, choose Launch Stack:
This takes you to the Create stack wizard in the AWS CloudFormation console. The template is launched in the US East (N. Virginia) Region by default. Do not change to a different Region because the template is designed to work only in us-east-1 (N. Virginia).
On the Select Template page, keep the default URL for the AWS CloudFormation template, and then choose Next.
On the Specify Details page, review the parameters for the template. Provide values for the parameters that require input (for more information, see the parameters table that follows).
The following parameters are available in this template.
Domain Controller name (default: DC1): NetBIOS (hostname) name of the Active Directory server. This name can be up to 15 characters long.
Active Directory domain (default: example.com): Fully qualified domain name (FQDN) of the forest root domain (for example, example.com).
Domain NetBIOS name (default: EXAMPLE): NetBIOS name of the domain for users of earlier versions of Windows. This name can be up to 15 characters long.
Domain admin user (default: CrossRealmAdmin): User name for the account that is added as domain administrator. This account is separate from the default administrator account.
Domain admin password (requires input): Password for the domain admin user. Must be at least eight characters including letters, numbers, and symbols.
Key pair name (requires input): Name of an existing key pair, which enables you to connect securely to your instance after it launches.
Instance type (default: m4.xlarge): Instance type for the domain controller and the Amazon EMR cluster.
Allowed IP address (default: 10.0.0.0/16): The client IP address range that can reach your cluster. Specify an IP address range in CIDR notation (for example, 203.0.113.5/32). By default, only the VPC CIDR (10.0.0.0/16) can reach the cluster. Be sure to add your client IP range so that you can connect to the cluster using SSH.
EMR Kerberos realm (default: EC2.INTERNAL): Cluster’s Kerberos realm name. By default, the realm name is derived from the cluster’s VPC domain name in uppercase letters (for example, EC2.INTERNAL is the default VPC domain name in the us-east-1 Region).
Trusted AD domain (default: EXAMPLE.COM): The Active Directory (AD) domain that you want to trust. This is the same as the “Active Directory domain.” However, it must use all uppercase letters (for example, EXAMPLE.COM).
Cross-realm trust password (requires input): Password that you want to use for your cross-realm trust.
Instance count (default: 2): The number of instances (core nodes) for the cluster.
EMR applications (default: Hadoop, Spark, Ganglia, Hive): Comma-separated list of applications to install on the cluster.
After you specify the template details, choose Next. On the Options page, choose Next again. On the Review page, select the I acknowledge that AWS CloudFormation might create IAM resources with custom names check box, and then choose Create.
It takes approximately 45 minutes for the deployment to complete. When the stack launch is complete, it will return outputs with information about the resources that were created. Note the outputs and skip to the Managing and testing the solution section. You can view the stack outputs on the AWS Management Console or by using the following AWS CLI command:
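For example, assuming your stack is named emr-crossrealm (a placeholder), the CLI call looks like this:
$ aws cloudformation describe-stacks --stack-name emr-crossrealm --query "Stacks[0].Outputs"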
This section describes how to use AWS CloudFormation templates to perform each step separately in the solution.
Create and configure an Amazon VPC
In order for you to establish a cross-realm trust between an Amazon EMR Kerberos realm and an Active Directory domain, your Amazon VPC must meet the following requirements:
The subnet used for the Amazon EMR cluster must have a CIDR block of fewer than nine digits (for example, 10.0.1.0/24).
Both DNS resolution and DNS hostnames must be enabled (set to “yes”).
The Active Directory domain controller must be the DNS server for instances in the Amazon VPC (this is configured in the next step).
To use the AWS CloudFormation template to create and configure an Amazon VPC with the prerequisites listed previously, choose Launch Stack:
Note: If you want to create the VPC manually (without using AWS CloudFormation), see Set Up the VPC and Subnet in the Amazon EMR documentation.
Launching this stack creates the following AWS resources:
Amazon VPC with CIDR block 10.0.0.0/16 (Name: CrossRealmVPC)
Internet Gateway (Name: CrossRealmGateway)
Public subnet with CIDR block 10.0.1.0/24 (Name: CrossRealmSubnet)
Security group allowing inbound access from the VPC’s subnets (Name tag: CrossRealmSecurityGroup)
When the stack launch is complete, it should return outputs similar to the following.
SubnetID (value example: subnet-xxxxxxxx): The subnet for the Active Directory domain controller and the EMR cluster.
SecurityGroup (value example: sg-xxxxxxxx): The security group for the Active Directory domain controller.
VPCID (value example: vpc-xxxxxxxx): The Active Directory domain controller and EMR cluster will be launched on this VPC.
Note the outputs because they are used in the next step. You can view the stack outputs on the AWS Management Console or by using the same describe-stacks AWS CLI command shown earlier.
Launch and configure an Active Directory domain controller
In this step, you use an AWS CloudFormation template to automatically launch and configure a new Active Directory domain controller and cross-realm trust.
Note: There are various ways to install and configure an Active Directory domain controller. For details on manually launching and installing a domain controller without AWS CloudFormation, see Step 2: Launch and Install the AD Domain Controller in the Amazon EMR documentation.
In addition to launching and configuring an Active Directory domain controller and cross-realm trust, this AWS CloudFormation template also sets the domain controller as the DNS server (name server) for your Amazon VPC. In other words, the template creates a new DHCP option set for the VPC where it’s deployed and sets the private IP address of the domain controller as the name server for that option set.
IMPORTANT: You should not use this template on a production VPC with existing resources like Amazon EC2 instances. When you launch this stack, make sure that you use the new environment and resources (Amazon VPC, subnet, and security group) that were created in the Create and configure an Amazon VPC step.
To launch this stack, choose Launch Stack:
Review the following template parameters and provide values for those that require input.
VPC ID (requires input): Launch the domain controller on this VPC (for example, use the VPC created in the Create and configure an Amazon VPC step).
Subnet ID (requires input): Subnet used for the domain controller (for example, use the subnet created in the Create and configure an Amazon VPC step).
Security group ID (requires input): Security group (SG) for the domain controller (for example, use the SG created in the Create and configure an Amazon VPC step).
Domain Controller name (default: DC1): NetBIOS name of the Active Directory server (up to 15 characters).
Active Directory domain (default: example.com): Fully qualified domain name (FQDN) of the forest root domain (for example, example.com).
Domain NetBIOS name (default: EXAMPLE): NetBIOS name of the domain for users of earlier versions of Windows. This name can be up to 15 characters long.
Domain admin user (default: CrossRealmAdmin): User name for the account that is added as domain administrator. This account is separate from the default administrator account.
Domain admin password (requires input): Password for the domain admin user. Must be at least eight characters including letters, numbers, and symbols.
Key pair name (requires input): Name of an existing EC2 key pair to enable access to the domain controller instance.
Instance type (default: m4.xlarge): Instance type for the domain controller.
EMR Kerberos realm (default: EC2.INTERNAL): Cluster’s Kerberos realm name. By default, the realm name is derived from the cluster’s VPC domain name in uppercase letters (for example, EC2.INTERNAL is the default VPC domain name in the us-east-1 Region).
Cross-realm trust password (requires input): Password that you want to use for your cross-realm trust.
It takes 25–30 minutes for this stack to be created. When it’s complete, note the stack’s outputs, and then move to the next step: Launch an EMR cluster with Kerberos enabled.
Create a security configuration and launch an Amazon EMR cluster with Kerberos enabled
To launch a kerberized Amazon EMR cluster, you first need to create a security configuration containing the cross-realm trust configuration. You then specify cluster-specific Kerberos attributes when launching the cluster.
In this step, you use AWS CloudFormation to launch and configure a kerberized Amazon EMR cluster with a cross-realm trust. If you want to manually launch and configure a cluster with Kerberos enabled, see Step 6: Launch a Kerberized EMR Cluster in the Amazon EMR documentation.
Note: At the time of this writing, AWS CloudFormation does not yet support launching Amazon EMR clusters with Kerberos authentication enabled. To overcome this limitation, I created a template that uses an AWS Lambda-backed custom resource to launch and configure the Amazon EMR cluster with Kerberos enabled. If you use this template, there’s nothing else that you need to do. Just keep in mind that the template creates and invokes an AWS Lambda function (custom resource) to launch the cluster.
To create a cross-realm trust security configuration and launch a kerberized Amazon EMR cluster using AWS CloudFormation, choose Launch Stack:
The following list describes the template parameters for deploying a kerberized Amazon EMR cluster and configuring a cross-realm trust.
Active Directory domain (default: example.com): The Active Directory domain that you want to establish the cross-realm trust with.
Domain admin user (joiner user) (default: CrossRealmAdmin): The user name of an Active Directory domain user with privileges to join domains/computers to the Active Directory domain (joiner user).
Domain admin password (requires input): Password of the joiner user.
Cross-realm trust password (requires input): Password of your cross-realm trust.
EC2 key pair name (requires input): Name of an existing key pair, which enables you to connect securely to your cluster after it launches.
Subnet ID (requires input): Subnet that you want to use for your Amazon EMR cluster (for example, choose the subnet created in the Create and configure an Amazon VPC step).
Security group ID (requires input): Security group that you want to use for your Amazon EMR cluster (for example, choose the security group created in the Create and configure an Amazon VPC step).
Instance type (default: m4.xlarge): The instance type that you want to use for the cluster nodes.
Instance count (default: 2): The number of instances (core nodes) for the cluster.
Allowed IP address (default: 10.0.0.0/16): The client IP address range that can reach your cluster. Specify an IP address range in CIDR notation (for example, 203.0.113.5/32). By default, only the VPC CIDR (10.0.0.0/16) can reach the cluster. Be sure to add your client IP range so that you can connect to the cluster using SSH.
EMR applications (default: Hadoop, Spark, Ganglia, Hive): Comma-separated list of the applications that you want installed on the cluster.
EMR Kerberos realm (default: EC2.INTERNAL): Cluster’s Kerberos realm name. By default, the realm name is derived from the cluster’s VPC domain name in uppercase letters (for example, EC2.INTERNAL is the default VPC domain name in the us-east-1 Region).
Trusted AD domain (default: EXAMPLE.COM): The Active Directory domain that you want to trust. This name is the same as the “AD domain name.” However, it must use all uppercase letters (for example, EXAMPLE.COM).
It takes 10–15 minutes for this stack to be created. When it’s complete, note the stack’s outputs, and then move to the next section: Managing and testing the solution.
Managing and testing the solution
Now that you’ve configured and built the solution, it’s time to test it by connecting to a cluster using Active Directory credentials.
SSH to a cluster using Active Directory credentials (single sign-on)
After you launch a kerberized Amazon EMR cluster, if you used the AWS CloudFormation templates and added your client IP range to the Allowed IP address parameter, you should be able to connect to the cluster using an SSH client and your Active Directory user credentials. If you have trouble connecting to the cluster using SSH, check the cluster’s security group to make sure that it allows inbound SSH connection (TCP port 22) from your client’s IP address (source).
The following steps assume that you’re using a client such as OpenSSH. If you’re using a different SSH application (for example, PuTTY), consult the application-specific documentation.
Note: Because the cluster was launched with a cross-realm trust configuration, you don’t need to use a private key (.pem file) when you connect to it as a domain user using SSH.
To connect to your Amazon EMR cluster as an Active Directory user using SSH, run the following command. Replace ad_user with the domain admin user that you created while setting up the domain controller and replace master_node_URL with the cluster’s URL (see the stack’s outputs to find this information):
$ ssh -l <ad_user> <master_node_URL>
If your SSH client is configured to use a key as the preferred authentication method, the login might fail. If that’s the case, you can add the following options to your SSH command to force the SSH connection to use password authentication:
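A minimal sketch using standard OpenSSH options:
$ ssh -o PreferredAuthentications=password -o PubkeyAuthentication=no -l <ad_user> <master_node_URL>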
After a domain user connects to the cluster using SSH, if this is the first time that the user is connecting, a local home directory is created for that user. In addition, if you used the create-hdfs-home-ba.sh bootstrap action when launching the cluster (done by default if you used the AWS CloudFormation template to launch a kerberized cluster), an HDFS user home directory is also automatically created.
Note: If you manually launched the cluster and did not use the create-hdfs-home-ba.sh bootstrap action, then you’ll need to manually create HDFS user home directories for your users.
When you connect to the cluster using SSH for the first time (as a domain user), you should see the following messages if the HDFS home directory for your domain user was successfully created:
Running jobs on a kerberized Amazon EMR cluster
To run a job on a kerberized cluster, the user submitting the job must first be authenticated. If you followed the previous section to connect to your cluster as an Active Directory user using SSH, the user should be authenticated automatically.
If running the klist command returns a “No credentials cache found” message, it means that the user is not authenticated (the user doesn’t have a Kerberos ticket). You can re-authenticate a user at any time by running the following command (be sure to use all uppercase letters for the Active Directory domain):
$ kinit <username>@<AD_DOMAIN>
When the user is authenticated, they can submit jobs just like they would on a non-kerberized cluster.
Auditing jobs
Another advantage that Kerberos can provide is that you can easily tell which user ran a particular job. For example, connect (using SSH) to a kerberized cluster with an Active Directory user, and submit the SparkPi sample application:
$ spark-example SparkPi
After running the SparkPi application, go to the Amazon EMR console and choose your cluster. Then choose the Application history tab. There you can see information about the application, including the user that submitted the job:
Common issues
Although it would be hard to cover every possible Kerberos issue, this section covers some of the more common issues that might occur and ways to fix them.
Issue 1: You can successfully connect and get authenticated on a cluster. However, whenever you try running a job, it fails with an error similar to the following:
Solution: Make sure that an HDFS home directory for the user was created and that it has the right permissions.
Issue 2: You can successfully connect to the cluster, but you can’t run any Hadoop or HDFS commands.
Solution: Use the klist command to confirm whether the user is authenticated and has a valid Kerberos ticket. Use the kinit command to re-authenticate a user.
Issue 3: You can’t connect (using SSH) to the cluster using Active Directory user credentials, but you can manually authenticate the user with kinit.
Solution: Make sure that the Active Directory domain controller is the DNS server (name server) for the cluster nodes.
Cleaning up
After completing and testing this solution, remember to clean up the resources. If you used the AWS CloudFormation templates to create the resources, then use the AWS CloudFormation console or AWS CLI/SDK to delete the stacks. Deleting a stack also deletes the resources created by that stack.
If one of your stacks does not delete, make sure that there are no dependencies on the resources created by that stack. For example, if you deployed an Amazon VPC using AWS CloudFormation and then deployed a domain controller into that VPC using a different AWS CloudFormation stack, you must first delete the domain controller stack before the VPC stack can be deleted.
Summary
The ability to authenticate users and services with Kerberos not only allows you to secure your big data applications, but it also enables you to easily integrate Amazon EMR clusters with an Active Directory environment. This post showed how you can use Kerberos on Amazon EMR to create a single sign-on solution where Active Directory domain users can seamlessly access Amazon EMR clusters and run big data applications. We also showed how you can use AWS CloudFormation to automate the deployment of this solution.
Lasers! Cookies! Raspberry Pi! We’re buzzing with excitement about sharing our latest YouTube video with you, which comes directly from the kitchen of maker Estefannie Explains It All!
When Estefannie visited Pi Towers earlier this year, we introduced her to the Raspberry Pi Digital Curriculum and the free resources on our website. We’d already chatted to her via email about the idea of creating a collab video for the Raspberry Pi channel. Once she’d met members of the Raspberry Pi Foundation team and listened to them wax lyrical about the work we do here, she was even more keen to collaborate with us.
Ahhhh!!! I still can’t believe I got to hang out and make stuff at the @Raspberry_Pi towers!! Thank you thank you!!
Estefannie returned to the US filled with inspiration for a video for our channel, and we’re so pleased with how awesome her final result is. The video is a super addition to our Raspberry Pi YouTube channel, it shows what our resources can help you achieve, and it’s great fun. You might also have noticed that the project fits in perfectly with this season’s Pioneers challenge. A win all around!
So yeah, we’re really chuffed about this video, and we hope you all like it too!
Estefannie’s Laser Cookies guide
For those of you wanting to try your hand at building your own Cookie Jar Laser Surveillance Security System, Estefannie has provided a complete guide to talk you through it. Here she goes:
First off, you’ll need:
10 lasers
10 photoresistors
10 capacitors
1 Raspberry Pi Zero W
1 buzzer
1 Raspberry Pi Camera Module
12 ft PVC pipes + 4 corners
1 acrylic panel
1 battery pack
8 zip ties
tons of cookies
I used the Raspberry Pi Foundation’s Laser trip wire and the Tweeting Babbage resources to get one laser working and to set up the camera and Twitter API. This took me less than an hour, and it was easy, breezy, beautiful, Raspberry Pi.
I soldered ten lasers in parallel and connected ten photoresistors to their own GPIO pins. I didn’t wire them up in series because of sensitivity reasons and to make debugging easier.
Building the frame took a few tries: I actually started with a wood frame, then tried a clear case, and finally realized the best and cleanest solution would be pipes. All the wires go inside the pipes and come out through a small window at the top to wire up to the Zero W.
Using pipes also made the build cheaper, since they were about $3 for 12 ft. Wiring inside the pipes was tricky, and to finish the circuit, I soldered some of the wires after they were already in the pipes.
I tried glueing the lasers to the frame, but the lasers melted the glue and became decalibrated. Next I tried tape, and then I found picture mounting putty. The putty worked perfectly — it was easy to mold a putty base for the lasers and to calibrate and re-calibrate them if needed. Moreover, the lasers stayed in place no matter how hot they got.
Although the lasers were not very strong, I still strained my eyes after long hours of calibrating — hence the sunglasses! Working indoors with lasers, sunglasses, and code was weird. But now I can say I’ve done that…in my kitchen.
Using all the knowledge I have shared, this project should take a couple of hours. The code you need lives on my GitHub!
“The cookie recipe is my grandma’s, and I am not allowed to share it.”
Estefannie on YouTube
Estefannie made this video for us as a gift, and we’re so grateful for the time and effort she put into it! If you enjoyed it and would like to also show your gratitude, subscribe to her channel on YouTube and follow her on Instagram and Twitter. And if you make something similar, or build anything with our free resources, make sure to share it with us in the comments below or via our social media channels.
Security updates have been issued by CentOS (freeradius), Debian (memcached), Fedora (irssi and putty), openSUSE (catdoc), Red Hat (collectd), and Ubuntu (expat, openldap, spice, and tiff).
As I explained in my previous Security Blog post, a hardware security module (HSM) is a hardware device designed with the security of your data and cryptographic key material in mind. It is tamper-resistant hardware that prevents unauthorized users from attempting to pry open the device, plug in any extra devices to access data or keys such as subtokens, or damage the outside housing. The HSM device that AWS CloudHSM offers is the Luna SA 7000 (also called the SafeNet Network HSM 7000), which is made by Gemalto. Depending on the firmware version you install, many of the security properties of these HSMs will have been validated under Federal Information Processing Standard (FIPS) 140-2, a standard issued by the National Institute of Standards and Technology (NIST) for cryptographic modules. These standards are in place to protect the integrity and confidentiality of the data stored on cryptographic modules.
To help ensure its continued use, functionality, and support from AWS, we suggest that you update your AWS CloudHSM device software and firmware as well as the client instance software to current versions offered by AWS. As of the publication of this blog post, the current non-FIPS-validated versions are 5.4.9/client, 5.3.13/software, and 6.20.2/firmware, and the current FIPS-validated versions are 5.4.9/client, 5.3.13/software, and 6.10.9/firmware. (The firmware version determines FIPS validation.) It is important to know your current versions before updating so that you can follow the correct update path.
In this post, I demonstrate how to update your current CloudHSM devices and client instances so that you are using the most current versions of software and firmware. If you contact AWS Support for CloudHSM hardware and application issues, you will be required to update to these supported versions before proceeding. Also, any newly provisioned CloudHSM devices will use these supported software and firmware versions only, and AWS does not offer “downgrade” options.
Note: Before you perform any updates, check with your local CloudHSM administrator and application developer to verify that these updates will not conflict with your current applications or architecture.
Overview of the update process
To update your client and CloudHSM devices, you must follow both update paths offered by AWS. The first path updates the software on your client instance, also known as a control instance. The second path updates the software and then the firmware on your CloudHSM devices. The CloudHSM software must be updated before the firmware because the firmware depends on the software to work appropriately.
As I demonstrate in this post, the correct update order is:
Updating your client instance
Updating your CloudHSM software
Updating your CloudHSM firmware
To update your client instance, you must have the private SSH key you created when you first set up your client environment. This key is used to connect via SSH protocol on port 22 of your client instance. If you have more than one client instance, you must repeat this connection and update process on each of them. The following diagram shows the flow of an SSH connection from your local network to your client instances in the AWS Cloud.
After you update your client instance to the most recent software (5.3.13), you then must update the CloudHSM device software and firmware. First, you must initiate an SSH connection from any one client instance to each CloudHSM device, as illustrated in the following diagram. A successful SSH connection will have you land at the Luna shell, denoted by lunash:>. Second, you must be able to initiate a Secure Copy (SCP) of files to each device from the client instance. Because the software and firmware updates require an elevated level of privilege, you must have the Security Officer (SO) password that you created when you initialized your CloudHSM devices.
After you have completed all updates, you can receive enhanced troubleshooting assistance from AWS, if you need it. When new versions of software and firmware are released, AWS performs extensive testing to ensure your smooth transition from version to version.
Detailed guidance for updating your client instance, CloudHSM software, and CloudHSM firmware
1. Updating your client instance
Let’s start by updating your client instances. My client instance and CloudHSM devices are in the eu-west-1 region, but these steps work the same in any AWS region. Because Gemalto offers client instances in both Linux and Windows, I will cover steps to update both. I will start with Linux. Please note that all commands should be run as the “root” user.
Updating the Linux client
SSH from your local network into the client instance. You can do this from Linux or Windows. Typically, you would do this from the directory where you have stored your private SSH key, using a command like the following in a terminal or PuTTY. This initiates the SSH connection by pointing to the path of your SSH key and specifying the user name and IP address of your client instance.
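For example, on Linux the command might look like this sketch (the key path and IP address are placeholders; ec2-user is the default user on Amazon Linux):
$ ssh -i /path/to/<private_key_file> ec2-user@<client_instance_ip>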
After the SSH connection is established, you must stop all applications and services on the instance that are using the CloudHSM device. This is required because you are removing old software and installing new software in its place. After you have stopped all applications and services, remove the existing client software with the following command:
/usr/safenet/lunaclient/bin/uninstall.sh
This command removes the old client software, but does not remove your configuration file or certificates. These are saved in the Chrystoki.conf file in your /etc directory and in your /usr/safenet/lunaclient/cert directory. Do not delete these files, because you would lose the configuration of your CloudHSM devices and client connections.
Download the new client software package, cloudhsm-safenet-client, extract the archive, and run the install script:
SafeNet-Luna-client-5-4-9/linux/64/install.sh
Make sure you choose the Luna SA option when presented with it. Because the directory where your certificates are installed is the same, you do not need to copy these certificates to another directory. You do, however, need to ensure that the Chrystoki.conf file, located at /etc/Chrystoki.conf, has the same path and name for the certificates as when you created them. (The path and names should not have changed, but you should still verify they are the same as before the update.)
Check that the PATH environment variable includes the directory /usr/safenet/lunaclient/bin so that applications and services restart without issues; a quick check is shown below. With that verified, the update process for your Linux client instance is complete.
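A minimal sketch of that check (and a fix for the current session, if needed):
# Verify that the client binaries are on the PATH
$ echo $PATH
# If /usr/safenet/lunaclient/bin is missing, add it for the current session
$ export PATH=$PATH:/usr/safenet/lunaclient/bin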
Updating the Windows client
Use the following steps to update your Windows client instances:
After you establish the RDP connection, stop all applications and services on the instance that are using the CloudHSM device. This is required because you will remove old software and install new software in its place, or overwrite existing files. If your client software version is older than 5.4.1, you need to completely remove it and all patches by using Programs and Features in the Windows Control Panel. If your client software version is 5.4.1 or newer, proceed without removing the software. Your configuration file will remain intact in the crystoki.ini file of your C:\Program Files\SafeNet\Lunaclient\ directory. All certificates are preserved in the C:\Program Files\SafeNet\Lunaclient\cert\ directory. Again, do not delete these files, or you will lose all configuration and client connection data.
After you have completed these steps, download the new client software package, cloudhsm-safenet-client. Extract the archive from the downloaded file, and launch the SafeNet-Luna-client-5-4-9\win\64\Lunaclient.msi installer. Choose the Luna SA option when it is presented to you. Because the directory where your certificates are installed is the same, you do not need to copy these certificates to another directory. You do, however, need to ensure that the crystoki.ini file, which is located at C:\Program Files\SafeNet\Lunaclient\crystoki.ini, has the same path and name for the certificates as when you created them. (The path and names should not have changed, but you should still verify they are the same as before the update.)
Make one last check that the PATH environment variable includes the directory C:\Program Files\SafeNet\Lunaclient\ so that applications and services restart without issues. The update process for your Windows client instance is now complete.
2. Updating your CloudHSM software
Now that your clients are up to date with the most current software version, it’s time to move on to your CloudHSM devices. A few important notes:
Back up your data to a Luna SA Backup device. AWS does not sell or support the Luna SA Backup devices, but you can purchase them from Gemalto. We do, however, offer the steps to back up your data to a Luna SA Backup device. Do not update your CloudHSM devices without backing up your data first.
If the names of the clients used for Network Trust Link Service (NTLS) connections have a capital “T” as the eighth character, the clients will not work after this update because of a Gemalto naming convention. Before upgrading, make sure you modify your client names accordingly. The NTLS connection uses two-way digital certificate authentication and SSL data encryption to protect sensitive data transmitted between your CloudHSM device and the client instances.
The syslog configuration for the CloudHSM devices will be lost. After the update is complete, notify AWS Support and we will update the configuration for you.
Now on to updating the software versions. There are actually three different update paths to follow, and I will cover each. Depending on the current software versions on your CloudHSM devices, you might need to follow all three or just one.
Updating the software from version 5.1.x to 5.1.5
If you are running any version of the software older than 5.1.5, you must first update to version 5.1.5 before proceeding. To update to version 5.1.5:
Stop all applications and services that access the CloudHSM device.
In the following command, <private_key_file> is the private portion of your SSH key pair and <hsm_ip_address> is the IP address of your CloudHSM elastic network interface (ENI). The ENI is the network endpoint that permits access to your CloudHSM device. The IP address was supplied to you when the CloudHSM device was provisioned.
Use the following command to connect to your CloudHSM device and log in with your Security Officer (SO) password.
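A sketch of that connection, assuming the standard Luna SA manager account and the lunash hsm login command for SO authentication:
$ ssh -i <private_key_file> manager@<hsm_ip_address>
lunash:> hsm login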
The value you will use for <auth_code> is contained in the lunasa_update-5.1.5-2.auth file found in the 630-010165-018_reva.tar archive you downloaded in Step 2.
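The upload and update commands themselves are sketched below, assuming the standard Luna SA scp upload and package update syntax; the same pattern applies to the 5.3.10 and 5.3.13 packages in the following sections.
# From the client instance: copy the update package to the CloudHSM device
$ scp -i <private_key_file> lunasa_update-5.1.5-2.spkg manager@<hsm_ip_address>:
# From the lunash shell on the device: apply the update with the auth code
lunash:> package update lunasa_update-5.1.5-2 -authcode <auth_code>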
Reboot the CloudHSM device by running the following command.
lunash:> sysconf appliance reboot
When all the steps in this section are completed, you will have updated your CloudHSM software to version 5.1.5. You can now move on to update to version 5.3.10.
Updating the software to version 5.3.10
You can update to version 5.3.10 only if you are currently running version 5.1.5. To update to version 5.3.10:
Stop all applications and services that access the CloudHSM device.
The value you will use for <auth_code> is contained in the lunasa_update-5.3.10-7.auth file found in the SafeNet-Luna-SA-5-3-10.zip archive you downloaded in Step 2.
Reboot the CloudHSM device by running the following command.
lunash:> sysconf appliance reboot
When all the steps in this section are completed, you will have updated your CloudHSM software to version 5.3.10. You can now move on to update to version 5.3.13.
Note: Do not configure your applog settings at this point; you must first update the software to version 5.3.13 in the following step.
Updating the software to version 5.3.13
You can update to version 5.3.13 only if you are currently running version 5.3.10. If you are not already running version 5.3.10, follow the two update paths mentioned previously in this section.
To update to version 5.3.13:
Stop all applications and services that access the CloudHSM device.
The value you will use for <auth_code> is contained in the lunasa_update-5.3.13-1.auth file found in the SafeNet-Luna-SA-5-3-13.zip archive that you downloaded in Step 2.
When updating to this software version, the option to update the firmware also is offered. If you do not require a version of the firmware validated under FIPS 140-2, accept the firmware update to version 6.20.2. If you do require a version of the firmware validated under FIPS 140-2, do not accept the firmware update and instead update by using the steps in the next section, “Updating your CloudHSM FIPS 140-2 validated firmware.”
After updating the CloudHSM device, reboot it by running the following command.
lunash:> sysconf appliance reboot
Disable NTLS IP checking on the CloudHSM device so that it can operate within its VPC. To do this, run the following command.
lunash:> ntls ipcheck disable
When all the steps in this section are completed, you will have updated your CloudHSM software to version 5.3.13. If you don’t need the FIPS 140-2 validated firmware, you will have also updated the firmware to version 6.20.2. If you do need the FIPS 140-2 validated firmware, proceed to the next section.
3. Updating your CloudHSM FIPS 140-2 validated firmware
To update the FIPS 140-2 validated version of the firmware to 6.10.9, use the following steps:
The value you will use for <auth_code> is contained in the 630-010430-010_SPKG_LunaFW_6.10.9.auth file found in the 630-010430-010_SPKG_LunaFW_6.10.9.zip archive that you downloaded in Step 1.
Run the following command to update the firmware of the CloudHSM devices.
lunash:> hsm update firmware
After you have updated the firmware, reboot the CloudHSM devices to complete the installation.
lunash:> sysconf appliance reboot
Summary
In this blog post, I walked you through how to update your existing CloudHSM devices and clients so that they are using supported client, software, and firmware versions. Per AWS Support and CloudHSM Terms and Conditions, your devices and clients must use the most current supported software and firmware for continued troubleshooting assistance. Software and firmware versions regularly change based on customer use cases and requirements. Because AWS tests and validates all updates from Gemalto, you must install all updates for firmware and software by using our package links described in this post and elsewhere in our documentation.
If you have comments about this blog post, submit them in the “Comments” section below. If you have questions about implementing this solution, please start a new thread on the CloudHSM forum.
Security updates have been issued by Arch Linux (firefox, mbedtls, and wordpress), CentOS (firefox, openjpeg, and tomcat6), Debian (deluge, ioquake3, r-base, and wireshark), Fedora (qemu, rabbitmq-server, and sscg), Gentoo (adobe-flash, openoffice-bin, and putty), openSUSE (Chromium, irssi, putty, and roundcubemail), Oracle (firefox and openjpeg), Red Hat (firefox and openjpeg), Scientific Linux (firefox and openjpeg), and SUSE (firefox).
Gone, it would seem, are the days of ‘Hello, My name is…’ stickers and Sharpies. Who wants a simple sticker on their chest, so flat and dull, when they can wear an entire computer, displaying their name and face in pixelated perfection?
With this PiE-Ink Name Badge, maker Josh King has taken this simple means of identification and upgraded it. And in his Instructables tutorial, he explains exactly how. But here’s the TL;DR for those wanting to get the basic gist of the build.
For the badge, Josh uses a Raspberry Pi Zero, a PaPiRus 2″ e-ink HAT, an Adafruit Powerboost 1000c, and a LiPo battery. He also uses various other components, such as magnets and adhesive putty.
Josh preps the Zero, soldering the header pins in place, and then attaches the Powerboost, allowing the LiPo battery to power the unit and be charged at the same time.
From there, he attaches the PaPiRus HAT and secures the whole thing with the putty, to ensure a snug fit. He also attaches a mini slide switch to allow an on/off function.
Having pre-installed Raspbian on the SD card, Josh follows the setup for the PaPiRus, ensuring all library information is in place and that the Pi recognises the 2″ screen. The code for the badge can then be downloaded directly from Josh’s GitHub account. You’ll need to scale your image down to 200×96 in order for it to fit on the e-ink screen.
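If you have ImageMagick installed, one hypothetical way to do that scaling is a one-liner like this (file names are placeholders):
$ convert your-photo.png -resize '200x96!' badge.png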
And there you have it. One Raspberry Pi Zero e-ink name badge, ready for you to show off at the next work function, conference, or when you visit Grandma and she still can’t get your name right.
A hardware security module (HSM) is a hardware device designed with the security of your data and cryptographic key material in mind. It is tamper-resistant hardware that prevents unauthorized users from attempting to pry open the device, plug any extra devices in to access data or keys such as subtokens, or damage the outside housing. If any such interference occurs, the device wipes all information stored so that unauthorized parties do not gain access to your data or cryptographic key material. A high-availability (HA) setup could be beneficial because, with multiple HSMs kept in different data centers and all data synced between them, the loss of one HSM does not mean the loss of your data.
In this post, I will walk you through steps to remove single points of failure in your AWS CloudHSM environment by setting up an HA partition group. Single points of failure occur when a single CloudHSM device fails in a non-HA configuration, which can result in the permanent loss of keys and data. The HA partition group, however, allows for one or more CloudHSM devices to fail, while still keeping your environment operational.
Prerequisites
You will need a few things to build your HA partition group with CloudHSM:
2 CloudHSM devices. AWS offers a free two-week trial. AWS will provision the trial for you and send you the CloudHSM information such as the Elastic Network Interface (ENI) and the private IP address assigned to the CloudHSM device so that you may begin testing. If you have used CloudHSM before, another trial cannot be provisioned, but you can set up production CloudHSM devices on your own. See Provisioning Your HSMs.
A client instance from which to access your CloudHSM devices. You can create this manually, or via an AWS CloudFormation template. You can connect to this instance in your public subnet, and then it can communicate with the CloudHSM devices in your private subnets.
An HA partition group, which ensures the syncing and load balancing of all CloudHSM devices you have created.
The CloudHSM setup process takes about 30 minutes from beginning to end. By the end of this how-to post, you should be able to set up multiple CloudHSM devices and an HA partition group in AWS with ease. Keep in mind that each production CloudHSM device you provision comes with an up-front fee of $5,000. You are not charged for any CloudHSM devices provisioned for a trial, unless you decide to move them to production when the trial ends.
If you decide to move your provisioned devices to your production environment, you will be billed $5,000 per device. If you decide to stop the trial so as not to be charged, you have up to 24 hours after the trial ends to let AWS Support know of your decision.
Solution overview
How HA works
HA is a feature of the Luna SA 7000 HSM hardware device AWS uses for its CloudHSM service. (Luna SA 7000 HSM is also known as the “SafeNet Network HSM” in more recent SafeNet documentation. Because AWS documentation refers to this hardware as “Luna SA 7000 HSM,” I will use this same product name in this post.) This feature allows more than one CloudHSM device to be placed as members in a load-balanced group setup. By having more than one device on which all cryptographic material is stored, you remove any single points of failure in your environment.
You access your CloudHSM devices in this HA partition group through one logical endpoint, which distributes traffic to the CloudHSM devices that are members of this group in a load-balanced fashion. Even though traffic is balanced between the HA partition group members, any new data or changes in data that occur on any CloudHSM device will be mirrored for continuity to the other members of the HA partition group. A single HA partition group is logically represented by a slot, which is physically composed of multiple partitions distributed across all HA nodes. Traffic is sent through the HA partition group, and then distributed to the partitions that are linked. All partitions are then synced so that data is persistent on each one identically.
The following diagram illustrates the HA partition group functionality.
Application servers send traffic to your HA partition group endpoint.
The HA partition group takes all requests and distributes them evenly between the CloudHSM devices that are members of the HA partition group.
Each CloudHSM device mirrors itself to each other member of the HA partition group to ensure data integrity and availability.
Automatic recovery
If you ever lose data, you want a hands-off, quick recovery. Before autoRecovery was introduced, you could take advantage of the redundancy and performance HA partition groups offer, but you were still required to manually intervene when a group member was lost.
HA partition group members may fail for a number of reasons, including:
Loss of power to a CloudHSM device.
Loss of network connectivity to a CloudHSM device. If network connectivity is lost, it will be seen as a failed device and recovery attempts will be made.
Recovery of partition group members will only work if the following are true:
HA autoRecovery is enabled.
There are at least two nodes (CloudHSM devices) in the HA partition group.
Connectivity is established at startup.
The recover retry limit is not reached (if reached or exceeded, the only option is manual recovery).
HA autoRecovery is not enabled by default and must be explicitly enabled by running the following command, which is found in Enabling Automatic Recovery.
>vtl haAdmin -autoRecovery -retry <count>
When enabling autoRecovery, set the -retry and -interval parameters. The -retry parameter can be a value between 0 and 500 (or -1 for infinite retries) and equals the number of times the CloudHSM device will attempt automatic recovery. The -interval parameter is in seconds and can be any value between 60 and 1200; this is the amount of time the CloudHSM device waits between automatic recovery attempts.
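For example, to have each device attempt recovery up to three times, waiting two minutes between attempts (illustrative values):
>vtl haAdmin -autoRecovery -retry 3 -interval 120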
Setting up two Production CloudHSM devices in AWS
Now that I have discussed how HA partition groups work and why they are useful, I will show how to set up your CloudHSM environment and the HA partition group itself. To create an HA partition group environment, you need a minimum of two CloudHSM devices. You can have as many as 16 CloudHSM devices associated with an HA partition group at any given time. These must be associated with the same account and region, but can be spread across multiple Availability Zones, which is the ideal setup for an HA partition group. Automatic recovery is great for larger HA partition groups because it allows the devices to quickly attempt recovery and resync data in the event of a failure, without requiring manual intervention.
Set up the CloudHSM environment
To set up the CloudHSM environment, you must have a few things already in place:
At least one public subnet and two private subnets.
An Amazon EC2 client instance (m1.small running Amazon Linux x86 64-bit) in the public subnet, with the SafeNet client software already installed. This instance uses the key pair that you specified during creation of the CloudFormation stack. You can find a ready-to-use Amazon Machine Image (AMI) in our Community AMIs. Simply log into the EC2 console, choose Launch Instance, click Community AMIs, and search for CloudHSM. Because we regularly release new AMIs with software updates, searching for CloudHSM will show all available AMIs for a region. Select the AMI with the most recent client version.
Two security groups, one for the client instance and one for the CloudHSM devices. The security group for the client instance, which resides in the public subnet, will allow SSH on port 22 from your local network. The security group for the CloudHSM devices, which resides in the private subnet, will allow SSH on port 22 and NTLS on port 1792 from your public subnet. These will both be ingress rules (egress rules allow all traffic).
An Elastic IP address for the client instance.
An IAM role that delegates AWS resource access to CloudHSM. You can create this role in the IAM console:
Click Roles and then click Create New Role.
Type a name for the role and then click Next Step.
Under AWS Service Roles, click Select next to AWS CloudHSM.
In the Attach Policy step, select AWSCloudHSMRole as the policy. Click Next Step.
Click Create Role.
We have a CloudFormation template available that will set up the CloudHSM environment for you:
Choose Create Stack. Specify https://cloudhsm.s3.amazonaws.com/cloudhsm-quickstart.json as the Amazon S3 template URL.
On the next two pages, specify parameters such as the Stack name, SSH Key Pair, Tags, and SNS Topic for alerts. You will find SNS Topic under the Advanced arrow. Then, click Create.
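If you prefer the AWS CLI, a hedged equivalent of the console launch looks like the following sketch (the stack name and parameter key are illustrative; check the template for the exact parameter names):
$ aws cloudformation create-stack \
    --stack-name cloudhsm-quickstart \
    --template-url https://cloudhsm.s3.amazonaws.com/cloudhsm-quickstart.json \
    --parameters ParameterKey=KeyName,ParameterValue=<your_key_pair> \
    --capabilities CAPABILITY_IAM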
When the new stack is in the CREATE_COMPLETE state, you will have the IAM role to be used for provisioning your CloudHSM devices, the private and public subnets, your client instance with Elastic IP (EIP), and the security groups for both the CloudHSM devices and the client instance. The CloudHSM security group will already have its necessary rules in place to permit SSH and NTLS access from your public subnet; however, you still must add the rules to the client instance’s security group to permit SSH access from your allowed IPs. To do this:
In the VPC console, make sure you select the same region as the region in which your HSM VPC resides.
Select the security group in your HSM VPC that will be used for the client instance.
Add an inbound rule that allows TCP traffic on port 22 (SSH) from your local network IP addresses: on the Inbound tab, from the Create a new rule list, select SSH, and enter the IP address range of the local network from which you will connect to your client instance.
Click Add Rule, and then click Apply Rule Changes.
After adding the IP rules for SSH (port 22) to your client instance's security group, test the connection by making an SSH connection from your local machine to your client instance's EIP. Make sure to write down all the subnet and role information, because you will need this later.
Create an SSH key pair
The SSH key pair that you will now create will be used by CloudHSM devices to authenticate the manager account when connecting from your client instance. The manager account is simply the user that is permitted to SSH to your CloudHSM devices. Before provisioning the CloudHSM devices, you create the SSH key pair so that you can provide the public key to the CloudHSM during setup. The private key remains on your client instance to complete the authentication process. You can generate the key pair on any computer, as long as you ensure that the client instance ends up with a copy of the private key. You can create the key pair on either Linux or Windows; I go over both processes in this section of the post.
In Linux, you will use the ssh-keygen command. By typing just this command into the terminal window, you will receive output similar to the following.
$ ssh-keygen
Generating public/private rsa key pair.
Enter file in which to save the key (/home/user/.ssh/id_rsa):
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /home/user/.ssh/id_rsa.
Your public key has been saved in /home/user/.ssh/id_rsa.pub.
The key fingerprint is: df:c4:49:e9:fe:8e:7b:eb:28:d5:1f:72:82:fb:f2:69
The key's randomart image is:
+--[ RSA 2048]----+
|                 |
|        .        |
|       o         |
|      + .        |
|     S *.        |
|    . =.o.o      |
|     ..+ +..     |
|     .o Eo .     |
|      .OO=.      |
+-----------------+
$
In Windows, you will use PuTTYgen. Start PuTTYgen. For Type of key to generate, select SSH-2 RSA.
In the Number of bits in a generated key field, specify 2048.
Click Generate.
Move your mouse pointer around in the blank area of the Key section below the progress bar (to generate some randomness) until the progress bar is full.
A private/public key pair has now been generated.
In the Key comment field, type a name for the key pair that you will remember.
Click Save public key and name your file.
Click Save private key and name your file. It is imperative that you do not lose this key, so make sure to store it somewhere safe.
Right-click the text field labeled Public key for pasting into OpenSSH authorized_keys file and choose Select All.
Right-click again in the same text field and choose Copy.
The following screenshot shows what the PuTTYgen output will look like after you have created the key pair.
The public key will be used to provision the CloudHSM device, and the private key will be stored on the client instance to authenticate the SSH sessions. The public SSH key will look something like the following; if it does not, it is not in the correct format and must be converted using the preceding procedure.
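For reference, an OpenSSH-format public key is a single line that starts with the key type; the base64 body shown here is truncated and purely illustrative.
$ cat ~/.ssh/id_rsa.pub
ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQDf3k9... user@workstation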
Whether you are saving the private key on your local computer or moving it to the client instance, you must ensure that the file permissions are correct. You can do this by running the following commands (throughout this post, be sure to replace placeholder content with your own values). The first command sets the necessary permissions; the second command adds the private key to the authentication agent.
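A minimal sketch, assuming the private key was saved as ~/.ssh/<private_key_file>:
$ chmod 600 ~/.ssh/<private_key_file>    # owner-only permissions, required by SSH
$ ssh-add ~/.ssh/<private_key_file>      # add the key to the authentication agent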
Now that you have your SSH key pair ready, you can set up the AWS CLI tools so that you may provision and manage your CloudHSM devices. If you used the CloudFormation template or the CloudHSM AMI to set up your client instance, you already have the CLI installed. You can check this by running CloudHSM version at the command prompt; the resulting output should be "Version": "3.0.5". If you chose to use your own AMI and install the Luna SA software, you can install the CloudHSM CLI Tools with the following steps. The current version in use is 3.0.5.
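For example, the check and its expected result (the output shape here is illustrative):
$ CloudHSM version
"Version": "3.0.5"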
You must also set up file and directory ownership for your user on the client instance, including the Chrystoki.conf file. The Chrystoki.conf file is the configuration file for the CloudHSM device. CloudHSM devices come from the factory ready to house cryptographic keys and perform cryptographic operations on data, but they must be configured to connect to your client instances:
On the client instance, set the owner and write permission on the Chrystoki.conf file. The <owner> can be either the user or a group the user belongs to (for example, ec2-user).
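A minimal sketch, assuming the file is at /etc/Chrystoki.conf (the location referenced later in this post); the exact mode is illustrative, the goal being write access for <owner>:
$ sudo chown <owner> /etc/Chrystoki.conf
$ sudo chmod 664 /etc/Chrystoki.conf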
On the client instance set the owner of the Luna client directory:
$ sudo chown <owner> -R <luna_client_dir>
The <owner> should be the same as the <owner> of the Chrystoki.conf file. The <luna_client_dir> differs based on the version of the LunaSA client software installed. New setups use version 5.3 or newer; older setups may still run version 5.1. Use the directory that matches your installed version:
Client software version 5.3: /usr/safenet/lunaclient/
Client software version 5.1: /usr/lunasa/
You also must configure the AWS CLI tools with AWS credentials to use for the API calls. These can be set by config files, by passing the credentials in the commands, or by instance profile. The most secure option, which eliminates the need to hard-code credentials in a config file, is to use an instance profile on your client instance. All CLI commands in this post are performed on a client instance launched with an IAM role that has CloudHSM permissions. If you want to set your credentials in a config file instead, remember that each CLI command must include --profile <profilename>, with <profilename> being the name you assigned in the config file for these credentials. See Configuring the AWS CloudHSM CLI Tools for help with setting up the AWS CLI tools.
You will then set up a persistent SSH tunnel for all CloudHSM devices to communicate with the client instance. This is done by editing the ~/.ssh/config file. Replace <CloudHSM_ip_address> with the private IP of your CloudHSM device, and replace <private_key_file> with the file location of your SSH private key created earlier (for example, /home/user/.ssh/id_rsa).
Host <CloudHSM_ip_address>
    User manager
    IdentityFile <private_key_file>
The client instance also needs client certificates to authenticate with the CloudHSM partitions or partition group. Depending on the LunaSA client software you are using, the location of these files differs. Again, new setups use version 5.3 or newer, while older setups may still run version 5.1:
Linux clients
Client software version 5.3: /usr/safenet/lunaclient/cert
Client software version 5.1: /usr/lunasa/cert
Windows clients
Client software version 5.3: %ProgramFiles%\SafeNet\LunaClient\cert
Client software version 5.1: %ProgramFiles%\LunaSA\cert
To create the client certificates, you can use the OpenSSL Toolkit or the LunaSA client-side vtl commands. The OpenSSL Toolkit is a program that allows you to manage TLS (Transport Layer Security) and SSL (Secure Sockets Layer) protocols. It is commonly used to create SSL certificates for secure communication between internal network devices. The LunaSA client-side vtl commands are installed on your client instance along with the Luna software. If you used either CloudFormation or the CloudHSM AMI, the vtl commands are already installed for you. If you chose to launch a different AMI, you can download the Luna software. After you download the software, run the command linux/64/install.sh as root on a Linux instance and install the Luna SA option. If you install the software on a Windows instance, run the command windows\64\LunaClient.msi to install the Luna SA option. I show certificate creation in both the OpenSSL Toolkit and LunaSA in the following section.
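OpenSSL Toolkit: a representative invocation that creates a ten-year, self-signed client certificate and key in the locations described next (the flags are standard OpenSSL; subject fields are prompted for interactively):
$ sudo openssl req -x509 -newkey rsa:2048 -nodes -days 3650 \
-keyout <luna_client_cert_dir>/<client_name>Key.pem \
-out <luna_client_cert_dir>/<client_name>.pem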
The <luna_client_cert_dir> is the LunaSA Client certificate directory on the client and the <client_name> can be whatever you choose.
LunaSA
$ sudo vtl createCert -n <client_name>
The output of the preceding LunaSA command will be similar to the following.
Private Key created and written to:
<luna_client_cert_dir>/<client_name>Key.pem
Certificate created and written to:
<luna_client_cert_dir>/<client_name>.pem
You will need these key file locations later on, so make sure you write them down or save them to a file. One last thing to do at this point is create the client Amazon Resource Name (ARN), which you do by running the following command.
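A sketch of that command, assuming the CloudHSM CLI's create-client call takes the client certificate created above:
$ CloudHSM create-client --certificate-file <luna_client_cert_dir>/<client_name>.pem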
Also write down the client ARN in a safe location, because you will need it when registering your client instances to the HA partition group.
Provision your CloudHSM devices
Now for the fun and expensive part. Always remember that for each CloudHSM device you provision to your production environment, there is an upfront fee of $5,000. Because you need more than one CloudHSM device to set up an HA partition group, provisioning two CloudHSM devices to production will cost an upfront fee of $10,000.
If this is your first time trying out CloudHSM, you can have a two-week trial provisioned for you at no cost. The only cost will occur if you decide to keep your CloudHSM devices and move them into production. If you are unsure of the usage in your company, I highly suggest doing a trial first. You can open a support case requesting a trial at any time. You must have a paid support plan to request a CloudHSM trial.
To provision the two CloudHSM devices, SSH into your client instance and run the following CLI command.
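A sketch, assuming the CloudHSM CLI's create-hsm call and the subnet, SSH public key, and IAM role from the environment setup; run it once per device:
$ CloudHSM create-hsm \
--subnet-id <subnet_id> \
--ssh-public-key-file <public_key_file> \
--iam-role-arn <iam_role_arn>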
Make note of each CloudHSM ARN because you will need them to initialize the CloudHSM devices and later add them to the HA partition group.
Initialize the CloudHSM devices
Configuring your CloudHSM devices, or initializing them as the process is formally called, is what allows you to set up the configuration files, certificate files, and passwords on the CloudHSM itself. Because you already have your CloudHSM ARNs from the previous section, you can run the describe-hsm command to get the EniId and the EniIp of the CloudHSM devices. Your results should be similar to the following.
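A sketch of the call, with the output trimmed to the two fields you need (the values shown are illustrative):
$ CloudHSM describe-hsm --hsm-arn <CloudHSM_arn>
{
    "EniId": "eni-xxxxxxxx",
    "EniIp": "10.0.2.100"
}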
Now that you know the EniId of each CloudHSM, you need to apply the CloudHSM security group to them. This ensures that connection can occur from any instance with the client security group assigned. When a trial is provisioned for you, or you provision CloudHSM devices yourself, the default security group of the VPC is automatically assigned to the ENI. You must change this to the security group that permits ingress on ports 22 and 1792 from your client instance.
To apply a CloudHSM security group to an EniId:
Go to the EC2 console, and choose Network Interfaces in the left pane.
Select the EniId of the CloudHSM.
From the Actions drop-down list, choose Change Security Groups. Choose the security group for your CloudHSM device, and then click Save.
To complete the initialization process, you must ensure a persistent SSH connection is in place from your client to the CloudHSM. Remember the ~/.ssh/config file you edited earlier? Now that you have the IP address of the CloudHSM devices and the location of the private SSH key file, go back and fill in that config file’s parameters by using your favorite text editor.
Now, initialize using the initialize-hsm command with the information you gathered from the provisioning steps. The angle-bracketed values in the following example are placeholders for your own naming and password conventions, and should be replaced with your information during the initialization of the CloudHSM devices.
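A sketch, assuming the CloudHSM CLI's initialize-hsm call; the placeholders are explained below:
$ CloudHSM initialize-hsm \
--hsm-arn <CloudHSM_arn> \
--label <label> \
--cloning-domain <cloning_domain> \
--so-password <so_password>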
The <label> is a unique name for the CloudHSM device that should be easy to remember. You can also use this name as a descriptive label that tells what the CloudHSM device is for. The <cloning_domain> is a secret used to control cloning of key material from one CloudHSM to another. This can be any unique name that fits your company's naming conventions, such as exampleproduction or exampledevelopment. If you are going to set up an HA partition group environment, the <cloning_domain> must be the same across all CloudHSMs. The <so_password> is the security officer password for the CloudHSM device; for ease of remembrance, it should be the same across all devices as well. It is important that you use passwords and cloning domain names you will remember, because they are unrecoverable and losing them means losing all data on a CloudHSM device. For your use, we do supply a Password Worksheet if you want to write down your passwords and store the printed page in a secure place.
Configure the client instance
Configuring the client instance is important because it is the secure link between you, your applications, and the CloudHSM devices. The client instance opens a secure channel to the CloudHSM devices and sends all requests over this channel so that the CloudHSM device can perform the cryptographic operations and key storage. Because you already have launched the client instance and mostly configured it, the only step left is to create the Network Trust Link (NTL) between the client instance and the CloudHSM. For this, we will use the LunaSA vtl commands again.
Copy the server certificate from the CloudHSM to the client.
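A sketch, assuming the manager account from earlier and the LunaSA vtl addServer command, which registers the server certificate with the client:
$ scp manager@<CloudHSM_ip_address>:server.pem .
$ sudo vtl addServer -n <CloudHSM_ip_address> -c server.pem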
The <client_id> and <client_name> should be the same for ease of use, and this should be the same as the name you used when you created your client certificate.
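A sketch of that registration, assuming the client certificate is first copied to the CloudHSM and then registered with the lunash client register command:
$ scp <luna_client_cert_dir>/<client_name>.pem manager@<CloudHSM_ip_address>:
lunash:> client register -client <client_id> -hostname <client_name>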
On the CloudHSM, log in with the SO password.
lunash:> hsm login
Create a partition on each CloudHSM (use the same name for ease of remembrance).
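A sketch, assuming the lunash partition create and client assignPartition commands:
lunash:> partition create -partition <partition_name> -password <partition_password>
lunash:> client assignPartition -client <client_id> -partition <partition_name>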
Verify that the partition assigning went correctly.
lunash:> client show -client <client_id>
Log in to the client and verify it has been properly configured.
$ vtl verify
The following Luna SA Slots/Partitions were found:
Slot    Serial #        Label
====    =========       ============
1       <serial_num1>   <partition_name>
2       <serial_num2>   <partition_name>
You should see an entry for each partition created on each CloudHSM device. This step lets you know that the CloudHSM devices and client instance were properly configured.
The partitions created and assigned via the previous steps are for testing purposes only and will not be used in the HA partition group setup. The HA partition group workflow will automatically create a partition on each CloudHSM device for its purposes. At this point, you have created the client and at least two CloudHSM devices. You also have set up and tested for validity the connection between the client instance and the CloudHSM devices. The next step is to ensure fault tolerance by setting up the HA partition group.
Set up the HA partition group for fault tolerance
Now that you have provisioned multiple CloudHSM devices in your account, you will add them to an HA partition group. As I explained earlier in this post, an HA partition group is a virtual partition that represents a group of partitions distributed over many physical CloudHSM devices for HA. Automatic recovery is also a key factor in ensuring HA and data integrity across your HA partition group members. If you followed the previous procedures in this post, setting up the HA partition group should be relatively straightforward.
Create the HA partition group
First, you will create the actual HA partition group itself. Using the CloudHSM CLI on your client instance, run the following command to create the HA partition group and name it per your company’s naming conventions. In the following command, replace <label> with the name you chose.
$ CloudHSM create-hapg --group-label <label>
Register the CloudHSM devices with the HA partition group
Now, add the already initialized CloudHSM devices to the HA partition group. You will need to run the following command for each CloudHSM device you want to add to the HA partition group.
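A sketch, assuming the CloudHSM CLI's add-hsm-to-hapg call; the cloning domain and passwords must match those set during initialization:
$ CloudHSM add-hsm-to-hapg \
--hsm-arn <CloudHSM_arn> \
--hapg-arn <hapg_arn> \
--cloning-domain <cloning_domain> \
--partition-password <partition_password> \
--so-password <so_password>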
You should see output similar to the following after each successful addition to the HA partition group.
{
"Status": "Addition of CloudHSM <CloudHSM_arn> to HA partition group <hapg_arn> successful"
}
Register the client with the HA partition group
The last step is to register the client with the HA partition group. You will need the client ARN from earlier in the post, and you will use the CloudHSM CLI command register-client-to-hapg to complete this process.
$ CloudHSM register-client-to-hapg \
--client-arn <client_arn> \
--hapg-arn <hapg_arn>
{
"Status": "Registration of the client <client_arn> to the HA partition group <hapg_arn> successful"
}
After you register the client with the HA partition group, you still need to retrieve the client configuration file and the server certificates onto the client instance; registration alone does not put those files in place. You do this by using the get-client-configuration AWS CLI command.
$ CloudHSM get-client-configuration \
--client-arn <client_arn> \
--hapg-arn <hapg_arn> \
--cert-directory <server_cert_location> \
--config-directory /etc/
The configuration file has been copied to /etc/
The server certificate has been copied to <server_cert_location>
The <server_cert_location> will differ depending on the LunaSA client software you are using:
Client software version 5.3: /usr/safenet/lunaclient/cert/server
Client software version 5.1: /usr/lunasa/cert/server
Lastly, to verify the client configuration, run the following LunaSA vtl command.
$ vtl haAdmin show
In the output, you will see a heading, HA Group and Member Information. Ensure that the number of group members equals the number of CloudHSM devices you added to the HA partition group. If the number does not match what you have provisioned, you might have missed a step in the provisioning process. Going back through the provisioning process usually repairs this. However, if you still encounter issues, opening a support case is the quickest way to get assistance.
Another way to verify the HA partition group setup is to check the /etc/Chrystoki.conf file for output similar to the following.
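A hypothetical fragment of the HA group section; the labels and serial numbers here are placeholders, and the exact layout depends on your client version:
VirtualToken = {
   VirtualToken00Label = <hapg_label>;
   VirtualToken00SN = <virtual_token_serial>;
   VirtualToken00Members = <member_serial_1>,<member_serial_2>;
}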
You have now completed the process of provisioning CloudHSM devices, the client instance for connection, and your HA partition group for fault tolerance. You can begin using an application of your choice to access the CloudHSM devices for key management and encryption. By accessing CloudHSM devices via the HA partition group, you ensure that all traffic is load balanced between all backing CloudHSM devices. The HA partition group will ensure that each CloudHSM has identical information so that it can respond to any request issued.
Now that you have an HA partition group set up with automatic recovery, if a CloudHSM device fails, the device will attempt to recover itself, and all traffic will be rerouted to the remaining CloudHSM devices in the HA partition group so as not to interrupt traffic. After recovery (manual or automatic), all data will be replicated across the CloudHSM devices in the HA partition group to ensure consistency.
If you have questions about any part of this blog post, please post them on the IAM forum.
Here’s a really neat solution from the inestimable Dan “PiGlove, mind where you put the capitals” Aldred. If you’re not able to get to another screen or monitor, or if you’re on the move, this is a very tidy way to get set up.
A comprehensive video covering how to set up your Raspberry Pi Zero so that you can access it via the USB port. Yes, plug it into a USB port and you can use the command line or, with a few tweaks, a full graphical desktop.
This is a really comprehensive guide, taking you all the way from flashing an SD card, accessing your Pi Zero via Putty, installing VNC and setting up a graphical user interface, to running Minecraft. Dan’s a teacher, and this video is perfect for beginners; if you find it helpful, please let us know in the comments!