On January 6, 2020, the U.S. Department of Homeland Security’s Cybersecurity and Infrastructure Security Agency (CISA) released an alert (AA20-006A) that highlighted measures for critical infrastructure to prepare for information security risks, but which are also relevant to all organizations. The CISA alert focuses on vulnerability mitigation and incident preparation.
At AWS, security is our core function and highest priority and, as always, we are engaged with the U.S. Government and other responsible national authorities regarding the current threat landscape. We are taking all appropriate steps to ensure that our customers and infrastructure remain protected, and we encourage our customers to do the same with their systems and workloads, whether in the cloud or on-premises.
The CISA recommendations reflect general guidance, as well as specific mitigations and monitoring that can help address information security risks. In this post, we provide customers with resources they can use to apply the CISA recommendations to their environment and implement other best practices to protect their resources. Specifically, we highlight the security principles and mechanisms in the AWS Well-Architected Framework, as well as posts on AWS security best practices, that can help you address the issues described in the alert.
The specific techniques described in the CISA alert are almost all related to issues that exist in an on-premises Windows or Linux operating system and network environment, and are not directly related to cloud computing. However, the precautions described may be applicable to the extent customers are using those operating systems in an Amazon Elastic Compute Cloud (Amazon EC2) virtual machine environment. There are also cloud-specific technologies and issues that should be considered and addressed. Customers can use the information provided in the table below to help address the issues.
We’re also including links to GitHub repositories that can help you automate some of these practices, as well as to the AWS Security Incident Response whitepaper, which can assist with planning for and responding to security events. We strongly recommend that you review your runbooks, disaster recovery plans, and backup procedures.
If you have feedback about this post, submit comments in the Comments section below. If you have questions about this blog post, please contact your AWS Account Manager or contact AWS Support. If you need urgent help or have relevant information about an existing security issue, contact your AWS account representative.
Want more AWS Security how-to content, news, and feature announcements? Follow us on Twitter.
The DHS is requiring all federal agencies to develop a vulnerability disclosure policy. The goal is that people who discover vulnerabilities in government systems have a mechanism for reporting them to someone who might actually do something about it.
The devil is in the details, of course, but this is a welcome development.
Back in October, Bloomberg reported that China has managed to install backdoors into server equipment that ended up in networks belonging to — among others — Apple and Amazon. Pretty much everybody has denied it (including the US DHS and the UK NCSC). Bloomberg has stood by its story — and is still standing by it.
I don’t think it’s real. Yes, it’s plausible. But first of all, if someone actually surreptitiously put malicious chips onto motherboards en masse, we would have seen a photo of the alleged chip already. And second, there are easier, more effective, and less obvious ways of adding backdoors to networking equipment.
If you store sensitive or confidential data in Amazon DynamoDB, you might want to encrypt that data as close as possible to its origin so your data is protected throughout its lifecycle.
You can use the DynamoDB Encryption Client to protect your table data before you send it to DynamoDB. Encrypting your sensitive data in transit and at rest helps ensure that your plaintext data isn’t available to any third party, including AWS.
You don’t need to be a cryptography expert to use the DynamoDB Encryption Client. The encryption and signing elements are designed to work with your existing DynamoDB applications. After you create and configure the required components, the DynamoDB Encryption Client transparently encrypts and signs your table items when you call PutItem and verifies and decrypts them when you call GetItem.
You can create your own custom components, or use the basic implementations that are included in the library. We’ve made sure that the classes that we provide implement strong and secure cryptography.
You can use the DynamoDB Encryption Client with AWS Key Management Service (AWS KMS) or AWS CloudHSM, but the library doesn’t require AWS or any AWS service.
The DynamoDB Encryption Client is now available in Python, as well as Java. All supported language implementations are interoperable. For example, you can encrypt table data with the Python library and decrypt it with the Java library.
The DynamoDB Encryption Client is an open-source project. We hope that you will join us in developing the libraries and writing great documentation.
How it works
The DynamoDB Encryption Client processes one table item at a time. First, it encrypts the values (but not the names) of attributes that you specify. Then, it calculates a signature over the attributes that you specify, so you can detect unauthorized changes to the item as a whole, including adding or deleting attributes, or substituting one encrypted value for another.
However, attribute names, and the names and values in the primary key (the partition key and sort key, if one is provided) must remain in plaintext to make the item discoverable. They’re included in the signature by default.
Important: Do not put any sensitive data in the table name, attribute names, the names and values of the primary key attributes, or any attribute values that you tell the client not to encrypt.
How to use it
I’ll demonstrate how to use the DynamoDB Encryption Client in Python with a simple example. I’ll encrypt and sign one table item, and then add it to an existing table. This example uses a test item with arbitrary data, but you can use a similar procedure to protect a table item that contains highly sensitive data, such as a customer’s personal information.
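The snippets that follow assume the dynamodb-encryption-sdk package for Python (pip install dynamodb-encryption-sdk) and imports along these lines; check the library documentation for the exact module paths:
import boto3
from dynamodb_encryption_sdk.encrypted.table import EncryptedTable
from dynamodb_encryption_sdk.identifiers import CryptoAction
from dynamodb_encryption_sdk.material_providers.aws_kms import AwsKmsCryptographicMaterialsProvider
from dynamodb_encryption_sdk.structures import AttributeActions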
Step 1: Create a table resource
I’ll start by creating a DynamoDB table resource that represents an existing table. If you use the code, be sure to supply a valid table name.
# Create a DynamoDB table resource for an existing table.
table_name = 'your-table-name'  # replace with a valid table name
table = boto3.resource('dynamodb').Table(table_name)
Step 2: Create a cryptographic materials provider
Next, create an instance of a cryptographic materials provider (CMP). The CMP is the component that gathers the encryption and signing keys that are used to encrypt and sign your table items. The CMP also determines the encryption algorithms that are used and whether you create unique keys for every item or reuse them.
The DynamoDB Encryption Client includes several CMPs, and you can create your own. If you’re in doubt, the client documentation can help you choose a CMP that fits your application and its security requirements.
In this example, I’ll use the Direct KMS Provider, which gets its cryptographic material from the AWS Key Management Service (AWS KMS). The encryption and signing keys that you use are protected by a customer master key in your AWS account that never leaves AWS KMS unencrypted.
To create a Direct KMS Provider, you specify an AWS KMS customer master key. Be sure to replace the fictitious customer master key ID (the value of aws-cmk-id) in this example with a valid one.
# Create a Direct KMS provider. Pass in a valid KMS customer master key.
aws_cmk_id = '1234abcd-12ab-34cd-56ef-1234567890ab'
aws_kms_cmp = AwsKmsCryptographicMaterialsProvider(key_id=aws_cmk_id)
Step 3: Create an attribute actions object
An attribute actions object tells the DynamoDB Encryption Client which item attribute values to encrypt and which attributes to include in the signature. The options are: ENCRYPT_AND_SIGN, SIGN_ONLY, and DO_NOTHING.
This sample attribute actions object encrypts and signs all attribute values except for the value of the test attribute; that attribute is neither encrypted nor included in the signature.
# Tell the encrypted table to encrypt and sign all attributes except one.
actions = AttributeActions(
    default_action=CryptoAction.ENCRYPT_AND_SIGN,
    attribute_actions={
        'test': CryptoAction.DO_NOTHING
    }
)
If you’re using a helper class, such as the EncryptedTable class that I use in the next step, you can’t specify an attribute action for the primary key. The helper classes make sure that the primary key is signed, but never encrypted (SIGN_ONLY).
Step 4: Create an encrypted table
Now I can use the original table object, along with the materials provider and attribute actions, to create an encrypted table.
# Use these objects to create an encrypted table resource.
encrypted_table = EncryptedTable(
    table=table,
    materials_provider=aws_kms_cmp,
    attribute_actions=actions
)
In this example, I’m using the EncryptedTable helper class, which adds encryption features to the DynamoDB Table class in the AWS SDK for Python (Boto 3). The DynamoDB Encryption Client in Python also includes EncryptedClient and EncryptedResource helper classes.
The DynamoDB Encryption Client helper classes call the DescribeTable operation to find the primary key. The application that runs the code must have permission to call the operation.
We’re done configuring the client. Now, we can encrypt, sign, verify, and decrypt table items.
When we call the PutItem operation, the item is transparently encrypted and signed, except for the primary key, which is signed, but not encrypted, and the test attribute, which is ignored.
encrypted_table.put_item(Item=plaintext_item)
And, when we call the GetItem operation, the item is transparently verified and decrypted.
To view the encrypted item, call the GetItem operation on the original table object, instead of the encrypted_table object. It gets the item from the DynamoDB table without verifying and decrypting it.
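For example, assuming the item’s primary key is in a dictionary named partition_key (a name used here only for illustration), the two calls look like this:
# Get the item and transparently verify and decrypt it.
decrypted_item = encrypted_table.get_item(Key=partition_key)['Item']

# Get the raw, still-encrypted item by using the original table resource.
encrypted_item = table.get_item(Key=partition_key)['Item']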
Here’s an excerpt of the output that displays the encrypted item:
Figure 1: Output that displays the encrypted item
Client-side or server-side encryption?
The DynamoDB Encryption Client is designed for client-side encryption, where you encrypt your data before you send it to DynamoDB.
But, you have other options. DynamoDB supports encryption at rest, a server-side encryption option that transparently encrypts the data in your table whenever DynamoDB saves the table to disk. You can even use both the DynamoDB Encryption Client and encryption at rest together. The encrypted and signed items that the client generates are standard table items that have binary data in their attribute values. Your choice depends on the sensitivity of your data and the security requirements of your application.
Although the Java and Python versions of the DynamoDB Encryption Client are fully compatible, the DynamoDB Encryption Client isn’t compatible with other client-side encryption libraries, such as the AWS Encryption SDK or the S3 Encryption Client. You can’t encrypt data with one library and decrypt it with another. For data that you store in DynamoDB, we recommend the DynamoDB Encryption Client.
Encryption is crucial
Using tools like the DynamoDB Encryption Client helps you to protect your table data and comply with the security requirements for your application. We hope that you use the client and join us in developing it on GitHub.
If you have feedback about this post, submit comments in the Comments section below. If you have questions about this post, start a new thread on the AWS Key Management Service forum or contact AWS Support.
Want more AWS Security news? Follow us on Twitter.
AWS CloudHSM provides fully managed, single-tenant hardware security modules (HSMs) in the AWS cloud. A CloudHSM cluster contains either one or multiple HSMs. Multiple HSMs support higher throughput levels for cryptographic operations and provide redundancy. For clusters with multiple HSMs, the CloudHSM service supports server-side automated synchronization of keys and policies. Users, however, are synchronized from the client-side and the synchronization is driven by configuration files which must be refreshed when the cluster size changes. If you do not refresh the configuration files, your CloudHSM user configurations could become unsynchronized and affect the ability of your CloudHSM cluster to provide consistent support of cryptographic information.
In this blog post, I’ll provide a general overview of a CloudHSM architecture, discuss the cluster synchronization process, build a CloudHSM environment, show how the cluster users can become unsynchronized, and then restore user synchronization to bring your cluster back to a consistent state to meet your needs for consistency and redundancy.
CloudHSM Architectural Overview
When you provision an HSM instance in CloudHSM, the HSM instance provides an elastic network interface (ENI) in your Amazon VPC while the HSM itself resides in a separate VPC managed by AWS CloudHSM. Your applications use the CloudHSM cluster ID to add or remove HSMs from the cluster and the ENI(s) of the HSM instance(s) to access the HSM instances.
You configure your cluster and its HSM instances using CloudHSM client software you deploy on Amazon EC2 instances in your VPC. You only need one such EC2 instance to manage a CloudHSM cluster, but it’s common to deploy additional EC2 instances in other availability zones to provide for client redundancy. Your applications communicate with the HSM instances using the client daemon. You manage and configure the cluster with command line tools including cloudhsm_mgmt_util, key_mgmt_util, and configure. An example of a CloudHSM architecture appears below.
Figure 1: A 3-Node CloudHSM architecture
The diagram shows a three-node CloudHSM cluster deployed in the us-west-2 (Oregon) region with three Amazon EC2 instances with the CloudHSM software. The client in Availability Zone 2 is communicating with the cluster through the elastic network interfaces in each availability zone.
CloudHSM Synchronization Process
Having discussed the architecture of AWS CloudHSM, let’s turn our attention to the matter of cluster synchronization. There are three events that require synchronization: cluster expansion, key management operations, and user management operations. Let’s look at each of these in more detail.
Cluster Expansion
When you add an HSM to an existing cluster, AWS CloudHSM clones all users, keys, and policies from another HSM in the cluster. No additional steps are required on your part.
Key Management Operations
Key management with the key_mgmt_util tool uses the CloudHSM client to communicate with the HSM cluster. Additionally, a fallback, HSM-based synchronization protocol keeps keys in sync.
User Management
You perform user management tasks, such as adding users or changing passwords, using the cloudhsm_mgmt_util tool. This tool communicates directly with the HSMs, bypassing the client daemon. cloudhsm_mgmt_util uses its own configuration files to determine the HSMs that it should connect to within the cluster. These configuration files aren’t updated dynamically when HSM instances are added. To prevent user synchronization errors, you must update the configuration files before running cloudhsm_mgmt_util. You must also not add new HSM instances to the cluster while you’re using the tool. This helps ensure that no HSM instances are accidentally left out of user updates that would in turn result in user synchronization problems.
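As a quick preview of the refresh described above (the full sequence appears later in this post), updating the cloudhsm_mgmt_util configuration and then starting the tool amounts to commands along these lines:
$ sudo /opt/cloudhsm/bin/configure -m
$ /opt/cloudhsm/bin/cloudhsm_mgmt_util /opt/cloudhsm/etc/cloudhsm_mgmt_util.cfg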
Again, these safeguards are only necessary when using cloudhsm_mgmt_util. For all other applications and utilities using CloudHSM, the client daemon automatically reconfigures itself as you add and remove HSM instances from your cluster. In the remainder of this post, I will build a CloudHSM infrastructure as shown in the above diagram. I’ll then show you how users on your CloudHSM instances can become unsynchronized, and how to restore proper synchronization.
Prerequisites and Assumptions
You’ll need to have an AWS account that allows you to provision Amazon VPCs, Amazon EC2 instances, and CloudHSMs.
I’ll use the us-west-2 (Oregon) region, but you can use any region that offers CloudHSM.
You’ll need an Amazon EC2 key pair in the region.
You should have a working knowledge of the services I’ve mentioned.
Important: You’ll incur charges for the resources used in this example. You can find the cost of each service on that service’s pricing page.
Building a CloudHSM Infrastructure
Create an Amazon VPC with subnets in the us-west-2a, us-west-2b, and us-west-2c availability zones. I’ll use the Amazon VPC Architecture Quick Start, which is an AWS CloudFormation template that will do this on your behalf. Make sure you select the correct region after you load the Quick Start. Select the following parameters:
Availability Zones: us-west-2a, us-west-2b, us-west-2c
Number of Availability Zones: 3
Create private subnets: False
Create additional private subnets with dedicated network ACLs: False
Key pair name: the name of your Amazon EC2 key pair
Accept the default values for all other parameters.
Follow these instructions to create a CloudHSM cluster in your new VPC in the us-west-2a, us-west-2b and us-west-2c availability zones. Note that the cluster will not have any HSMs after it’s created.
Follow these instructions to initialize the cluster with an HSM in the us-west-2a availability zone. After the cluster is initialized, note the ENI IP address from the cluster details section in the console as shown here:
Launch an Amazon EC2 instance in your new VPC. You’ll use this instance to run the CloudHSM client software and the management utilities.
Install the client software on the EC2 instance you just launched.
Add the IP address of that EC2 instance to the security group associated with your cluster.
Activate the cluster. The activation instructions will guide you through connecting to the EC2 instance you launched. Remain logged in to the EC2 instance following the activation of the cluster for the steps below.
While you are still logged into the EC2 instance you just launched, follow the steps below to add a crypto user named example_user to the cluster:
Ensure the CloudHSM daemon is stopped:
$ sudo stop cloudhsm-client
Configure the IP address of the initial HSM using the ENI IP address you noted earlier:
$ sudo /opt/cloudhsm/bin/configure -a 10.0.129.209
Note: The configure tool updates two configuration files: one for the CloudHSM client, and the other for the cloudhsm_mgmt_util program that is used to administer users.
Start the CloudHSM client:
$ sudo start cloudhsm-client
Ensure the cloudhsm_mgmt_util configuration file is up to date. We need to do this to ensure cloudhsm_mgmt_util is aware of all the HSM instances in the cluster:
$ sudo /opt/cloudhsm/bin/configure -m
Connect to the HSM instances, enable end-to-end encryption, and log in to the HSM instances. Enabling end-to-end encryption encrypts the communication between cloudhsm_mgmt_util and the HSM to prevent interception of sensitive information such as passwords:
$ /opt/cloudhsm/bin/cloudhsm_mgmt_util /opt/cloudhsm/etc/cloudhsm_mgmt_util.cfg
aws-cloudhsm> enable_e2e
aws-cloudhsm> loginHSM CO admin
Figure 4: Connecting to a Single CloudHSM
Note: The connection or log in is automatically executed on every HSM instance that cloudhsm_mgmt_util is aware of. Note also that for each of the commands that you enter, the cloudhsm_mgmt_util program identifies the IP address of the HSM to which it is communicating.
Add the user example_user and then confirm the addition by listing the users in the HSM:
aws-cloudhsm> createUser CU example_user yourpassword
aws-cloudhsm> listUsers
Use the quit command to log out and exit the program:
aws-cloudhsm> quit
Now that we’ve added a user to the CloudHSM, let’s add a key so we can see how users and keys are synchronized as the cluster changes.
Start the key_mgmt_util program:
$ /opt/cloudhsm/bin/key_mgmt_util
Log in to the HSM:
Command: loginHSM -u CU -s example_user -p yourpassword
Notice that key_mgmt_util displays the node id to which it is communicating.
Use the exit command to leave the program:
Command: exit
Add another HSM to the cluster in the us-west-2b availability zone and note the ENI IP address from the cluster details section in the console, as shown here:
Figure 6: The ENI IP address
Update the cluster configuration files and use cloudhsm_mgmt_util to examine the user configuration:
$ sudo stop cloudhsm-client
$ sudo /opt/cloudhsm/bin/configure -a 10.0.129.209
$ sudo start cloudhsm-client
$ sudo /opt/cloudhsm/bin/configure -m
$ /opt/cloudhsm/bin/cloudhsm_mgmt_util /opt/cloudhsm/etc/cloudhsm_mgmt_util.cfg
Figure 7: Connecting to the 2-node CloudHSM cluster
Note that cloudhsm_mgmt_util now sends commands to both of the HSMs in the cluster. You can see the same thing when we list the users in the cluster.
Figure 8: Showing proper user synchronization across two CloudHSMs
Now, use key_mgmt_util to examine the keys:
Command: findKey
Figure 9: Showing that keys are properly synchronized across a 2-node CloudHSM cluster
This command confirms that when we added the second HSM, CloudHSM used cluster-initiated synchronization to load the users and keys into the new HSM.
The CloudHSM Cluster Users Become Unsynchronized
Start cloudhsm_mgmt_util and enable end-to-end encryption:
$ /opt/cloudhsm/bin/cloudhsm_mgmt_util /opt/cloudhsm/etc/cloudhsm_mgmt_util.cfg
aws-cloudhsm> enable_e2e
Figure 10: Connecting to the 2-node CloudHSM cluster
While cloudhsm_mgmt_util is left running, add a third HSM in us-west-2c through the console and note the ENI IP address, as shown here:
Figure 11: Connecting to the 2-node CloudHSM cluster
Going back to cloudhsm_mgmt_util, let’s add a user named newest_user to our cluster. Note that we have not exited cloudhsm_mgmt_util and refreshed its configuration file, so it’s still connected only to the first two HSM instances.
aws-cloudhsm> enable_e2e
aws-cloudhsm> loginHSM CO admin yourpassword
aws-cloudhsm> createUser CU newest_user yourpassword
Figure 12: Adding a User to only two nodes of a 3-node CloudHSM Cluster and breaking synchronization
The cloudhsm_mgmt_util command adds the user to the two HSMs it already knows about and had connected to. It doesn’t communicate with the newly added HSM.
Let’s fix this by exiting cloudhsm_mgmt_util, refreshing the configuration, and then running the management utility again:
$ sudo stop cloudhsm-client
$ sudo /opt/cloudhsm/bin/configure -a 10.0.129.209
$ sudo start cloudhsm-client
$ sudo /opt/cloudhsm/bin/configure -m
$ /opt/cloudhsm/bin/cloudhsm_mgmt_util /opt/cloudhsm/etc/cloudhsm_mgmt_util.cfg
You can now see cloudhsm_mgmt_util is communicating with all of the cluster nodes.
Figure 13: Connecting to a 3-node CloudHSM cluster
Let’s see what happens when we list the users:
aws-cloudhsm> listUsers
Figure 14: Showing that users are now unsynchronized
You can see from the results that one of the HSMs (server 1) is missing the user named newest_user. The reason this happened is that cloudhsm_mgmt_util was unaware of the HSM instance that was added while it was running (recall that cloudhsm_mgmt_util doesn’t use the cloudhsm_client daemon and, therefore, doesn’t get automatic cluster configuration updates).
Restoring User Synchronization to the CloudHSM Cluster
We now want to add the user newest_user to the single HSM (server 1) that is out of sync. Normally, cloudhsm_mgmt_util works in cluster mode and applies your commands to all HSMs in the cluster. Since we want to work on a single HSM, we’re going to enter the server command to tell cloudhsm_mgmt_util to work in server mode and apply our commands just to that one HSM.
In the server command below, we specify the number of the HSM that we want to change based on the figure above. In the createUser command, you must use the same password that you used earlier (in the section titled “The CloudHSM Cluster Users Become Unsynchronized”) on the other HSMs in the cluster so that all HSMs in the cluster have identical user names and passwords. After we make this change, we use the exit command to transition from server mode back to cluster mode.
aws-cloudhsm> server 1
server1> createUser CU newest_user yourpassword
server1> exit
Figure 15: Adding a user to a single-node of a 3-node CloudHSM cluster
Now that we have transitioned back to cluster mode, let’s confirm that the HSM user tables are now synchronized by listing the users:
aws-cloudhsm> listUsers
Figure 16: Showing that users are now synchronized across the 3-node CloudHSM cluster
Let’s take a look at the keys using key_mgmt_util:
Command: loginHSM -u CU -s example_user -p yourpassword
Command: findKey
Figure 17: Showing that keys continued to be synchronized across a 3-node CloudHSM Cluster
You can see that CloudHSM kept the keys in sync because key synchronization is cluster-initiated. No additional actions are required on our part.
Conclusion
AWS CloudHSM provides the ability to create scalable clusters of HSM instances to support the high volumes of cryptographic operations and provide resiliency by supporting multiple availability zones. As mentioned, it’s important to be aware of the various modes of synchronization used in CloudHSM so that each HSM can provide consistent service. In particular, users are synchronized only by the client. Since cloudhsm_mgmt_util doesn’t rely on the client daemon to talk to HSM instances in your cluster, it doesn’t automatically update its configuration. By following the steps above and refreshing the configuration information before changing users or passwords, CloudHSM will keep users and passwords synchronized within the cluster and provide consistent responses to cryptographic operations if the level of redundancy within the HSM cluster changes.
If you have feedback about this blog post, submit comments in the Comments section below. If you have questions about this blog post, start a new thread on the Amazon CloudHSM forum or contact AWS Support.
Want more AWS Security news? Follow us on Twitter.
Today, our customers use AWS CloudHSM to meet corporate, contractual and regulatory compliance requirements for data security by using dedicated Hardware Security Module (HSM) instances within the AWS cloud. CloudHSM delivers all the benefits of traditional HSMs including secure generation, storage, and management of cryptographic keys used for data encryption that are controlled and accessible only by you.
As a managed service, it automates time-consuming administrative tasks such as hardware provisioning, software patching, high availability, backups and scaling for your sensitive and regulated workloads in a cost-effective manner. Backup and restore functionality is the core building block enabling scalability, reliability and high availability in CloudHSM.
You should consider using AWS CloudHSM if you require:
Keys stored in dedicated, third-party validated hardware security modules under your exclusive control
FIPS 140-2 compliance
Integration with applications using PKCS#11, Java JCE, or Microsoft CNG interfaces
Healthcare applications subject to HIPAA regulations
Streaming video solutions subject to contractual DRM requirements
We recently released a whitepaper, “Security of CloudHSM Backups” that provides in-depth information on how backups are protected in all three phases of the CloudHSM backup lifecycle process: Creation, Archive, and Restore.
About the Author
Balaji Iyer is a senior consultant in the Professional Services team at Amazon Web Services. In this role, he has helped several customers successfully navigate their journey to AWS. His specialties include architecting and implementing highly-scalable distributed systems, operational security, large scale migrations, and leading strategic AWS initiatives.
Forbes reports that the Israeli company Cellebrite can probably unlock all iPhone models:
Cellebrite, a Petah Tikva, Israel-based vendor that’s become the U.S. government’s company of choice when it comes to unlocking mobile devices, is this month telling customers its engineers currently have the ability to get around the security of devices running iOS 11. That includes the iPhone X, a model that Forbes has learned was successfully raided for data by the Department for Homeland Security back in November 2017, most likely with Cellebrite technology.
[…]
It also appears the feds have already tried out Cellebrite tech on the most recent Apple handset, the iPhone X. That’s according to a warrant unearthed by Forbes in Michigan, marking the first known government inspection of the bleeding edge smartphone in a criminal investigation. The warrant detailed a probe into Abdulmajid Saidi, a suspect in an arms trafficking case, whose iPhone X was taken from him as he was about to leave America for Beirut, Lebanon, on November 20. The device was sent to a Cellebrite specialist at the DHS Homeland Security Investigations Grand Rapids labs and the data extracted on December 5.
This story is based on some excellent reporting, but leaves a lot of questions unanswered. We don’t know exactly what was extracted from any of the phones. Was it metadata or data, and what kind of metadata or data was it?
The story I hear is that Cellebrite hires ex-Apple engineers and moves them to countries where Apple can’t prosecute them under the DMCA or its equivalents. There’s also a credible rumor that Cellebrite’s mechanisms only defeat the mechanism that limits the number of password attempts. It does not allow engineers to move the encrypted data off the phone and run an offline password cracker. If this is true, then strong passwords are still secure.
EDITED TO ADD (3/1): Another article, with more information. It looks like there’s an arms race going on between Apple and Cellebrite. At least, if Cellebrite is telling the truth — which they may or may not be.
The cybersecurity company McAfee recently uncovered a cyber operation, dubbed Operation GoldDragon, attacking South Korean organizations related to the Winter Olympics. McAfee believes the attack came from a nation state that speaks Korean, although it has no definitive proof that this is a North Korean operation. The victim organizations include ice hockey teams, ski suppliers, ski resorts, tourist organizations in Pyeongchang, and departments organizing the Pyeongchang Olympics.
Meanwhile, a Russia-linked cyber attack has already stolen and leaked documents from other Olympic organizations. The so-called Fancy Bear group, or APT28, began its operations in late 2017, according to Trend Micro and ThreatConnect, two private cybersecurity firms, eventually publishing documents in 2018 outlining the political tensions between IOC officials and World Anti-Doping Agency (WADA) officials who are policing Olympic athletes. It also released documents specifying exceptions to anti-doping regulations granted to specific athletes (for instance, one athlete was given an exception because of his asthma medication). The most recent Fancy Bear leak exposed details about a Canadian pole vaulter’s positive results for cocaine. This group has targeted WADA in the past, specifically during the 2016 Rio de Janeiro Olympics. Assuming the attribution is right, the action appears to be Russian retaliation for the punitive steps against Russia.
A senior analyst at McAfee warned that the Olympics may experience more cyber attacks before closing ceremonies. A researcher at ThreatConnect asserted that organizations like Fancy Bear have no reason to stop operations just because they’ve already stolen and released documents. Even the United States Department of Homeland Security has issued a notice to those traveling to South Korea to remind them to protect themselves against cyber risks.
One presumes the Olympics network is sufficiently protected against the more pedestrian DDoS attacks and the like, but who knows?
Amazon CloudFront is a web service that speeds up distribution of your static and dynamic web content to end users through a worldwide network of edge locations. CloudFront provides a number of benefits and capabilities that can help you secure your applications and content while meeting compliance requirements. For example, you can configure CloudFront to help enforce secure, end-to-end connections using HTTPS SSL/TLS encryption. You also can take advantage of CloudFront integration with AWS Shield for DDoS protection and with AWS WAF (a web application firewall) for protection against application-layer attacks, such as SQL injection and cross-site scripting.
Now, CloudFront field-level encryption helps secure sensitive data such as customer phone numbers by adding another security layer to CloudFront HTTPS. Using this functionality, you can help ensure that sensitive information in a POST request is encrypted at CloudFront edge locations. This information remains encrypted as it flows to and beyond your origin servers that terminate HTTPS connections with CloudFront and throughout the application environment. In this blog post, we demonstrate how you can enhance the security of sensitive data by using CloudFront field-level encryption.
Note: This post assumes that you understand concepts and services such as content delivery networks, HTTP forms, public-key cryptography, CloudFront, AWS Lambda, and the AWS CLI. If necessary, you should familiarize yourself with these concepts and review the solution overview in the next section before proceeding with the deployment of this post’s solution.
How field-level encryption works
Many web applications collect and store data from users as those users interact with the applications. For example, a travel-booking website may ask for your passport number as well as less sensitive data such as your food preferences. This data is transmitted to web servers and also might travel among a number of services to perform tasks. However, only a small subset of these services typically needs access to your sensitive information; most of the other services do not need to access your data.
User data is often stored in a database for retrieval at a later time. One approach to protecting stored sensitive data is to configure and code each service to protect that sensitive data. For example, you can develop safeguards in logging functionality to ensure sensitive data is masked or removed. However, this can add complexity to your code base and limit performance.
Field-level encryption addresses this problem by ensuring sensitive data is encrypted at CloudFront edge locations. Sensitive data fields in HTTPS form POSTs are automatically encrypted with a user-provided public RSA key. After the data is encrypted, other systems in your architecture see only ciphertext. If this ciphertext unintentionally becomes externally available, the data is cryptographically protected and only designated systems with access to the private RSA key can decrypt the sensitive data.
It is critical to secure private RSA key material to prevent unauthorized access to the protected data. Management of cryptographic key material is a larger topic that is out of scope for this blog post, but should be carefully considered when implementing encryption in your applications. For example, in this blog post we store private key material as a secure string in the Amazon EC2 Systems Manager Parameter Store. The Parameter Store provides a centralized location for managing your configuration data such as plaintext data (such as database strings) or secrets (such as passwords) that are encrypted using AWS Key Management Service (AWS KMS). You may have an existing key management system in place that you can use, or you can use AWS CloudHSM. CloudHSM is a cloud-based hardware security module (HSM) that enables you to easily generate and use your own encryption keys in the AWS Cloud.
To illustrate field-level encryption, let’s look at a simple form submission where Name and Phone values are sent to a web server using an HTTP POST. A typical form POST would contain data such as the following.
POST / HTTP/1.1
Host: example.com
Content-Type: application/x-www-form-urlencoded
Content-Length:60
Name=Jane+Doe&Phone=404-555-0150
Instead of taking this typical approach, field-level encryption converts this data similar to the following.
POST / HTTP/1.1
Host: example.com
Content-Type: application/x-www-form-urlencoded
Content-Length: 1713
Name=Jane+Doe&Phone=AYABeHxZ0ZqWyysqxrB5pEBSYw4AAA...
To further demonstrate field-level encryption in action, this blog post includes a sample serverless application that you can deploy by using a CloudFormation template, which creates an application environment using CloudFront, Amazon API Gateway, and Lambda. The sample application is only intended to demonstrate field-level encryption functionality and is not intended for production use. The following diagram depicts the architecture and data flow of this sample application.
Sample application architecture and data flow
Here is how the sample solution works:
An application user submits an HTML form page with sensitive data, generating an HTTPS POST to CloudFront.
Field-level encryption intercepts the form POST and encrypts sensitive data with the public RSA key and replaces fields in the form post with encrypted ciphertext. The form POST ciphertext is then sent to origin servers.
The serverless application accepts the form post data containing ciphertext where sensitive data would normally be. If a malicious user were able to compromise your application and gain access to your data, such as the contents of a form, that user would see encrypted data.
Lambda stores data in a DynamoDB table, leaving sensitive data to remain safely encrypted at rest.
An administrator uses the AWS Management Console and a Lambda function to view the sensitive data.
During the session, the administrator retrieves ciphertext from the DynamoDB table.
Decrypted sensitive data is transmitted over SSL/TLS via the AWS Management Console to the administrator for review.
Deployment walkthrough
The high-level steps to deploy this solution are as follows:
Stage the required artifacts: When deployment packages are used with Lambda, the zipped artifacts have to be placed in an S3 bucket in the target AWS Region for deployment. This step is not required if you are deploying in the US East (N. Virginia) Region because the package has already been staged there.
Generate an RSA key pair: Create a public/private key pair that will be used to perform the encrypt/decrypt functionality.
Upload the public key to CloudFront and associate it with the field-level encryption configuration: After you create the key pair, the public key is uploaded to CloudFront so that it can be used by field-level encryption.
Launch the CloudFormation stack: Deploy the sample application for demonstrating field-level encryption by using AWS CloudFormation.
Add the field-level encryption configuration to the CloudFront distribution: After you have provisioned the application, this step associates the field-level encryption configuration with the CloudFront distribution.
Store the RSA private key in the Parameter Store: Store the private key in the Parameter Store as a SecureString data type, which uses AWS KMS to encrypt the parameter value.
Deploy the solution
1. Stage the required artifacts
(If you are deploying in the US East [N. Virginia] Region, skip to Step 2, “Generate an RSA key pair.”)
Stage the Lambda function deployment package in an Amazon S3 bucket located in the AWS Region you are using for this solution. To do this, download the zipped deployment package and upload it to your in-region bucket. For additional information about uploading objects to S3, see Uploading Object into Amazon S3.
2. Generate an RSA key pair
In this section, you will generate an RSA key pair by using OpenSSL:
Confirm access to OpenSSL.
$ openssl version
You should see version information similar to the following.
OpenSSL <version> <date>
Create a private key using the following command.
$ openssl genrsa -out private_key.pem 2048
The command results should look similar to the following.
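The exact output varies by OpenSSL version, but it typically looks something like this:
Generating RSA private key, 2048 bit long modulus
............................+++
..........+++
e is 65537 (0x10001)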
Restrict access to the private key.
$ chmod 600 private_key.pem
Note: You will use the public and private key material in Steps 3 and 6 to configure the sample application.
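Step 3 expects the public key in a public_key.pem file, and the walkthrough doesn’t show an explicit command for creating it; one way to derive it from the private key you just generated is:
$ openssl rsa -pubout -in private_key.pem -out public_key.pem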
3. Upload the public key to CloudFront and associate it with the field-level encryption configuration
Now that you have created the RSA key pair, you will use the AWS Management Console to upload the public key to CloudFront for use by field-level encryption. Complete the following steps to upload and configure the public key.
Note: Do not include spaces or special characters when providing the configuration values in this section.
From the AWS Management Console, choose Services > CloudFront.
In the navigation pane, choose Public Key and choose Add Public Key.
Complete the Add Public Key configuration boxes:
Key Name: Type a name such as DemoPublicKey.
Encoded Key: Paste the contents of the public_key.pem file you created in Step 2, including the -----BEGIN PUBLIC KEY----- and -----END PUBLIC KEY----- lines.
Comment: Optionally add a comment.
Choose Create.
After adding at least one public key to CloudFront, the next step is to create a profile to tell CloudFront which fields of input you want to be encrypted. While still on the CloudFront console, choose Field-level encryption in the navigation pane.
Under Profiles, choose Create profile.
Complete the Create profile configuration boxes:
Name: Type a name such as FLEDemo.
Comment: Optionally add a comment.
Public key: Select the public key you configured in Step 3b.
Provider name: Type a provider name such as FLEDemo. This information will be used when the form data is encrypted, and must be provided to applications that need to decrypt the data, along with the appropriate private key.
Pattern to match: Type phone. This configures field-level encryption to match on the phone field.
Choose Save profile.
Configurations include options for whether to block or forward a query to your origin in scenarios where CloudFront can’t encrypt the data. Under Encryption Configurations, choose Create configuration.
Complete the Create configuration boxes:
Comment: Optionally add a comment.
Content type: Enter application/x-www-form-urlencoded. This is a common media type for encoding form data.
Default profile ID: Select the profile you added in Step 3e.
Choose Save configuration
4. Launch the CloudFormation stack
Launch the sample application by using a CloudFormation template that automates the provisioning process.
The CloudFormation template takes the following input parameters:
ProviderID: Enter the Provider name you assigned in Step 3e. The ProviderID is used in the field-level encryption configuration in CloudFront (letters and numbers only, no special characters).
PublicKeyName: Enter the Key Name you assigned in Step 3b. This name is assigned to the public key in the field-level encryption configuration in CloudFront (letters and numbers only, no special characters).
PrivateKeySSMPath: Leave as the default: /cloudfront/field-encryption-sample/private-key
Artifacts path: The path in the S3 bucket containing artifact files. Leave as the default if deploying in us-east-1.
To finish creating the CloudFormation stack:
Choose Next on the Select Template page, enter the input parameters and choose Next. Note: The Artifacts configuration needs to be updated only if you are deploying outside of us-east-1 (US East [N. Virginia]). See Step 1 for artifact staging instructions.
On the Options page, accept the defaults and choose Next.
On the Review page, confirm the details, choose the I acknowledge that AWS CloudFormation might create IAM resources check box, and then choose Create. (The stack will be created in approximately 15 minutes.)
5. Add the field-level encryption configuration to the CloudFront distribution
To associate the field-level encryption configuration with the CloudFront distribution that the stack created:
In the Outputs section of the FLE-Sample-App stack, look for CloudFrontDistribution and click the URL to open the CloudFront console.
Choose Behaviors, choose the Default (*) behavior, and then choose Edit.
For Field-level Encryption Config, choose the configuration you created in Step 3g.
Choose Yes, Edit.
While still in the CloudFront distribution configuration, choose the General tab, choose Edit, scroll down to Distribution State, and change it to Enabled.
Choose Yes, Edit.
6. Store the RSA private key in the Parameter Store
In this step, you store the private key in the EC2 Systems Manager Parameter Store as a SecureString data type, which uses AWS KMS to encrypt the parameter value. For more information about AWS KMS, see the AWS Key Management Service Developer Guide. You will need a working installation of the AWS CLI to complete this step.
Store the private key in the Parameter Store with the AWS CLI by running the following command. You will find the <KMSKeyID> in the KMSKeyID entry of the CloudFormation stack Outputs. Substitute it for the placeholder in the following command.
$ aws ssm put-parameter --type "SecureString" --name /cloudfront/field-encryption-sample/private-key --value file://private_key.pem --key-id "<KMSKeyID>"
------------------
| PutParameter |
+----------+-----+
| Version | 1 |
+----------+-----+
Verify the parameter. Your private key material should appear in the Value field of the output of the following ssm get-parameter command. The key material has been truncated in the following output.
$ aws ssm get-parameter --name /cloudfront/field-encryption-sample/private-key --with-decryption
-----...
| Value | -----BEGIN RSA PRIVATE KEY-----
MIIEowIBAAKCAQEAwGRBGuhacmw+C73kM6Z.......
Notice that we use the --with-decryption argument in this command. This returns the private key as cleartext.
This completes the sample application deployment. Next, we show you how to see field-level encryption in action.
Delete the private key from local storage. On Linux, for example, you can use the shred command to securely delete the private key material from your workstation, as shown below. You may also wish to store the private key material within AWS CloudHSM or another protected location suitable for your security requirements. For production implementations, you should also implement key rotation policies.
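A minimal example of that cleanup step (the -u option removes the file after overwriting it):
$ shred -u private_key.pem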
Use the following steps to test the sample application with field-level encryption:
Open the sample application in your web browser by clicking the ApplicationURL link in the CloudFormation stack Outputs (for example, https://d199xe5izz82ea.cloudfront.net/prod/). Note that it may take several minutes for the CloudFront distribution to reach Deployed status after the previous step, during which time you may not be able to access the sample application.
Fill out and submit the HTML form on the page:
Complete the three form fields: Full Name, Email Address, and Phone Number.
Choose Submit. Notice that the application response includes the form values. The phone number is returned as ciphertext encrypted with your public key. This ciphertext has been stored in DynamoDB.
Execute the Lambda decryption function to download ciphertext from DynamoDB and decrypt the phone number using the private key:
In the CloudFormation stack Outputs, locate DecryptFunction and click the URL to open the Lambda console.
Configure a test event using the “Hello World” template.
Choose the Test button.
View the encrypted and decrypted phone number data.
Summary
In this blog post, we showed you how to use CloudFront field-level encryption to encrypt sensitive data at edge locations and help prevent access from unauthorized systems. The source code for this solution is available on GitHub. For additional information about field-level encryption, see the documentation.
If you have comments about this post, submit them in the “Comments” section below. If you have questions about or issues implementing this solution, please start a new thread on the CloudFront forum.
Last month, the DHS announced that it was able to remotely hack a Boeing 757:
“We got the airplane on Sept. 19, 2016. Two days later, I was successful in accomplishing a remote, non-cooperative, penetration,” said Robert Hickey, aviation program manager within the Cyber Security Division of the DHS Science and Technology (S&T) Directorate.
“[Which] means I didn’t have anybody touching the airplane, I didn’t have an insider threat. I stood off using typical stuff that could get through security and we were able to establish a presence on the systems of the aircraft.” Hickey said the details of the hack and the work his team are doing are classified, but said they accessed the aircraft’s systems through radio frequency communications, adding that, based on the RF configuration of most aircraft, “you can come to grips pretty quickly where we went” on the aircraft.
Today, the following Security, Compliance, & Identity sessions, workshops, and chalk talks will be presented at AWS re:Invent 2017 in Las Vegas. All sessions are in the MGM Grand and all times are local. See the re:Invent Session Catalog for complete information about every session. You can also download the AWS re:Invent 2017 Mobile App for the latest updates and information.
If you are not attending re:Invent 2017, keep in mind that all videos of and slide decks from these sessions will be made available next week. We will publish a post on the Security Blog next week that links to all available videos and slide decks.
You can now encrypt and decrypt your data at the command line and in scripts—no cryptography or programming expertise is required. The new AWS Encryption SDK Command Line Interface (AWS Encryption CLI) brings the AWS Encryption SDK to the command line.
With the AWS Encryption CLI, you can take advantage of the advanced data protection built into the AWS Encryption SDK, including envelope encryption and strong algorithm suites, such as 256-bit AES-GCM with HKDF. The AWS Encryption CLI supports best-practice features, such as authenticated encryption with symmetric encryption keys and asymmetric signing keys, as well as unique data keys for each encryption operation. You can use the CLI with customer master keys (CMKs) from AWS Key Management Service (AWS KMS), master keys that you manage in AWS CloudHSM, or master keys from your own custom master key provider, but the AWS Encryption CLI does not require any AWS service.
The AWS Encryption CLI is built on the AWS Encryption SDK for Python and is fully interoperable with all language-specific implementations of the AWS Encryption SDK. It is supported on Linux, macOS, and Windows platforms. You can encrypt and decrypt your data in a shell on Linux and macOS, in a Command Prompt window (cmd.exe) on Windows, or in a PowerShell console on any system.
Encrypt a file
Let’s use the AWS Encryption CLI to encrypt a file called secret.txt in your current directory. I’ll write the encrypted output file to the same directory. This secret.txt file contains a Hello World string, but it might contain data that is critical to your business.
$ ls
secret.txt
$ cat secret.txt
Hello World
I’m using a Linux shell, but you can run similar commands in a macOS shell, a Command Prompt window, or a PowerShell console.
When you encrypt data, you specify a master key. This example uses an AWS KMS CMK, but you can use a master key from any master key provider that is compatible with the AWS Encryption SDK. The AWS Encryption CLI uses the master key to generate a unique data key for each file that it encrypts.
If you use an AWS KMS CMK as your master key, you need to install and configure the AWS Command Line Interface (AWS CLI) so that the credentials you use to authenticate to AWS KMS are available to the AWS Encryption CLI. Those credentials must give you permission to call the AWS KMS GenerateDataKey and Decrypt APIs on the CMK.
The first line of this example saves an AWS KMS CMK ID in the $keyID variable. The second line encrypts the data in the secret.txt file. (The backslash, “\”, is the line continuation character in Linux shells.)
To run the following command, substitute a valid CMK identifier for the placeholder value in the command.
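Putting the parameters described in the following paragraphs together, the two lines look something like this (the key ARN shown is only a placeholder):
$ keyID=arn:aws:kms:us-west-2:111122223333:key/1234abcd-12ab-34cd-56ef-1234567890ab
$ aws-encryption-cli --encrypt \
                     --input secret.txt \
                     --master-keys key=$keyID \
                     --encryption-context purpose=test \
                     --metadata-output ~/metadata \
                     --output .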
This command uses the --encrypt (-e) parameter to specify the encryption action and the --master-keys (-m) parameter with a key attribute to specify an AWS KMS CMK. If you’re not using an AWS KMS CMK, you need to include a provider attribute that identifies the master key provider.
The command uses the --encryption-context parameter (-c) to specify an encryption context, purpose=test, for the operation. The encryption context is non-secret data that is cryptographically bound to the encrypted data and included in plaintext in the encrypted message that the CLI returns. Providing additional authenticated data, such as an encryption context, is a recommended best practice.
The --metadata-output parameter tells the AWS Encryption CLI where to write the metadata for the encrypt command. The metadata includes the full paths to the input and output files, the encryption context, the algorithm suite, and other valuable information that you can use to review the operation and verify that it meets your security standards.
The --input (-i) and --output (-o) parameters are required in every AWS Encryption CLI command. In this example, the input file is the secret.txt file. The output location is the current directory, which is represented by a dot (“.”).
When the --encrypt command is successful, it creates a new file that contains the encrypted data, but it does not return any output. To see the results of the command, use a directory listing command, such as ls or dir. Running an ls command in this example shows that the AWS Encryption CLI generated the secret.txt.encrypted file.
$ ls
secret.txt secret.txt.encrypted
By default, the output file that the --encrypt command creates has the same name as the input file, plus a .encrypted suffix. You can use the --suffix parameter to specify a custom suffix.
The secret.txt.encrypted file contains a single, portable, secure encrypted message. The encrypted message includes the encrypted data, an encrypted copy of the data key that encrypted the data, and metadata, including the plaintext encryption context that I provided.
You can manage an encrypted file in any way that you choose, including copying it to an Amazon S3 bucket or archiving it for later use.
Decrypt a file
Now, let’s use the AWS Encryption CLI to decrypt the secret.txt.encrypted file. If you have the required permissions on your master key, you can use any version of the AWS Encryption SDK to decrypt a file that the AWS Encryption CLI encrypted, including the AWS Encryption SDK libraries in Java and Python.
The --decrypt command requires an encrypted message, like the one that the --encrypt command returned, and both --input and --output parameters.
This command has no --master-keys parameter. A --master-keys parameter is required only if you’re not using an AWS KMS CMK.
In this example command, the --input parameter specifies the secret.txt.encrypted file. The --output parameter specifies the current directory, which again is represented by a dot (“.”).
The --encryption-context parameter supplies the same encryption context that was used in the encrypt command. This parameter is not required, but verifying the encryption context during decryption is a cryptographic best practice.
The --metadata-output parameter tells the command where to write the metadata for the decrypt command. If the file exists, this parameter appends the metadata to the existing file. The AWS Encryption CLI also has parameters that overwrite the metadata file or suppress the metadata.
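Assembled from the parameters just described, the decrypt command looks something like the following:
$ aws-encryption-cli --decrypt \
                     --input secret.txt.encrypted \
                     --encryption-context purpose=test \
                     --metadata-output ~/metadata \
                     --output .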
When it is successful, the decrypt command generates the file of decrypted (plaintext) data, but it does not return any output. To see the results of the decryption command, use a command that gets the content of the file, such as cat or Get-Content.
$ ls
secret.txt secret.txt.encrypted secret.txt.encrypted.decrypted
$ cat secret.txt.encrypted.decrypted
Hello World
The output file that the --decrypt command created has the same name as the input file, plus a .decrypted suffix. The --suffix parameter works on --decrypt commands, too.
Encrypt directories and more
In addition to encrypting and decrypting a single file, you can use the AWS Encryption CLI to encrypt and decrypt strings that you pipe to the CLI, and all or selected files in a directory and its subdirectories, or local or remote volumes. We have examples for you to try in the AWS Encryption SDK documentation.
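For instance, a sketch of encrypting an entire directory might look like the following, where testdir and testdir-encrypted are example names and the --recursive (-r) flag tells the CLI to process the directory’s contents; the same pattern works with --decrypt:
# Example directory names; replace with your own paths.
$ aws-encryption-cli --encrypt \
                     --input testdir --recursive \
                     --master-keys key=$keyID \
                     --encryption-context purpose=test \
                     --metadata-output ~/metadata \
                     --output testdir-encrypted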
Our Health Customer Stories page lists just a few of the many customers that are building and running healthcare and life sciences applications that run on AWS. Customers like Verge Health, Care Cloud, and Orion Health trust AWS with Protected Health Information (PHI) and Personally Identifying Information (PII) as part of their efforts to comply with HIPAA and HITECH.
Sixteen More Services
In my last HIPAA Eligibility Update I shared the news that we added eight additional services to our list of HIPAA eligible services. Today I am happy to let you know that we have added another sixteen services to the list, bringing the total up to 46. Here are the newest additions, along with some short descriptions and links to some of my blog posts to jog your memory:
Amazon RDS for MariaDB – This service lets you set up scalable, managed MariaDB instances in minutes, and offers high performance, high availability, and a simplified security model that makes it easy for you to encrypt data at rest and in transit. Read Amazon RDS Update – MariaDB is Now Available to learn more.
AWS Batch – This service lets you run large-scale batch computing jobs on AWS. You don’t need to install or maintain specialized batch software or build your own server clusters. Read AWS Batch – Run Batch Computing Jobs on AWS to learn more.
AWS Key Management Service – This service makes it easy for you to create and control the encryption keys used to encrypt your data. It uses HSMs to protect your keys, and is integrated with AWS CloudTrail in order to provide you with a log of all key usage. Read New AWS Key Management Service (KMS) to learn more.
AWS Snowball Edge – This is a data transfer device with 100 terabytes of on-board storage as well as compute capabilities. You can use it to move large amounts of data into or out of AWS, as a temporary storage tier, or to support workloads in remote or offline locations. To learn more, read AWS Snowball Edge – More Storage, Local Endpoints, Lambda Functions.
AWS Snowmobile – This is an exabyte-scale data transfer service. Pulled by a semi-trailer truck, each Snowmobile packs 100 petabytes of storage into a ruggedized 45-foot long shipping container. Read AWS Snowmobile – Move Exabytes of Data to the Cloud in Weeks to learn more (and to see some of my finest LEGO work).
Now that you can reserve seating in AWS re:Invent 2017 breakout sessions, workshops, chalk talks, and other events, the time is right to review the list of introductory, advanced, and expert content being offered this year. To learn more about breakout content types and levels, see Breakout Content.
SID201 – IAM for Enterprises: How Vanguard Strikes the Balance Between Agility, Governance, and Security For Vanguard, managing the creation of AWS Identity and Access Management (IAM) objects is key to balancing developer velocity and compliance. In this session, you learn how Vanguard designs IAM roles to control the blast radius of AWS resources and maintain simplicity for developers. Vanguard will also share best practices to help you manage governance and improve your visibility across your AWS resources.
SID202 – Deep dive about how Capital One automates the delivery of directory services across AWS accounts Traditional solutions for using Microsoft Active Directory across on-premises and AWS Cloud Windows workloads can require complex networking or syncing identities across multiple systems. AWS Directory Service for Microsoft Active Directory, also known as AWS Managed AD, offers you actual Microsoft Active Directory in the AWS Cloud as a managed service. In this session, you will learn how Capital One uses AWS Managed AD to provide highly available authentication and authorization services for its Windows workloads, such as Amazon RDS for SQL Server.
SID205 – Building the Largest Repo for Serverless Compliance-as-Code When you use the cloud to enable speed and agility, how do you know if you’ve done it correctly? We are on a mission to help builders follow industry best practices within security guardrails by creating the largest compliance-as-code repository, available to all. Compliance-as-code is the idea of translating best practices, guardrails, policies, and standards into codified unit tests. Apply this to your AWS environment to provide insights about what can or must be improved. Learn why compliance-as-code matters to gain speed (by getting developers, architects, and security pros on the same page), how it is currently used (demo), and how to start using it or take part in building it.
SID206 – Best Practices for Managing Security Operation on AWS To help prevent unexpected access to your AWS resources, it is critical to maintain strong identity and access policies and track, detect, and react to changes. In this session, you will learn how to use AWS Identity and Access Management (IAM) to control access to AWS resources and integrate your existing authentication system with IAM. We will cover how to deploy and control AWS infrastructure using code templates, including change management policies with AWS CloudFormation.
SID207 – Feedback Security in the Cloud Like many security teams, Riot has been challenged by new paradigms that came with the move to the cloud. We discuss how our security team has developed a security culture based on feedback and self-service to best thrive in the cloud. We detail how the team assessed the security gaps and challenges in our move to AWS, and then describe how the team works within Riot’s unique feedback culture.
SID208 – Less (Privilege) Is More: Getting Least Privilege Right in AWS AWS services are designed to enable control through AWS Identity and Access Management (IAM) and Amazon Virtual Private Cloud (VPC). Join us in this chalk talk to learn how to apply these toward the security principle of least privilege for applications and data and how to practically integrate them in your security operations.
SID209 – Designing and Deploying an AWS Account Factory AWS customers start off with one AWS account, but quickly realize the benefits of having multiple AWS accounts. A common learning curve for customers is how to securely baseline and set up new accounts at scale. This talk helps you understand how to use AWS Organizations, AWS Identity and Access Management (IAM), AWS CloudFormation, and other tools to baseline new accounts, set them up for federation, and make a secure and repeatable account factory to create new AWS accounts. Walk away with demos and tools to use in your own environment.
SID210 – A CISO’s Journey at Vonage: Achieving Unified Security at Scale Making sense of the risks of IT deployments that sit in hybrid environments and span multiple countries is a major challenge. When you add in multiple toolsets and global compliance requirements, including GDPR, it can get overwhelming. Listen to Vonage’s Chief Information Security Officer, Johan Hybinette, share his experiences tackling these challenges.
SID212 – Maximizing Your Move to AWS – Five Key Lessons from Vanguard and Cloud Technology Partners CTP’s Robert Christiansen and Mike Kavis describe how to maximize the value of your AWS initiative. From building a Minimum Viable Cloud to establishing a robust cloud security and compliance posture, we walk through key client success stories and lessons learned. We also explore how CTP has helped Vanguard, the leading provider of investor communications and technology, take advantage of AWS to delight customers, drive new revenue streams, and transform their business.
SID213 – Managing Regulator Expectations – Lessons Learned on Positioning AWS Services from an Audit Perspective Cloud migration in highly regulated industries can stall without a solid understanding of how (and when) to address regulatory expectations. This session provides a guide to explaining the aspects of AWS services that are most frequently the subject of an internal or regulatory audit. Because regulatory agencies and internal auditors might not share a common understanding of the cloud, this session is designed to help you to help them, regardless of their level of technical fluency.
SID214 – Best Security Practices in the Intelligence Community Executives from the Intelligence community discuss cloud security best practices in a field where security is imperative to operations. CIA security cloud chief John Nicely and NGA security cloud chief Scot Kaplan share success stories of migrating mass data to the cloud from a security perspective. Hear how they migrated their IT portfolios while managing their organizations’ unique blend of constraints, budget issues, politics, culture, and security pressures. Learn how these institutions overcame barriers to migration, and ask these panelists what actions you can take to better prepare yourself for the journey of mass migration to the cloud.
SID216 – Defending Diverse Applications Against Common Threats In this session, you learn how to adapt application defenses and operational responses based on your unique requirements. You also hear directly from customers about how they architected their applications on AWS to protect their applications. There are many ways to build secure, high-availability applications in the cloud. Services such as Amazon API Gateway, Amazon VPC, ALB, ELB, and Amazon EC2 are the basic building blocks that enable you to address a wide range of use cases. Best practices for defending your applications against Distributed Denial of Service (DDoS) attacks, exploitation attempts, and bad bots can vary with your choices in architecture.
Advanced level
SID301 – Using AWS Lambda as a Security Team Operating a security practice on AWS brings many new challenges that haven’t been faced in data center environments. The dynamic nature of infrastructure, the relationship between development team members and their applications, and the architecture paradigms have all changed as a result of building software on top of AWS. In this session, learn how your security team can leverage AWS Lambda as a tool to monitor, audit, and enforce your security policies within an AWS environment.
SID302 – Force Multiply Your Security Team with Automation and Alexa Adversaries automate. Who says the good guys can’t as well? By combining AWS offerings like AWS CloudTrail, Amazon CloudWatch, AWS Config, and AWS Lambda with the power of Amazon Alexa, you can do more security tasks faster, with fewer resources. Force multiplying your security team is all about automation! Last year, we showed off penetration testing at the push of an (AWS IoT) button, and surprise-previewed how to ask Alexa to run Inspector as-needed. Want to see other ways to ask Alexa to be your cloud security sidekick? We have crazy new demos at the ready to show security geeks how to sling security automation solutions for their AWS environments (and impress and help your boss, too).
SID303 – How You Can Use AWS’s Identity Services to be Successful on Your AWS Cloud Journey Every journey to the AWS Cloud is unique. Some customers are migrating existing applications, while others are building new applications using cloud-native services. Along each of these journeys, identity and access management helps customers protect their applications and resources. In this session, you will learn how AWS’s identity services provide you a secure, flexible, and easy solution for managing identities and access on the AWS Cloud. With AWS’s identity services, you do not have to adapt to AWS. Instead, you have a choice of services designed to meet you anywhere along your journey to the AWS Cloud.
SID304 – SecOps 2021 Today: Using AWS Services to Deliver SecOps This talk dives deep on how to build end-to-end security capabilities using AWS. Our goal is orchestrating AWS security services with other AWS building blocks to deliver enhanced security. We cover working with Amazon CloudWatch Events as a queueing mechanism for processing security events, using Amazon DynamoDB to provide a stateful layer for tailored responses to events and other ancillary functions, using DynamoDB as an attack signature engine, and the use of analytics to derive tailored signatures for detection with AWS Lambda.
SID305 – How CrowdStrike Built a Real-time Security Monitoring Service on AWS The CrowdStrike motto is “We Stop Breaches.” To do that, it needed to build a real-time security monitoring service to detect threats. Join this session to learn how CrowdStrike uses Amazon EC2 and Amazon EBS to help its customers identify vulnerabilities before they become large-scale problems.
SID306 – How Chick-fil-A Embraces DevSecOps on AWS As Chick-fil-A became a cloud-first organization, their security team didn’t want to become the bottleneck for agility. But the security team also wanted to raise the bar for their security posture on AWS. Robert Davis, security architect at Chick-fil-A, provides an overview about how he and his team recognized that writing code was the best way for their security policies to scale across the many AWS accounts that Chick-fil-A operates.
SID307 – Serverless for Security Officers: Paradigm Walkthrough and Comprehensive Security Best Practices For security practitioners, serverless represents a context switch from the familiar servers and networks to a decentralized set of code snippets and AWS platform constructs. This new ecosystem also represents new operational teams, data flows, security tooling, and faster-than-ever change velocity. In this talk, we perform live demos and provide code samples for a wide array of security best practices aligned to industry standards such as NIST 800-53 and ISO 27001.
SID308 – Multi-Account Strategies We will explore a multi-account architecture and how to approach the design/thought process around it. This chalk talk will allow attendees to dive deep into the topic and discuss the nuances of the architecture as well as provide feedback around the approach.
SID309 – Credentials, Credentials, Credentials, Oh My! For new and experienced customers alike, understanding the various credential forms and exchange mechanisms within AWS can be a daunting exercise. In this chalk talk, we clear up the confusion by performing a cartography exercise. We visually depict the right source credentials (for example, enterprise user name and password, IAM keys, AWS STS tokens, and so on) and transformation mechanisms (for example, AssumeRole and so on) to use depending on what you’re trying to do and where you’re coming from.
SID310 – Moving from the Shadows to the Throne What do you do when leadership embraces what was called “shadow IT” as the new path forward? How do you onboard new accounts while simultaneously pushing policy to secure all existing accounts? This session walks through Cisco’s journey consolidating over 700 existing accounts in the Cisco organization, while building and applying Cisco’s new cloud policies.
SID311 – Designing Security and Governance Across a Multi-Account Strategy When organizations plan their journey to cloud adoption at scale, they quickly encounter questions such as: How many accounts do we need? How do we share resources? How do we integrate with existing identity solutions? In this workshop, we present best practices and give you the hands-on opportunity to test and develop best practices. You will work in teams to set up and create an AWS environment that is enterprise-ready for application deployment and integration into existing operations, security, and procurement processes. You will get hands-on experience with cross-account roles, consolidated logging, account governance and other challenges to solve.
SID312 – DevSecOps Capture the Flag In this Capture the Flag workshop, we divide groups into teams and work on AWS CloudFormation DevSecOps. The AWS Red Team supplies an AWS DevSecOps Policy that needs to be enforced via CloudFormation static analysis. Participant Blue Teams are provided with an AWS Lambda-based reference architecture to be used to inspect CloudFormation templates against that policy. Interesting items need to be logged, and made visible via ChatOps. Dangerous items need to be logged, and recorded accurately as a template fail. The secondary challenge is building a CloudFormation template to thwart the controls being created by the other Blue teams.
SID313 – Continuous Compliance on AWS at Scale In cloud migrations, the cloud’s elastic nature is often touted as a critical capability in delivering on key business initiatives. However, you must account for it in your security and compliance plans or face some real challenges. Always counting on a virtual host to be running, for example, causes issues when that host is rebooted or retired. Managing security and compliance in the cloud is continuous, requiring forethought and automation. Learn how a leading, next generation managed cloud provider uses automation and cloud expertise to manage security and compliance at scale in an ever-changing environment.
SID314 – IAM Policy Ninja Are you interested in learning how to control access to your AWS resources? Have you wondered how to best scope permissions to achieve least-privilege permissions access control? If your answer is “yes,” this session is for you. We look at the AWS Identity and Access Management (IAM) policy language, starting with the basics of the policy language and how to create and attach policies to IAM users, groups, and roles. We explore policy variables, conditions, and tools to help you author least privilege policies. We cover common use cases, such as granting a user secure access to an Amazon S3 bucket or to launch an Amazon EC2 instance of a specific type.
SID315 – Security and DevOps: Agility and Teamwork In this session, you learn pragmatic steps to integrate security controls into DevOps processes in your AWS environment at scale. Cybersecurity expert and founder of Alert Logic Misha Govshteyn shares insights from high performing teams who are embracing the reality that an agile security program can enable faster and more secure workload deployments. Joining Misha is Joey Peloquin, Director of Cloud Security Operations at Citrix, who discusses Citrix’s DevOps experiences and how they manage their cybersecurity posture within the AWS Cloud. Session sponsored by Alert Logic.
SID316 – Using Access Advisor to Strike the Balance Between Security and Usability AWS provides a killer feature for security operations teams: Access Advisor. In this session, we discuss how Access Advisor shows the services to which an IAM policy grants access and provides a timestamp for the last time that the role authenticated against that service. At Netflix, we use this valuable data to automatically remove permissions that are no longer used. By continually removing excess permissions, we can achieve a balance of empowering developers and maintaining a best-practice, secure environment.
SID317 – Automating Security and Compliance Testing of Infrastructure-as-Code for DevSecOps Infrastructure-as-Code (IaC) has emerged as an essential element of organizational DevOps practices. Tools such as AWS CloudFormation and Terraform allow software-defined infrastructure to be deployed quickly and repeatably to AWS. But the agility of CI/CD pipelines also creates new challenges in infrastructure security hardening. This session provides a foundation for how to bring proven software hardening practices into the world of infrastructure deployment. We discuss how to build security and compliance tests for infrastructure analogous to unit tests for application code, and showcase how security, compliance and governance testing fit in a modern CI/CD pipeline.
SID318 – From Obstacle to Advantage: The Changing Role of Security & Compliance in Your Organization A surprising trend is starting to emerge among organizations who are progressing through the cloud maturity lifecycle: major improvements in revenue growth, customer satisfaction, and mission success are being directly attributed to improvements in security and compliance. At one time thought of as speed bumps in the path to deployment, security and compliance are now seen as critical ingredients that help organizations differentiate their offerings in the market, win more deals, and achieve mission-critical goals faster. This session explores how organizations like Jive Software and the National Geospatial Agency use the Evident Security Platform, AWS, and AWS Quick Starts to automate security and compliance processes in their organization to accomplish more, do it faster, and deliver better results.
SID319 – Incident Response in the Cloud In this session, we walk you through a hypothetical incident response managed on AWS. Learn how to apply existing best practices as well as how to leverage the unique security visibility, control, and automation that AWS provides. We cover how to set up your AWS environment to prevent a security event and how to build a cloud-specific incident response plan so that your organization is prepared before a security event occurs. This session also covers specific environment recovery steps available on AWS.
SID320 – Fraud Prevention, Detection, Lessons Learned, and Best Practices Fighting fraud means countering human actors that quickly adapt to whatever you do to stop them. In this presentation, we discuss the key components of a fraud prevention program in the cloud. Additionally, we provide techniques for detecting known and unknown fraud activity and explore different strategies for effectively preventing detected patterns. Finally, we discuss lessons learned from our own prevention activities as well as the best practices that you can apply to manage risk.
SID321 – How Capital One Applies AWS Organizations Best Practices to Manage Multiple AWS Accounts In this session, we review best practices for managing multiple AWS accounts using AWS Organizations. We cover how to think about the master account and your account strategy, as well as how to roll out changes. You learn how Capital One applies these best practices to manage its AWS accounts, which number more than 160, as well as its PCI workloads.
SID322 – The AWS Philosophy of Security AWS distinguished engineer Eric Brandwine speaks with hundreds of customers each year, and has noticed one question coming up more than any other: “How does AWS operationalize its own security?” In this session, Eric details both strategic and tactical considerations, along with an insider’s look at AWS tooling and processes.
SID324 – Automating DDoS Response in the Cloud If left unmitigated, Distributed Denial of Service (DDoS) attacks have the potential to harm application availability or impair application performance. DDoS attacks can also act as a smoke screen for intrusion attempts or as a harbinger for attacks against non-cloud infrastructure. Accordingly, it’s crucial that developers architect for DDoS resiliency and maintain robust operational capabilities that allow for rapid detection and engagement during high-severity events. In this session, you learn how to build a DDoS-resilient application and how to use services like AWS Shield and Amazon CloudWatch to defend against DDoS attacks and automate response to attacks in progress.
SID325 – Amazon Macie: Data Visibility Powered by Machine Learning for Security and Compliance Workloads In this session, Edmunds discusses how they create workflows to manage their regulated workloads with Amazon Macie, a newly-released security and compliance management service that leverages machine learning to classify your sensitive data and business-critical information. Amazon Macie uses recurrent neural networks (RNN) to identify and alert you to potential misuse of intellectual property. They do a deep dive into machine learning within the security ecosystem.
SID326 – AWS Security State of the Union Steve Schmidt, chief information security officer at AWS, addresses the current state of security in the cloud, with a particular focus on feature updates, the AWS internal “secret sauce,” and what’s on the horizon in terms of security, identity, and compliance tooling.
SID327 – How Zocdoc Achieved Security and Compliance at Scale With Infrastructure as Code In less than 12 months, Zocdoc became a cloud-first organization, diversifying their tech stack and liberating data to help drive rapid product innovation. Brian Lozada, CISO at Zocdoc, and Zhen Wang, Director of Engineering, provide an overview on how their teams recognized that infrastructure as code was the most effective approach for their security policies to scale across their AWS infrastructure. They leveraged tools such as AWS CloudFormation, hardened AMIs, and hardened containers. The use of DevSecOps within Zocdoc has enhanced data protection with the use of AWS services such as AWS KMS and AWS CloudHSM and auditing capabilities, and event-based policy enforcement with Amazon Elasticsearch Service and Amazon CloudWatch, all built on top of AWS.
SID328 – Cloud Adoption in Regulated Financial Services Macquarie, a global provider of financial services, identified early on that it would require strong partnership between its business, technology and risk teams to enable the rapid adoption of AWS cloud technologies. As a result, Macquarie built a Cloud Governance Platform to enable its risk functions to move as quickly as its development teams. This platform has been the backbone of Macquarie’s adoption of AWS over the past two years and has enabled Macquarie to accelerate its use of cloud technologies for the benefit of clients across multiple global markets. This talk will outline the strategy that Macquarie embarked on, describe the platform they built, and provide examples of other organizations who are on a similar journey.
SID329 – A Deep Dive into AWS Encryption Services AWS Encryption Services provide an easy and cost-effective way to protect your data in AWS. In this session, you learn about leveraging the latest encryption management features to minimize risk for your data.
SID330 – Best Practices for Implementing Your Encryption Strategy Using AWS Key Management Service AWS Key Management Service (KMS) is a managed service that makes it easy for you to create and manage the encryption keys used to encrypt your data. In this session, we will dive deep into best practices learned by implementing AWS KMS at AWS’s largest enterprise clients. We will review the different capabilities described in the AWS Cloud Adoption Framework (CAF) Security Perspective and how to implement these recommendations using AWS KMS. In addition to sharing recommendations, we will also provide examples that will help you protect sensitive information on the AWS Cloud.
SID331 – Architecting Security and Governance Across a Multi-Account Strategy Whether it is per business unit or per application, many AWS customers use multiple accounts to meet their infrastructure isolation, separation of duties, and billing requirements. In this session, we discuss considerations, limitations, and security patterns when building out a multi-account strategy. We explore topics such as identity federation, cross-account roles, consolidated logging, and account governance. Thomson Reuters shared their journey and their approach to a multi-account strategy. At the end of the session, we present an enterprise-ready, multi-account architecture that you can start leveraging today.
SID332 – Identity Management for Your Users and Apps: A Deep Dive on Amazon Cognito Learn how to set up an end-user directory, secure sign-up and sign-in, manage user profiles, authenticate and authorize your APIs, federate from enterprise and social identity providers, and use OAuth to integrate with your app—all without any server setup or code. With clear blueprints, we show you how to leverage Amazon Cognito to administer and secure your end users and enable identity for the applied patterns of mobile, web, and enterprise apps.
SID334 – How Amazon Business Uses Amazon Cloud Directory as the Data Store for Its Account Management Platform Join the Amazon Business team to learn how it uses Amazon Cloud Directory as the data store for its account management platform. You will learn how Amazon Business uses Amazon DynamoDB with Cloud Directory to manage user authorization and business process workflows. You also will learn how Cloud Directory helps to manage hierarchical datasets and how to get started modeling these datasets in Cloud Directory.
SID335 – Implementing Security and Governance across a Multi-Account Strategy As existing or new organizations expand their AWS footprint, managing multiple accounts while maintaining security quickly becomes a challenge. In this chalk talk, we will demonstrate how AWS Organizations, IAM roles, identity federation, and cross-account manager can be combined to build a scalable multi-account management platform. By the end of this session, attendees will have the understanding and deployment patterns to bring a secure, flexible and automated multi-account management platform to their organizations.
SID336 – Use AWS to Effectively Manage GDPR Compliance The General Data Protection Regulation (GDPR) is considered to be the most stringent privacy regulation ever enacted. Complying with GDPR could be a challenge for organizations, and AWS services can help get you ahead of the May 2018 enforcement deadline. In this chalk talk, the Legal and Compliance GDPR leadership at AWS discusses what enforcement of GDPR might mean for you and your customer’s compliance programs.
SID337 – Best Practices for Managing Access to AWS Resources Using IAM Roles In this chalk talk, we discuss why using temporary security credentials to manage access to your AWS resources is an AWS Identity and Access Management (IAM) best practice. IAM roles help you follow this best practice by delivering and rotating temporary credentials automatically. We discuss the different types of IAM roles, the assume role functionality, and how to author fine-grained trust and access policies that limit the scope of IAM roles. We then show you how to attach IAM roles to your AWS resources, such as Amazon EC2 instances and AWS Lambda functions. We also discuss migrating applications that use long-term AWS access keys to temporary credentials managed by IAM roles.
SID338 – Governance@Scale Once a customer achieves success with using AWS in a few pilot projects, most look to rapidly adopt an “all-in” enterprise migration strategy. Along this journey, several new challenges emerge that quickly become blockers and slow down migrations if they are not addressed properly. At this scale, customers will deal with the governance of hundreds of accounts, as well as thousands of IT resources residing within those accounts. Humans and traditional IT management processes cannot scale at the same pace and inevitably challenging questions emerge. In this session, we discuss those questions about governance at scale.
SID339 – Deep Dive on AWS CloudHSM Organizations building applications that handle confidential or sensitive data are subject to many types of regulatory requirements and often rely on hardware security modules (HSMs) to provide validated control of encryption keys and cryptographic operations. AWS CloudHSM is a cloud-based hardware security module (HSM) that enables you to easily generate and use your own encryption keys on the AWS Cloud using FIPS 140-2 Level 3 validated HSMs. This chalk talk will provide you a deep-dive on CloudHSM, and demonstrate how you can quickly and easily use CloudHSM to help secure your data and meet your compliance requirements.
SID340 – Using Infrastructure as Code to Inject Security Best Practices as Part of the Software Deployment Lifecycle A proactive approach to security is key to securing your applications as part of software deployment. In this chalk talk, T. Rowe Price, a financial asset management institution, outlines how they built their security automation process in enabling their numerous developer teams to rapidly and securely build and deploy applications at scale on AWS. Learn how they use services like AWS Identity and Access Management (IAM), HashiCorp tools, Terraform for automation, and Vault for secrets management, and incorporate certificate management and monitoring as part of the deployment process. T. Rowe Price discusses lessons learned and best practices to move from a tightly controlled legacy environment to an agile, automated software development process on AWS.
SID341 – Using AWS CloudTrail Logs for Scalable, Automated Anomaly Detection This workshop gives you an opportunity to develop a solution that can continuously monitor for and detect a realistic threat by analyzing AWS CloudTrail log data. Participants are provided with a CloudTrail data source and some clues to get started. Then you have to design a system that can process the logs, detect the threat, and trigger an alarm. You can make use of any AWS services that can assist in this endeavor, such as AWS Lambda for serverless detection logic, Amazon CloudWatch or Amazon SNS for alarming and notification, Amazon S3 for data and configuration storage, and more.
SID342 – Protect Your Web Applications from Common Attack Vectors Using AWS WAF As attacks and attempts to exploit vulnerabilities in web applications become more sophisticated, having an effective web request filtering solution becomes key to keeping your users’ data safe. In this workshop, discover how the OWASP Top 10 list of application security risks can help you secure your web applications. Learn how to use AWS services, such as AWS WAF, to mitigate vulnerabilities. This session includes hands-on labs to help you build a solution. Key learning goals include understanding the breadth and complexity of vulnerabilities customers need to protect from, understanding the AWS tools and capabilities that can help mitigate vulnerabilities, and learning how to configure effective HTTP request filtering rules using AWS WAF.
SID343 – User Management and App Authentication with Amazon Cognito Are you curious about how to authenticate and authorize your applications on AWS? Have you thought about how to integrate AWS Identity and Access Management (IAM) with your app authentication? Have you tried to integrate third-party SAML providers with your app authentication? Look no further. This workshop walks you through step by step to configure and create Amazon Cognito user pools and identity pools. This workshop presents you with the framework to build an application using Java, .NET, and serverless. You choose the stack and build the app with local users. See the service being used not only with mobile applications but with other stacks that normally don’t include Amazon Cognito.
SID344 – Soup to Nuts: Identity Federation for AWS AWS offers customers multiple solutions for federating identities on the AWS Cloud. In this session, we will embark on a tour of these solutions and the use cases they support. Along the way, we will dive deep with demonstrations and best practices to help you be successful managing identities on the AWS Cloud. We will cover how and when to use Security Assertion Markup Language 2.0 (SAML), OpenID Connect (OIDC), and other AWS native federation mechanisms. You will learn how these solutions enable federated access to the AWS Management Console, APIs, and CLI, AWS Infrastructure and Managed Services, your web and mobile applications running on the AWS Cloud, and much more.
SID345 – AWS Encryption SDK: The Busy Engineer’s Guide to Client-Side Encryption You know you want client-side encryption for your service but you don’t know exactly where to start. Join us for a hands-on workshop where we review some of your client-side encryption options and explore implementing client-side encryption using the AWS Encryption SDK. In this session, we cover the basics of client-side encryption, perform encrypt and decrypt operations using AWS KMS and the AWS Encryption SDK, and discuss security and performance considerations when implementing client-side encryption in your service.
Expert level
SID401 – Let’s Dive Deep Together: Advancing Web Application Security Beginning with a recap of best practices in CloudFront, AWS WAF, Route 53, and Amazon VPC security, we break into small teams to work together on improving the security of a typical web application. How can we creatively use the services? What additional features would help us? This technically advanced chalk talk requires certification at the solutions architect associate level or greater.
SID402 – An AWS Security Odyssey: Implementing Security Controls in the World of Internet, Big Data, IoT and E-Commerce Platforms This workshop will give participants the opportunity to take a security-focused journey across various AWS services and implement automated controls along the way. You will learn how to apply AWS security controls to services such as Amazon EC2, Amazon S3, AWS Lambda, and Amazon VPC. In short, you will learn how to use the cloud to protect the cloud. We will talk about how to adopt a workload-centric approach to your security strategy, address security issues in a cost-effective manner, and automate your security responses to promote maturity and auditability. In order to complete this workshop, attendees will need a laptop with wireless access, an AWS account and an IAM user that has full administrative privileges within their account. AWS credits will be provided as attendees depart the session to cover the cost of running the workshop in their own account.
SID404 – Amazon Inspector – Automating the “Sec” in DevSecOps Adopting DevSecOps can be challenging using traditional security tools that are designed for on-premises infrastructure. Amazon Inspector is an automated security assessment service that helps you adopt DevSecOps by integrating security assessments directly into the development process of applications running on Amazon EC2. We dive deep on how to use Inspector to automate host security assessments. We show you how to integrate Inspector with other AWS Cloud services to provide automated security assessments throughout your development process. We demo installing the AWS agent, setting up assessment targets and templates, and running assessments. We review the findings and discuss how you can automate the management and remediation of those findings with your available AWS services.
SID405 – Five New Security Automation Improvements You Can Make by Using Amazon CloudWatch Events and AWS Config Rules This presentation will include a deep dive into the code behind multiple security automation and remediation functions. This session will consider potential use cases, as well as feature a demonstration of a proposed script, and then walk through the code set to explain the various challenges and solutions of the intended script. All examples of code will be previously unreleased and will feature integration with services such as Trusted Advisor and Macie. All code will be released as OSS after re:Invent.
As part of the AWS Online Tech Talks series, AWS will present CloudHSM – Secure, Scalable Key Storage in AWS on Wednesday, October 25. This tech talk will start at 9:00 A.M. Pacific Time and end at 9:40 A.M. Pacific Time.
Applications handling confidential or sensitive data are subject to corporate or regulatory requirements and therefore need validated control of encryption keys and cryptographic operations. AWS CloudHSM brings to your AWS resources the security and control of traditional HSMs. This Tech Talk will show how you can leverage CloudHSM to build scalable, reliable applications without sacrificing either security or performance. Attend this Tech Talk to learn how you can use CloudHSM to quickly and easily build secure, compliant, fast, and flexible applications.
You also will:
Learn about the challenges CloudHSM can help you address.
Understand how CloudHSM can secure your workloads and data.
New rules give the DHS permission to collect “social media handles, aliases, associated identifiable information, and search results” as part of people’s immigration file. The Federal Register has the details; the rules seem to also cover US citizens who communicate with immigrants.
This is part of the general trend to scrutinize people coming into the US more closely, but it’s hard to get too worked up about the DHS accessing publicly available information. More disturbing is the trend of occasionally asking for social media passwords at the border.
AWS recently released the first guide in the new AWS Government Handbook Series: Secure Network Connections: An evaluation of the US Trusted Internet Connections program. This new series examines key cybersecurity policy initiatives that have been operating in the traditional IT space, unpacks their security objectives, and identifies lessons learned and best practices of global government first movers and early adopters seeking to achieve the initiative’s security outcomes in the cloud.
In particular, “Secure Network Connections” provides guidance to government policy makers on AWS’s position and recommendations for establishing cloud-based network perimeter monitoring capabilities. Note that this guidance can be applied to any organization that requires centralized perimeter network monitoring. The guide also summarizes lessons learned from AWS’s work with the US Department of Homeland Security (DHS) through an analysis of its federal secure network connections program, Trusted Internet Connections (TIC).
If you have questions or comments about this new guide, submit them in the “Comments” section below. And note that the next guide in this series will be published later this year.
Whew – what a week! Tara, Randall, Ana, and I have been working around the clock to create blog posts for the announcements that we made at the AWS Summit in New York. Here’s a summary to help you to get started:
Amazon Macie – This new service helps you to discover, classify, and secure content at scale. Powered by machine learning and making use of Natural Language Processing (NLP), Macie looks for patterns and alerts you to suspicious behavior, and can help you with governance, compliance, and auditing. You can read Tara’s post to see how to put Macie to work; you select the buckets of interest, customize the classification settings, and review the results in the Macie Dashboard.
AWS Glue – Randall’s post (with deluxe animated GIFs) introduces you to this new extract, transform, and load (ETL) service. Glue is serverless and fully managed. As you can see from the post, Glue crawls your data, infers schemas, and generates ETL scripts in Python. You define jobs that move data from place to place, with a wide selection of transforms, each expressed as code and stored in human-readable form. Glue uses Development Endpoints and notebooks to provide you with a testing environment for the scripts you build. We also announced that Amazon Athena now integrates with AWS Glue, as do Apache Spark and Hive on Amazon EMR.
AWS Migration Hub – This new service will help you to migrate your application portfolio to AWS. My post outlines the major steps and shows you how the Migration Hub accelerates, tracks, and simplifies your migration effort. You can begin with a discovery step, or you can jump right in and migrate directly. Migration Hub integrates with tools from our migration partners and builds upon the Server Migration Service and the Database Migration Service.
CloudHSM Update – We made a major upgrade to AWS CloudHSM, making the benefits of hardware-based key management available to a wider audience. The service is offered on a pay-as-you-go basis, and is fully managed. It is open and standards compliant, with support for multiple APIs, programming languages, and cryptography extensions. CloudHSM is an integral part of AWS and can be accessed from the AWS Management Console, AWS Command Line Interface (CLI), and through API calls. Read my post to learn more and to see how to set up a CloudHSM cluster.
Managed Rules to Secure S3 Buckets – We added two new rules to AWS Config that will help you to secure your S3 buckets. The s3-bucket-public-write-prohibited rule identifies buckets that have public write access and the s3-bucket-public-read-prohibited rule identifies buckets that have global read access. As I noted in my post, you can run these rules in response to configuration changes or on a schedule. The rules make use of some leading-edge constraint solving techniques, as part of a larger effort to use automated formal reasoning about AWS.
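If you want to enable one of these rules from the command line rather than the console, a minimal sketch with the AWS CLI might look like the following; the rule name is arbitrary, and S3_BUCKET_PUBLIC_READ_PROHIBITED is the managed rule's source identifier:
$ aws configservice put-config-rule --config-rule '{
    "ConfigRuleName": "s3-bucket-public-read-prohibited",
    "Source": {"Owner": "AWS", "SourceIdentifier": "S3_BUCKET_PUBLIC_READ_PROHIBITED"}
  }'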
CloudTrail for All Customers – Tara’s post revealed that AWS CloudTrail is now available and enabled by default for all AWS customers. As a bonus, Tara reviewed the principal benefits of CloudTrail and showed you how to review your event history and to deep-dive on a single event. She also showed you how to create a second trail, for use with Amazon CloudWatch Events.
Encryption of Data at Rest for EFS – When you create a new file system, you now have the option to select a key that will be used to encrypt the contents of the files on the file system. The encryption is done using an industry-standard AES-256 algorithm. My post shows you how to select a key and to verify that it is being used.
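As a quick sketch, creating an encrypted file system from the AWS CLI might look like this; the creation token is an arbitrary idempotency string, and you can omit --kms-key-id to use the service's default key:
$ aws efs create-file-system --creation-token my-encrypted-fs \
                             --encrypted \
                             --kms-key-id alias/my-efs-key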
Watch the Keynote
My colleagues Adrian Cockcroft and Matt Wood talked about these services and others on the stage, and also invited some AWS customers to share their stories. Here’s the video:
Our customers run an incredible variety of mission-critical workloads on AWS, many of which process and store sensitive data. As detailed in our Overview of Security Processes document, AWS customers have access to an ever-growing set of options for encrypting and protecting this data. For example, Amazon Relational Database Service (RDS) supports encryption of data at rest and in transit, with options tailored for each supported database engine (MySQL, SQL Server, Oracle, MariaDB, PostgreSQL, and Aurora).
Many customers use AWS Key Management Service (KMS) to centralize their key management, with others taking advantage of the hardware-based key management, encryption, and decryption provided by AWS CloudHSM to meet stringent security and compliance requirements for their most sensitive data and regulated workloads (you can read my post, AWS CloudHSM – Secure Key Storage and Cryptographic Operations, to learn more about Hardware Security Modules, also known as HSMs).
Major CloudHSM Update
Today, building on what we have learned from our first-generation product, we are making a major update to CloudHSM, with a set of improvements designed to make the benefits of hardware-based key management available to a much wider audience while reducing the need for specialized operating expertise. Here’s a summary of the improvements:
Pay As You Go – CloudHSM is now offered under a pay-as-you-go model that is simpler and more cost-effective, with no up-front fees.
Fully Managed – CloudHSM is now a scalable managed service; provisioning, patching, high availability, and backups are all built-in and taken care of for you. Scheduled backups extract an encrypted image of your HSM from the hardware (using keys that only the HSM hardware itself knows) that can be restored only to identical HSM hardware owned by AWS. For durability, those backups are stored in Amazon Simple Storage Service (S3), and for an additional layer of security, encrypted again with server-side S3 encryption using an AWS KMS master key.
Open & Compatible – CloudHSM is open and standards-compliant, with support for multiple APIs, programming languages, and cryptography extensions such as PKCS #11, Java Cryptography Extension (JCE), and Microsoft CryptoNG (CNG). The open nature of CloudHSM gives you more control and simplifies the process of moving keys (in encrypted form) from one CloudHSM to another, and also allows migration to and from other commercially available HSMs.
More Secure – CloudHSM Classic (the original model) supports the generation and use of keys that comply with FIPS 140-2 Level 2. We’re stepping that up a notch today with support for FIPS 140-2 Level 3, with security mechanisms that are designed to detect and respond to physical attempts to access or modify the HSM. Your keys are protected with exclusive, single-tenant access to tamper-resistant HSMs that appear within your Virtual Private Clouds (VPCs). CloudHSM supports quorum authentication for critical administrative and key management functions. This feature allows you to define a list of N possible identities that can access the functions, and then require at least M of them to authorize the action. It also supports multi-factor authentication using tokens that you provide.
AWS-Native – The updated CloudHSM is an integral part of AWS and plays well with other tools and services. You can create and manage a cluster of HSMs using the AWS Management Console, AWS Command Line Interface (CLI), or API calls.
Diving In
You can create CloudHSM clusters that contain 1 to 32 HSMs, each in a separate Availability Zone in a particular AWS Region. Spreading HSMs across AZs gives you high availability (including built-in load balancing); adding more HSMs gives you additional throughput. The HSMs within a cluster are kept in sync: performing a task or operation on one HSM in a cluster automatically updates the others. Each HSM in a cluster has its own Elastic Network Interface (ENI).
All interaction with an HSM takes place via the AWS CloudHSM client. It runs on an EC2 instance and uses certificate-based mutual authentication to create secure (TLS) connections to the HSMs.
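As a rough illustration (the package name, service name, and IP address below are placeholders; the CloudHSM documentation has the current installation steps), setting up the client on an Amazon Linux EC2 instance might look something like this:
$ sudo yum install -y ./cloudhsm-client-latest.el6.x86_64.rpm   # install the CloudHSM client package
$ sudo /opt/cloudhsm/bin/configure -a 10.0.1.9                  # point the client at an HSM ENI IP address
$ sudo service cloudhsm-client start                            # start the client daemon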
At the hardware level, each HSM includes hardware-enforced isolation of crypto operations and key storage. Each customer HSM runs on dedicated processor cores.
Setting Up a Cluster
Let’s set up a cluster using the CloudHSM Console:
I click on Create cluster to get started, select my desired VPC and the subnets within it (I can also create a new VPC and/or subnets if needed).
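If you prefer the AWS CLI, a roughly equivalent sketch (with the subnet IDs and <cluster ID> as placeholders) creates the cluster and adds an HSM to it:
$ aws cloudhsmv2 create-cluster --hsm-type hsm1.medium \
                                --subnet-ids subnet-0a1b2c3d subnet-4e5f6a7b
$ aws cloudhsmv2 create-hsm --cluster-id <cluster ID> \
                            --availability-zone us-east-1a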
Then I review my settings and click on Create.
After a few minutes, my cluster exists, but is uninitialized.
Initialization simply means retrieving a certificate signing request (the Cluster CSR).
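One way to retrieve the CSR is with the AWS CLI; <cluster ID> is a placeholder for your cluster's ID:
$ aws cloudhsmv2 describe-clusters --filters clusterIds=<cluster ID> \
                                   --output text \
                                   --query 'Clusters[].Certificates.ClusterCsr' > <cluster ID>_ClusterCsr.csr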
And then creating a private key and using it to sign the request (these commands were copied from the Initialize Cluster docs and I have omitted the output. Note that ID identifies the cluster):
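The following is a representative sketch of that step, modeled on the Initialize Cluster documentation; <cluster ID> and the certificate validity period are placeholders:
$ openssl genrsa -aes256 -out customerCA.key 2048                              # create a private key for a self-signed issuing CA
$ openssl req -new -x509 -days 3652 -key customerCA.key -out customerCA.crt    # create the self-signed CA certificate
$ openssl x509 -req -days 3652 -in <cluster ID>_ClusterCsr.csr \
               -CA customerCA.crt -CAkey customerCA.key -CAcreateserial \
               -out <cluster ID>_CustomerHsmCertificate.crt                    # sign the cluster CSR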
The next step is to apply the signed certificate to the cluster using the console or the CLI. After this has been done, the cluster can be activated by changing the password for the HSM’s administrative user, otherwise known as the Crypto Officer (CO).
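With the CLI, applying the signed certificate is a single call; this sketch reuses the placeholder file names from the signing step:
$ aws cloudhsmv2 initialize-cluster --cluster-id <cluster ID> \
                                    --signed-cert file://<cluster ID>_CustomerHsmCertificate.crt \
                                    --trust-anchor file://customerCA.crt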
Once the cluster has been created, initialized and activated, it can be used to protect data. Applications can use the APIs in AWS CloudHSM SDKs to manage keys, encrypt & decrypt objects, and more. The SDKs provide access to the CloudHSM client (running on the same instance as the application). The client, in turn, connects to the cluster across an encrypted connection.
Available Today
The new HSM is available today in the US East (Northern Virginia), US West (Oregon), US East (Ohio), and EU (Ireland) Regions, with more in the works. Pricing starts at $1.45 per HSM per hour.