Tag Archives: encryption

Improve clinical trial outcomes by using AWS technologies

Post Syndicated from Mayank Thakkar original https://aws.amazon.com/blogs/big-data/improve-clinical-trial-outcomes-by-using-aws-technologies/

We are living in a golden age of innovation, where personalized medicine is making it possible to cure diseases that we never thought curable. Digital medicine is helping people with diseases get healthier, and we are constantly discovering how to use the body’s immune system to target and eradicate cancer cells. According to a report published by ClinicalTrials.gov, the number of registered studies hit 293,000 in 2018, representing a 250x growth since 2000.

However, an internal rate of return (IRR) analysis by Endpoints News, using data from EvaluatePharma, highlights a countervailing trend. While strong growth in registered studies reflects flourishing pharma innovation, the IRR on pharma R&D has declined rapidly, from around 17 percent in 2000 to below the cost of capital in 2017, with a projected fall to 0 percent by 2020.

This blog post is the first installment in a series that focuses on the end-to-end workflow of collecting, storing, processing, visualizing, and acting on clinical trials data in a compliant and secure manner. The series also discusses the application of artificial intelligence and machine learning technologies to the world of clinical trials. In this post, we highlight common architectural patterns that AWS customers use to modernize their clinical trials. These incorporate mobile technologies for better evidence generation, cost reduction, increasing quality, improving access, and making medicine more personalized for patients.

Improving the outcomes of clinical trials and reducing costs

Biotech and pharma organizations are feeling the pressure to use resources as efficiently as possible. This pressure forces them to search for any opportunity to streamline processes, get faster, and stay more secure, all while decreasing costs. More and more life sciences companies are targeting biologics, CAR-T, and precision medicine therapeutics, with focus shifting towards smaller, geographically distributed patient segments. This shift has resulted in an increasing mandate to capture data from previously unavailable, nontraditional sources. These sources include mobile devices, IoT devices, and in-home and clinical devices. Life sciences companies merge data from these sources with data from traditional clinical trials to build robust evidence around the safety and efficacy of a drug.

Early last year, the Clinical Trials Transformation Initiative (CTTI) provided recommendations about using mobile technologies for capturing holistic, high quality, attributable, real-world data from patients and for submission to the U.S. Food and Drug Administration (FDA). By using mobile technologies, life sciences companies can reduce barriers to trial participation and lower costs associated with conducting clinical trials. Global regulatory groups such as the FDA, Health Canada, and Medicines and Healthcare products Regulatory Agency (MHRA), among others, are also in favor of using mobile technologies. Mobile technologies can make patient recruitment more efficient, reach endpoints faster, and reduce the cost and time required to conduct clinical trials.

Improved data ingestion using mobile technologies can speed up outcomes, reduce costs, and improve the accuracy of clinical trials. This is especially true when mobile data ingestion is supplemented with artificial intelligence and machine learning (AI/ML) technologies.

Together, they can usher in a new age of smart clinical trials.

At the same time, traditional clinical trial processes and technology designed for mass-marketed blockbuster drugs can’t effectively meet emerging industry needs. This leaves life sciences and pharmaceutical companies in need of assistance for evolving their clinical trial operations. These circumstances result in making clinical trials one of the largest areas of investment for bringing a new drug to market.

Using mobile technologies with traditional technologies in clinical trials can improve the outcomes of the trials and simultaneously reduce costs. Some of the use cases that the integration of various technologies enables include these:

  • Identifying and tracking participants in clinical trials
    • Identifying participants for clinical trials recruitment
    • Educating and informing patients participating in clinical trials
    • Implementing standardized protocols and sharing associated information with trial participants
    • Tracking adverse events and safety profiles
  • Integrating genomic and phenotypic data for identifying novel biomarkers
  • Integrating mobile data into clinical trials for better clinical trial management
  • Creating a patient-control arm based on historical data
  • Stratifying cohorts based on treatment, claims, and registry datasets
  • Building a collaborative, interoperable network for data sharing and knowledge creation
  • Building compliance-ready infrastructure for clinical trial management

The AWS Cloud provides HIPAA eligible services and solutions. As an AWS customer, you can use these to build solutions for global implementation of mobile devices and sensors in trials, secure capture of streaming Internet of Things (IoT) data, and advanced analytics through visualization tools or AI/ML capabilities. Some of the use cases these services and solutions enable are finding and recruiting patients using smart analytics, facilitating global data management, and remote or in-patient site monitoring. Others include predicting lack of adherence, detecting adverse events, and accelerating trial outcomes along with optimizing trial costs.

Clinical Trials 2.0 (CT2.0) at AWS is geared toward facilitating wider adoption of cloud-native services to enable data ingestion from disparate sources, cost-optimized and reliable storage, and holistic analytics. At the same time, CT2.0 provides the granular access control, end-to-end security, and global scalability needed to conduct clinical trials more efficiently.

Reference architecture

One of the typical architectures for managing a clinical trial using mobile technologies is shown following. This architecture focuses on capturing real-time data from mobile sources and providing a way to process it.

* – Additional considerations such as data security, access control, and compliance need to be incorporated into the architecture and are discussed in the remainder of this post.

Managing a trial by using this architecture consists of the following five major steps.

Step 1: Collect data

Mobile devices, personal wearables, instruments, and smart devices are being used extensively (or are under consideration) by global pharmaceutical companies in patient care and clinical trials to provide real-time data for activity tracking, vital signs monitoring, and similar measurements. Devices like infusion pumps and personal-use dialysis machines require tracking and alerting on device consumables and calibration status. Remote settings management is also a major use case for these kinds of devices. The end-user mobile devices used in a clinical trial emit a large volume of telemetry data that requires real-time capture, cleansing, transformation, and analysis.

Typically, these devices are connected to an edge node or a smartphone, which provides sufficient computing resources to stream data to AWS IoT Core. AWS IoT Core can then be configured to write data to Amazon Kinesis Data Firehose in near real time. Kinesis Data Firehose is a fully managed service for delivering real-time streaming data to destinations such as Amazon S3. S3 provides online, flexible, cost-efficient, pay-as-you-go storage that replicates your data across three Availability Zones within an AWS Region. The edge node or smartphone can use the AWS IoT SDKs to publish device data to AWS IoT Core over MQTT. This process supports the AWS method of authentication (called SigV4), X.509 certificate–based authentication, and customer-created token-based authentication (through custom authorizers). This authenticated approach enables you to map your choice of policies to each certificate and remotely control device or application access. You can also use the Kinesis Data Firehose encryption feature to enable server-side encryption of your data.
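To make the device side concrete, here is a minimal sketch of an edge node publishing a telemetry reading to AWS IoT Core over MQTT with X.509 certificate–based authentication, using the AWS IoT Device SDK for Python. The endpoint, certificate paths, topic name, and payload fields are placeholders for illustration, not values from this architecture.

```python
import json
import time

from AWSIoTPythonSDK.MQTTLib import AWSIoTMQTTClient

# Each device needs a unique client ID; "wearable-001" is hypothetical.
client = AWSIoTMQTTClient("wearable-001")
client.configureEndpoint("example-ats.iot.us-east-1.amazonaws.com", 8883)
# X.509 mutual TLS: root CA, device private key, device certificate.
client.configureCredentials("root-ca.pem", "device.private.key", "device.cert.pem")
client.connect()

reading = {"deviceId": "wearable-001", "heartRate": 72, "ts": int(time.time())}
# QoS 1 delivers the message at least once; an IoT topic rule can then
# forward messages on this topic to Kinesis Data Firehose for delivery to S3.
client.publish("trial/telemetry/wearable-001", json.dumps(reading), 1)
client.disconnect()
```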

You can also capture additional data such as Case Report Forms (CRF), Electronic Medical Records (EMR), and medical images using Picture Archiving and Communication Systems (PACS). In addition, you can capture laboratory data (Labs) and other Patient Reported Outcomes data (ePRO). AWS provides multiple tools and services to effectively and securely connect to these data sources, enabling you to ingest data in various volumes, variety, and velocities. For more information about creating a HealthCare Data Hub and ingesting Digital Imaging and Communications in Medicine (DICOM) data, see the AWS Big Data Blog post Create a Healthcare Data Hub with AWS and Mirth Connect.

Step 2: Store data

After data is ingested from the devices and wearables used in the clinical trial, Kinesis Data Firehose stores the data on Amazon S3. This stored data serves as a raw copy and can later be used for historical analysis and pattern prediction. Using Amazon S3 lifecycle policies, you can periodically move your data to lower-cost storage such as Amazon S3 Glacier to further optimize your storage costs. Amazon S3 Intelligent-Tiering automatically optimizes costs when data access patterns change by moving data between two access tiers—frequent access and infrequent access—without performance impact or operational overhead. You can also choose to encrypt data at rest and in motion using the various encryption options available on S3.

Amazon S3 offers an extremely durable, highly available, and infinitely scalable data storage infrastructure, simplifying most data processing, backup, and replication tasks.
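As an illustration, here is a sketch of a lifecycle configuration applied with boto3 that transitions raw telemetry to Intelligent-Tiering after 30 days and to Glacier after a year. The bucket name, prefix, and day counts are hypothetical; choose values that match your own retention and access patterns.

```python
import boto3

s3 = boto3.client("s3")
s3.put_bucket_lifecycle_configuration(
    Bucket="clinical-trial-raw-data",  # hypothetical bucket name
    LifecycleConfiguration={
        "Rules": [
            {
                "ID": "archive-raw-telemetry",
                "Filter": {"Prefix": "raw/"},
                "Status": "Enabled",
                "Transitions": [
                    # Move objects to Intelligent-Tiering after 30 days...
                    {"Days": 30, "StorageClass": "INTELLIGENT_TIERING"},
                    # ...and archive them to Glacier after 365 days.
                    {"Days": 365, "StorageClass": "GLACIER"},
                ],
            }
        ]
    },
)
```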

Step 3: Data processing—fast lane

After collecting and storing a raw copy of the data, Amazon S3 is configured to publish events to AWS Lambda and invoke a Lambda function by passing the event data as a parameter. The Lambda function extracts key performance indicators (KPIs) such as adverse event notifications, medication adherence, and treatment schedule management from the incoming data. You can use Lambda to process these KPIs and store them, encrypted at rest, in Amazon DynamoDB, which powers a near-real-time clinical trial status dashboard. The dashboard alerts clinical trial coordinators in real time so that appropriate interventions can take place.
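A sketch of what such a Lambda function might look like follows. The event parsing uses the standard S3 event notification format; the table name, KPI logic, and field names are hypothetical.

```python
import json

import boto3

s3 = boto3.client("s3")
table = boto3.resource("dynamodb").Table("TrialKpis")  # hypothetical table

def handler(event, context):
    """Invoked by S3 ObjectCreated events on the raw-data bucket."""
    for record in event["Records"]:
        bucket = record["s3"]["bucket"]["name"]
        key = record["s3"]["object"]["key"]
        body = s3.get_object(Bucket=bucket, Key=key)["Body"].read()
        reading = json.loads(body)
        # Hypothetical KPI: flag heart rates outside a safe range as
        # adverse-event candidates for the coordinator dashboard.
        if not 40 <= reading.get("heartRate", 72) <= 150:
            table.put_item(Item={
                "patientId": reading["deviceId"],
                "eventTs": reading["ts"],
                "kpi": "adverse_event_candidate",
                "value": reading["heartRate"],
            })
```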

In addition to this, using a data warehouse full of medical records, you can train and implement a machine learning model. This model can predict which patients are about to switch medications or might exhibit adherence challenges in the future. Such prediction can enable clinical trial coordinators to narrow in on those patients with mitigation strategies.

Step 4: Data processing—batch

For historical analysis and pattern prediction, the staged data (stored in S3) is processed in batches. A Lambda function triggers the extract, transform, and load (ETL) process every time new data is added to the raw data S3 bucket. This Lambda function starts an ETL job in AWS Glue, a fully managed ETL service that makes it easy for you to prepare and load your data for analytics. This approach helps in mining current and historical data to derive actionable insights, which are stored on Amazon S3.
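The trigger itself can be very small. Here is a sketch, again with a hypothetical Glue job name and argument:

```python
import boto3

glue = boto3.client("glue")

def handler(event, context):
    """Invoked by S3 ObjectCreated events; kicks off the batch ETL job."""
    bucket = event["Records"][0]["s3"]["bucket"]["name"]
    response = glue.start_job_run(
        JobName="clinical-trial-etl",        # hypothetical Glue job name
        Arguments={"--raw_bucket": bucket},  # passed to the job script
    )
    return response["JobRunId"]
```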

From there, data is loaded into Amazon Redshift, a cost-effective, petabyte-scale data warehouse offering from AWS. You can also use Amazon Redshift Spectrum to extend data warehousing out to exabytes without loading any data into Amazon Redshift, as detailed in the Big Data blog post Amazon Redshift Spectrum Extends Data Warehousing Out to Exabytes—No Loading Required. This gives your clinical trial coordinators an all-encompassing picture of the entire clinical trial, enabling you to react and respond faster.

In addition to this, you can train and implement a machine learning model to identify patients who might be at risk for adherence challenges. This enables clinical trial coordinators to reinforce patient education and support.

Step 5: Visualize and act on data

After the data is processed and ready to be consumed, you can use Amazon QuickSight, a cloud-native business intelligence service from AWS that offers native Amazon Redshift connectivity. Amazon QuickSight is serverless and so can be rolled out to your audiences in hours. You can also use a host of third-party reporting tools, which can use AWS-supplied JDBC or ODBC drivers or open-source PostgreSQL drivers to connect with Amazon Redshift. These tools include TIBCO Spotfire Analytics, Tableau Server, Qlik Sense Enterprise, Looker, and others. Real-time data processing (step 3 preceding) combines with historical-view batch processing (step 4). Together, they empower contract research organizations (CROs), study managers, trial coordinators, and other entities involved in the clinical trial journey to make effective and informed decisions at a speed and frequency that was previously unavailable. Using Amazon QuickSight’s unique Pay-per-Session pricing model, you can optimize costs for your bursty usage models by paying only when users access the dashboards.

Using Amazon Simple Notification Service (Amazon SNS), real-time feedback based on incoming data and telemetry is sent to patients by using text messages, mobile push, and emails. In addition, study managers and coordinators can send Amazon SNS notifications to patients. Amazon SNS provides a fully managed pub/sub messaging for micro services, distributed systems, and serverless applications. It’s designed for high-throughput, push-based, many-to-many messaging. Alerts and notifications can be based on current data or a combination of current and historical data.
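For instance, a coordinator-initiated reminder might be sent with a single publish call; the phone number and message below are purely illustrative:

```python
import boto3

sns = boto3.client("sns")

# Direct SMS to a trial participant (number and text are placeholders).
sns.publish(
    PhoneNumber="+15555550123",
    Message="Reminder: please take your 8:00 AM study medication.",
)
```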

To encrypt messages published to Amazon SNS, you can follow the steps listed in the post Encrypting messages published to Amazon SNS with AWS KMS, on the AWS Compute Blog.   

Data security, data privacy, data integrity, and compliance considerations

At AWS, customer trust is our top priority. We deliver services to millions of active customers, including enterprises, educational institutions, and government agencies in over 190 countries. Our customers include financial services providers, healthcare providers, and governmental agencies, who trust us with some of their most sensitive information.

To facilitate this, along with the services mentioned earlier, you should also use the AWS Identity and Access Management (IAM) service. IAM enables you to maintain segregation of access and fine-grained access control, and to secure end-user mobile and web applications. You can also use AWS Security Token Service (AWS STS) to provide secure, self-expiring, time-boxed, temporary security credentials to third-party administrators and service providers, greatly strengthening your security posture. You can use AWS CloudTrail to log IAM and STS API calls. Additionally, AWS IoT Device Management makes it easy to securely onboard, organize, monitor, and remotely manage IoT devices at scale.

With AWS, you can add an additional layer of security to your data at rest in the cloud. AWS provides scalable and efficient encryption features for services like Amazon EBS, Amazon S3, Amazon Redshift, Amazon SNS, AWS Glue, and many more. Flexible key management options, including AWS Key Management Service (AWS KMS), enable you to choose whether to have AWS manage the encryption keys or to keep complete control over your keys. In addition, AWS provides APIs for you to integrate encryption and data protection with any of the services that you develop or deploy in an AWS environment.
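As a small illustration of those APIs, the following sketch encrypts and decrypts a payload directly under a KMS key. Direct KMS encryption suits payloads up to 4 KB; for larger data you would generate a data key and encrypt locally. The key alias is hypothetical.

```python
import boto3

kms = boto3.client("kms")

# Encrypt a small payload (up to 4 KB) under a customer master key.
ciphertext = kms.encrypt(
    KeyId="alias/clinical-trial-data",  # hypothetical key alias
    Plaintext=b"patient-reported outcome payload",
)["CiphertextBlob"]

# KMS resolves the key from metadata embedded in the ciphertext.
plaintext = kms.decrypt(CiphertextBlob=ciphertext)["Plaintext"]
```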

As a customer, you maintain ownership of your data, and select which AWS services can process, store, and host the content. Generally speaking, AWS doesn’t access or use customers’ content for any purpose without their consent. AWS never uses customer data to derive information for marketing or advertising.

When evaluating the security of a cloud solution, it’s important that you understand and distinguish between the security of the cloud and security in the cloud. The AWS Shared Responsibility Model details this relationship.

To assist you with your compliance efforts, AWS continues to expand the set of services covered by compliance regulations, attestations, certifications, and programs across the world. To decide which services are suitable for you, see the services in scope page.

You can also use services such as AWS CloudTrail, AWS Config, Amazon GuardDuty, and AWS KMS to enhance your compliance and auditing efforts. Find more details in the AWS Compliance Solutions Guide.

Final thoughts

With the ever-growing interconnectivity and technological advances in the field of medical devices, mobile devices and sensors can improve numerous aspects of clinical trials. They can help in recruitment, informed consent, patient counseling, and patient communication management. They can also improve protocol and medication adherence, clinical endpoints measurement, and the process of alerting participants on adverse events. Smart sensors, smart mobile devices, and robust interconnecting systems can be central in conducting clinical trials.

Every biopharma organization conducting or sponsoring a clinical trial faces the conundrum of advancing its approach to trials while maintaining overall trial performance and data consistency. The AWS Cloud enables a new dimension for how data is collected, stored, and used for clinical trials, and thus addresses that conundrum as we march towards a new reality of how drugs are brought to market. The AWS Cloud abstracts away technical challenges such as scaling, security, and establishing a cost-efficient IT infrastructure. In doing so, it allows biopharma organizations to focus on their core mission of improving patient lives through the development of effective, groundbreaking treatments.

About the Authors

Mayank Thakkar – Global Solutions Architect, AWS HealthCare and Life Sciences

Deven Atnoor, Ph.D. – Industry Specialist, AWS HealthCare and Life Sciences

Unhackable Cryptography?

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2019/04/unhackable_cryp.html

A recent article overhyped the release of EverCrypt, a cryptography library created using formal methods to prove security against specific attacks.

The Quanta magazine article sets off a series of “snake-oil” alarm bells. The author’s GitHub README is more measured and accurate, and illustrates what a cool project this really is. But it’s not “hacker-proof cryptographic code.”

CAs Reissue Over One Million Weak Certificates

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2019/03/cas_reissue_ove.html

Turns out that the software a bunch of CAs used to generate public-key certificates was flawed: they created random serial numbers with only 63 bits instead of the required 64. That may not seem like a big deal to the layman, but that one bit change means that the serial numbers only have half the required entropy. This really isn’t a security problem; the serial numbers are to protect against attacks that involve weak hash functions, and we don’t allow those weak hash functions anymore. Still, it’s a good thing that the CAs are reissuing the certificates. The point of a standard is that it’s to be followed.

I Was Cited in a Court Decision

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2019/03/i_was_cited_in_.html

An article I co-wrote — my first law journal article — was cited by the Massachusetts Supreme Judicial Court — the state supreme court — in a case on compelled decryption.

Here’s the first, in footnote 1:

We understand the word “password” to be synonymous with other terms that cell phone users may be familiar with, such as Personal Identification Number or “passcode.” Each term refers to the personalized combination of letters or digits that, when manually entered by the user, “unlocks” a cell phone. For simplicity, we use “password” throughout. See generally, Kerr & Schneier, Encryption Workarounds, 106 Geo. L.J. 989, 990, 994, 998 (2018).

And here’s the second, in footnote 5:

We recognize that ordinary cell phone users are likely unfamiliar with the complexities of encryption technology. For instance, although entering a password “unlocks” a cell phone, the password itself is not the “encryption key” that decrypts the cell phone’s contents. See Kerr & Schneier, supra at 995. Rather, “entering the [password] decrypts the [encryption] key, enabling the key to be processed and unlocking the phone. This two-stage process is invisible to the casual user.” Id. Because the technical details of encryption technology do not play a role in our analysis, they are not worth belaboring. Accordingly, we treat the entry of a password as effectively decrypting the contents of a cell phone. For a more detailed discussion of encryption technology, see generally Kerr & Schneier, supra.

On the Security of Password Managers

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2019/02/on_the_security_1.html

There’s new research on the security of password managers, specifically 1Password, Dashlane, KeePass, and Lastpass. This work specifically looks at password leakage on the host computer. That is, does the password manager accidentally leave plaintext copies of the password lying around memory?

All password managers we examined sufficiently secured user secrets while in a “not running” state. That is, if a password database were to be extracted from disk and if a strong master password was used, then brute forcing of a password manager would be computationally prohibitive.

Each password manager also attempted to scrub secrets from memory. But residual buffers remained that contained secrets, most likely due to memory leaks, lost memory references, or complex GUI frameworks which do not expose internal memory management mechanisms to sanitize secrets.

This was most evident in 1Password7 where secrets, including the master password and its associated secret key, were present in both a locked and unlocked state. This is in contrast to 1Password4, where at most, a single entry is exposed in a “running unlocked” state and the master password exists in memory in an obfuscated form, but is easily recoverable. If 1Password4 scrubbed the master password memory region upon successful unlocking, it would comply with all proposed security guarantees we outlined earlier.

This paper is not meant to criticize specific password manager implementations; rather, it is to establish a reasonable minimum baseline which all password managers should comply with. It is evident that attempts are made to scrub sensitive memory in all password managers. However, each password manager fails in implementing proper secrets sanitization for various reasons.

For example:

LastPass obfuscates the master password while users are typing in the entry, and when the password manager enters an unlocked state, database entries are only decrypted into memory when there is user interaction. However, ISE reported that these entries persist in memory after the software enters a locked state. It was also possible for the researchers to extract the master password and interacted-with password entries due to a memory leak.

KeePass scrubs the master password from memory so that it is not recoverable. However, errors in workflows permitted the researchers to extract credential entries that had been interacted with. In the case of Windows APIs, various memory buffers that contain decrypted entries may sometimes not be scrubbed correctly.

Whether this is a big deal or not depends on whether you consider your computer to be trusted.

Several people have emailed me to ask why my own Password Safe was not included in the evaluation, and whether it has the same vulnerabilities. My guess about the former is that Password Safe isn’t as popular as the others. (This is for two reasons: 1) I don’t publicize it very much, and 2) it doesn’t have an easy way to synchronize passwords across devices or otherwise store password data in the cloud.) As to the latter: we tried to code Password Safe not to leave plaintext passwords lying around in memory.

So, Independent Security Evaluators: take a look at Password Safe.

Also, remember the vulnerabilities found in many cloud-based password managers back in 2014?

News article. Slashdot thread.

Hacking the GCHQ Backdoor

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2019/01/hacking_the_gch.html

Last week, I evaluated the security of a recent GCHQ backdoor proposal for communications systems. Furthering the debate, Nate Cardozo and Seth Schoen of EFF explain how this sort of backdoor can be detected:

In fact, we think when the ghost feature is active — silently inserting a secret eavesdropping member into an otherwise end-to-end encrypted conversation in the manner described by the GCHQ authors — it could be detected (by the target as well as certain third parties) with at least four different techniques: binary reverse engineering, cryptographic side channels, network-traffic analysis, and crash log analysis. Further, crash log analysis could lead unrelated third parties to find evidence of the ghost in use, and it’s even possible that binary reverse engineering could lead researchers to find ways to disable the ghost capability on the client side. It should be obvious that none of these possibilities are desirable for law enforcement or society as a whole. And while we’ve theorized some types of mitigations that might make the ghost less detectable by particular techniques, they could also impose considerable costs to the network when deployed at the necessary scale, as well as creating new potential security risks or detection methods.

Other critiques of the system were written by Susan Landau and Matthew Green.

EDITED TO ADD (1/26): Good commentary on how to defeat the backdoor detection.

El Chapo’s Encryption Defeated by Turning His IT Consultant

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2019/01/el_chapos_encry.html

Impressive police work:

In a daring move that placed his life in danger, the I.T. consultant eventually gave the F.B.I. his system’s secret encryption keys in 2011 after he had moved the network’s servers from Canada to the Netherlands during what he told the cartel’s leaders was a routine upgrade.

A Dutch article says that it’s a BlackBerry system.

El Chapo had his IT person install “…spyware called FlexiSPY on the ‘special phones’ he had given to his wife, Emma Coronel Aispuro, as well as to two of his lovers, including one who was a former Mexican lawmaker.” That same software was used by the FBI when his IT person turned over the keys. Yet again we learn the lesson that a backdoor can be used against you.

And it doesn’t have to be with the IT person’s permission. A good intelligence agency can use the IT person’s authorizations without his knowledge or consent. This is why the NSA hunts sysadmins.

Slashdot thread. Hacker News thread. Boing Boing post.

Alex Stamos on Content Moderation and Security

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2019/01/alex_stamos_on_.html

Former Facebook CISO Alex Stamos argues that increasing political pressure on social media platforms to moderate content will give them a pretext to turn all end-to-end crypto off — which would be more profitable for them and bad for society.

If we ask tech companies to fix ancient societal ills that are now reflected online with moderation, then we will end up with huge, democratically-unaccountable organizations controlling our lives in ways we never intended. And those ills will still exist below the surface.

New Australian Backdoor Law

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2018/12/new_australian_.html

Last week, Australia passed a law giving the government the ability to demand backdoors in computers and communications systems. Details are still to be defined, but it’s really bad.

Note: Many people e-mailed me to ask why I haven’t blogged this yet. One, I was busy with other things. And two, there’s nothing I can say that I haven’t said many times before.

If there are more good links or commentary, please post them in the comments.

EDITED TO ADD (12/13): The Australian government response is kind of embarrassing.

Are KMS custom key stores right for you?

Post Syndicated from Richard Moulds original https://aws.amazon.com/blogs/security/are-kms-custom-key-stores-right-for-you/

You can use the AWS Key Management Service (KMS) custom key store feature to gain more control over your KMS keys. The KMS custom key store integrates KMS with AWS CloudHSM to help satisfy compliance obligations that would otherwise require the use of on-premises hardware security modules (HSMs) while providing the AWS service integrations of KMS. However, the additional control comes with increased cost and potential impact on performance and availability. This post will help you decide if this feature is the best approach for you.

KMS is a fully managed service that generates encryption keys and helps you manage their use across more than 45 AWS services. It also supports the AWS Encryption SDK and other client-side encryption tools, and you can integrate it into your own applications. KMS is designed to meet the requirements of the vast majority of AWS customers. However, there are situations where customers need to manage their keys in single-tenant HSMs that they exclusively control. Previously, KMS did not meet these requirements since it offered only the ability to store keys in shared HSMs that are managed by KMS.

AWS CloudHSM is a service that’s primarily intended to support customer-managed applications that are specifically designed to use HSMs. It provides direct control over HSM resources, but the service isn’t, by itself, widely integrated with other AWS managed services. Before custom key store, this meant that if you required direct control of your HSMs but still wanted to use and store regulated data in AWS managed services, you had to choose between changing those requirements, not using a given AWS service, or building your own solution. KMS custom key store gives you another option.

How does a custom key store work?

With custom key store, you can configure your own CloudHSM cluster and authorize KMS to use it as a dedicated key store for your keys rather than the default KMS key store. Then, when you create keys in KMS, you can choose to generate the key material in your CloudHSM cluster. Your KMS customer master keys (CMKs) never leave the CloudHSM instances, and all KMS operations that use those keys are only performed in your HSMs. In all other respects, the master keys stored in your custom key store are used in a way that is consistent with other KMS CMKs.

This diagram illustrates the primary components of the service and shows how a cluster of two CloudHSM instances is connected to KMS to create a customer controlled key store.
 

Figure 1: A cluster of two CloudHSM instances is connected to KMS to create a customer controlled key store

Because you control your CloudHSM cluster, you can take direct action to manage certain aspects of the lifecycle of your keys, independently of KMS. Specifically, you can verify that KMS correctly created keys in your HSMs and you can delete key material and restore keys from backup at any time. You can also choose to connect and disconnect the CloudHSM cluster from KMS, effectively isolating your keys from KMS. However, with more control comes more responsibility. It’s important that you understand the availability and durability impact of using this feature, and I discuss the issues in the next section.

Decision criteria

KMS customers who plan to use a custom key store tell us they expect to use the feature selectively, deciding on a key-by-key basis where to store them. To help you decide if and how you might use the new feature, here are some important issues to consider.

Here are some reasons you might want to store a key in a custom key store:

  • You have keys that are required to be protected in a single-tenant HSM or in an HSM over which you have direct control.
  • You have keys that are explicitly required to be stored in an HSM validated at FIPS 140-2 level 3 overall (the HSMs used in the default KMS key store are validated to level 2 overall, with level 3 in several categories, including physical security).
  • You have keys that are required to be auditable independently of KMS.

And here are some considerations that might influence your decision to use a custom key store:

  • Cost — Each custom key store requires that your CloudHSM cluster contains at least two HSMs. CloudHSM charges vary by region, but you should expect costs of at least $1,000 per month, per HSM, if each device is permanently provisioned. This cost occurs regardless of whether you make any requests of the KMS API directly or indirectly through an AWS service.
  • Performance — The number of HSMs determines the rate at which keys can be used. It’s important that you understand the intended usage patterns for your keys and ensure that you have provisioned your HSM resources appropriately.
  • Availability — The number of HSMs and the use of Availability Zones (AZs) affect the availability of your cluster and, therefore, your keys. You must understand and assess the risk that configuration errors result in a custom key store being disconnected, or in key material being deleted and unrecoverable.
  • Operations — By using the custom key store feature, you will perform certain tasks that are normally handled by KMS. You will need to set up HSM clusters, configure HSM users, and potentially restore HSMs from backup. These are security-sensitive tasks for which you should have the appropriate resources and organizational controls in place to perform.

Getting Started

Here’s a basic rundown of the steps that you’ll take to create your first key in a custom key store within a given region; a scripted sketch of steps 3 through 5 follows the list.

  1. Create your CloudHSM cluster, initialize it, and add HSMs to the cluster. If you already have a CloudHSM cluster, you can use it as a custom key store in addition to your existing applications.
  2. Create a CloudHSM user so that KMS can access your cluster to create and use keys.
  3. Create a custom key store entry in KMS, give it a name, define which CloudHSM cluster you want it to use, and give KMS the credentials to access your cluster.
  4. Instruct KMS to make a connection to your cluster and log in.
  5. Create a CMK in KMS in the usual way except now select CloudHSM as the source of your key material. You’ll define administrators, users, and policies for the key as you would for any other CMK.
  6. Use the key via the existing KMS APIs, AWS CLI, or the AWS Encryption SDK. Requests to use the key don’t need to be context-aware of whether the key is stored in a custom key store or the default KMS key store.
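For orientation, here is what steps 3 through 5 might look like with boto3. The cluster ID, certificate path, password, and names are placeholders; note that connecting the key store is asynchronous, so you must wait for the CONNECTED state before creating keys.

```python
import time

import boto3

kms = boto3.client("kms")

# Step 3: register the CloudHSM cluster as a custom key store
# (cluster ID, trust anchor certificate, and password are placeholders).
store_id = kms.create_custom_key_store(
    CustomKeyStoreName="trial-key-store",
    CloudHsmClusterId="cluster-1a23b4cdefg",
    TrustAnchorCertificate=open("customerCA.crt").read(),
    KeyStorePassword="kmsuser-password",
)["CustomKeyStoreId"]

# Step 4: connect KMS to the cluster; this call returns immediately,
# so poll until the key store reports CONNECTED.
kms.connect_custom_key_store(CustomKeyStoreId=store_id)
while True:
    state = kms.describe_custom_key_stores(
        CustomKeyStoreId=store_id
    )["CustomKeyStores"][0]["ConnectionState"]
    if state == "CONNECTED":
        break
    time.sleep(30)

# Step 5: create a CMK whose key material is generated and held in your
# CloudHSM cluster rather than in the default KMS key store.
key = kms.create_key(
    Origin="AWS_CLOUDHSM",
    CustomKeyStoreId=store_id,
    Description="CMK backed by a customer-controlled CloudHSM cluster",
)
```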

Summary

Some customers need specific controls in place before they can use KMS to manage encryption keys in AWS. The new KMS custom key store feature is intended to satisfy that requirement. You can now apply the controls provided by CloudHSM to keys managed in KMS, without changing access control policies or service integration.

However, by using the new feature, you take responsibility for certain operational aspects that would otherwise be handled by KMS. It’s important that you have the appropriate controls in place and understand the performance and availability requirements of each key that you create in a custom key store.

If you’ve been prevented from migrating sensitive data to AWS because of specific key management requirements that are currently not met by KMS, consider using the new KMS custom key store feature.

If you have feedback about this blog post, submit comments in the Comments section below. If you have questions about this blog post, start a new thread on the AWS Key Management Service discussion forum.

Want more AWS Security news? Follow us on Twitter.

Author

Richard Moulds

Richard is a Principal Product Manager at AWS. He’s a member of the KMS team and is primarily focused on helping to define the product roadmap and satisfy customer requirements. Richard has more than 15 years experience in helping customers build encryption and key management systems to protect their data. His attraction to cryptography stems from the challenge of taking such a complex subject and translating it into simple solutions that customers should be able to take for granted, on a grand scale. When he’s not thinking ahead he’s focused on the past, restoring classic cars, the more rust the better.

Security of Solid-State-Drive Encryption

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2018/11/security_of_sol.html

Interesting research: “Self-encrypting deception: weaknesses in the encryption of solid state drives (SSDs)”:

Abstract: We have analyzed the hardware full-disk encryption of several SSDs by reverse engineering their firmware. In theory, the security guarantees offered by hardware encryption are similar to or better than software implementations. In reality, we found that many hardware implementations have critical security weaknesses, for many models allowing for complete recovery of the data without knowledge of any secret. BitLocker, the encryption software built into Microsoft Windows, will rely exclusively on hardware full-disk encryption if the SSD advertises support for it. Thus, for these drives, data protected by BitLocker is also compromised. This challenges the view that hardware encryption is preferable over software encryption. We conclude that one should not rely solely on hardware encryption offered by SSDs.

EDITED TO ADD: The NSA is known to attack firmware of SSDs.

EDITED TO ADD (11/13): CERT advisory. And older research.

Security Vulnerabilities in US Weapons Systems

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2018/10/security_vulner_17.html

The US Government Accountability Office just published a new report: “Weapons Systems Cyber Security: DOD Just Beginning to Grapple with Scale of Vulnerabilities” (summary here). The upshot won’t be a surprise to any of my regular readers: they’re vulnerable.

From the summary:

Automation and connectivity are fundamental enablers of DOD’s modern military capabilities. However, they make weapon systems more vulnerable to cyber attacks. Although GAO and others have warned of cyber risks for decades, until recently, DOD did not prioritize weapon systems cybersecurity. Finally, DOD is still determining how best to address weapon systems cybersecurity.

In operational testing, DOD routinely found mission-critical cyber vulnerabilities in systems that were under development, yet program officials GAO met with believed their systems were secure and discounted some test results as unrealistic. Using relatively simple tools and techniques, testers were able to take control of systems and largely operate undetected, due in part to basic issues such as poor password management and unencrypted communications. In addition, vulnerabilities that DOD is aware of likely represent a fraction of total vulnerabilities due to testing limitations. For example, not all programs have been tested and tests do not reflect the full range of threats.

It is definitely easier, and cheaper, to ignore the problem or pretend it isn’t a big deal. But that’s probably a mistake in the long run.

More on the Five Eyes Statement on Encryption and Backdoors

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2018/10/more_on_the_fiv.html

Earlier this month, I wrote about a statement by the Five Eyes countries about encryption and back doors. (Short summary: they like them.) One of the weird things about the statement is that it was clearly written from a law-enforcement perspective, though we normally think of the Five Eyes as a consortium of intelligence agencies.

Susan Landau examines the details of the statement, explains what’s going on, and why the statement is a lot less than what it might seem.

Quantum Computing and Cryptography

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2018/09/quantum_computi_2.html

Quantum computing is a new way of computing — one that could allow humankind to perform computations that are simply impossible using today’s computing technologies. It allows for very fast searching, something that would break some of the encryption algorithms we use today. And it allows us to easily factor large numbers, something that would break the RSA cryptosystem for any key length.

This is why cryptographers are hard at work designing and analyzing “quantum-resistant” public-key algorithms. Currently, quantum computing is too nascent for cryptographers to be sure of what is secure and what isn’t. But even assuming aliens have developed the technology to its full potential, quantum computing doesn’t spell the end of the world for cryptography. Symmetric cryptography is easy to make quantum-resistant, and we’re working on quantum-resistant public-key algorithms. If public-key cryptography ends up being a temporary anomaly based on our mathematical knowledge and computational ability, we’ll still survive. And if some inconceivable alien technology can break all of cryptography, we still can have secrecy based on information theory — albeit with significant loss of capability.

At its core, cryptography relies on the mathematical quirk that some things are easier to do than to undo. Just as it’s easier to smash a plate than to glue all the pieces back together, it’s much easier to multiply two prime numbers together to obtain one large number than it is to factor that large number back into two prime numbers. Asymmetries of this kind — one-way functions and trap-door one-way functions — underlie all of cryptography.

To encrypt a message, we combine it with a key to form ciphertext. Without the key, reversing the process is more difficult. Not just a little more difficult, but astronomically more difficult. Modern encryption algorithms are so fast that they can secure your entire hard drive without any noticeable slowdown, but that encryption can’t be broken before the heat death of the universe.

With symmetric cryptography — the kind used to encrypt messages, files, and drives — that imbalance is exponential, and is amplified as the keys get larger. Adding one bit of key increases the complexity of encryption by less than a percent (I’m hand-waving here) but doubles the cost to break. So a 256-bit key might seem only twice as complex as a 128-bit key, but (with our current knowledge of mathematics) it’s 340,282,366,920,938,463,463,374,607,431,768,211,456 times harder to break.

Public-key encryption (used primarily for key exchange) and digital signatures are more complicated. Because they rely on hard mathematical problems like factoring, there are more potential tricks to reverse them. So you’ll see key lengths of 2,048 bits for RSA, and 384 bits for algorithms based on elliptic curves. Here again, though, the costs to reverse the algorithms with these key lengths are beyond the current reach of humankind.

This one-wayness is based on our mathematical knowledge. When you hear about a cryptographer “breaking” an algorithm, what happened is that they’ve found a new trick that makes reversing easier. Cryptographers discover new tricks all the time, which is why we tend to use key lengths that are longer than strictly necessary. This is true for both symmetric and public-key algorithms; we’re trying to future-proof them.

Quantum computers promise to upend a lot of this. Because of the way they work, they excel at the sorts of computations necessary to reverse these one-way functions. For symmetric cryptography, this isn’t too bad. Grover’s algorithm shows that a quantum computer speeds up these attacks to effectively halve the key length. This would mean that a 256-bit key is as strong against a quantum computer as a 128-bit key is against a conventional computer; both are secure for the foreseeable future.
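To make that halving concrete, here is the standard complexity comparison (a textbook result, not specific to this essay):

```latex
% Classical exhaustive search over an n-bit keyspace vs. Grover search:
T_{\text{classical}} = O\!\left(2^{n}\right), \qquad
T_{\text{Grover}} = O\!\left(\sqrt{2^{n}}\right) = O\!\left(2^{n/2}\right)
% A 256-bit key therefore retains about 128 bits of effective security
% against a quantum attacker, matching a 128-bit key classically.
```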

For public-key cryptography, the results are more dire. Shor’s algorithm can easily break all of the commonly used public-key algorithms based on both factoring and the discrete logarithm problem. Doubling the key length increases the difficulty to break by a factor of eight. That’s not enough of a sustainable edge.

There are a lot of caveats to those two paragraphs, the biggest of which is that quantum computers capable of doing anything like this don’t currently exist, and no one knows when — or even if — we’ll be able to build one. We also don’t know what sorts of practical difficulties will arise when we try to implement Grover’s or Shor’s algorithms for anything but toy key sizes. (Error correction on a quantum computer could easily be an unsurmountable problem.) On the other hand, we don’t know what other techniques will be discovered once people start working with actual quantum computers. My bet is that we will overcome the engineering challenges, and that there will be many advances and new techniques — but they’re going to take time to discover and invent. Just as it took decades for us to get supercomputers in our pockets, it will take decades to work through all the engineering problems necessary to build large-enough quantum computers.

In the short term, cryptographers are putting considerable effort into designing and analyzing quantum-resistant algorithms, and those are likely to remain secure for decades. This is a necessarily slow process, as both good cryptanalysis and transitioning standards take time. Luckily, we have time. Practical quantum computing seems to always remain “ten years in the future,” which means no one has any idea.

After that, though, there is always the possibility that those algorithms will fall to aliens with better quantum techniques. I am less worried about symmetric cryptography, where Grover’s algorithm is basically an upper limit on quantum improvements, than I am about public-key algorithms based on number theory, which feel more fragile. It’s possible that quantum computers will someday break all of them, even those that today are quantum resistant.

If that happens, we will face a world without strong public-key cryptography. That would be a huge blow to security and would break a lot of stuff we currently do, but we could adapt. In the 1980s, Kerberos was an all-symmetric authentication and encryption system. More recently, the GSM cellular standard does both authentication and key distribution — at scale — with only symmetric cryptography. Yes, those systems have centralized points of trust and failure, but it’s possible to design other systems that use both secret splitting and secret sharing to minimize that risk. (Imagine that a pair of communicants get a piece of their session key from each of five different key servers.) The ubiquity of communications also makes things easier today. We can use out-of-band protocols where, for example, your phone helps you create a key for your computer. We can use in-person registration for added security, maybe at the store where you buy your smartphone or initialize your Internet service. Advances in hardware may also help to secure keys in this world. I’m not trying to design anything here, only to point out that there are many design possibilities. We know that cryptography is all about trust, and we have a lot more techniques to manage trust than we did in the early years of the Internet. Some important properties like forward secrecy will be blunted and far more complex, but as long as symmetric cryptography still works, we’ll still have security.

It’s a weird future. Maybe the whole idea of number-theory-based encryption, which is what our modern public-key systems are, is a temporary detour based on our incomplete model of computing. Now that our model has expanded to include quantum computing, we might end up back to where we were in the late 1970s and early 1980s: symmetric cryptography, code-based cryptography, Merkle hash signatures. That would be both amusing and ironic.

Yes, I know that quantum key distribution is a potential replacement for public-key cryptography. But come on — does anyone expect a system that requires specialized communications hardware and cables to be useful for anything but niche applications? The future is mobile, always-on, embedded computing devices. Any security for those will necessarily be software only.

There’s one more future scenario to consider, one that doesn’t require a quantum computer. While there are several mathematical theories that underpin the one-wayness we use in cryptography, proving the validity of those theories is in fact one of the great open problems in computer science. Just as it is possible for a smart cryptographer to find a new trick that makes it easier to break a particular algorithm, we might imagine aliens with sufficient mathematical theory to break all encryption algorithms. To us, today, this is ridiculous. Public-key cryptography is all number theory, and potentially vulnerable to more mathematically inclined aliens. Symmetric cryptography is so much nonlinear muddle, so easy to make more complex, and so easy to increase key length, that this future is unimaginable. Consider an AES variant with a 512-bit block and key size, and 128 rounds. Unless mathematics is fundamentally different than our current understanding, that’ll be secure until computers are made of something other than matter and occupy something other than space.

But if the unimaginable happens, that would leave us with cryptography based solely on information theory: one-time pads and their variants. This would be a huge blow to security. One-time pads might be theoretically secure, but in practical terms they are unusable for anything other than specialized niche applications. Today, only crackpots try to build general-use systems based on one-time pads — and cryptographers laugh at them, because they replace algorithm design problems (easy) with key management and physical security problems (much, much harder). In our alien-ridden science-fiction future, we might have nothing else.

Against these godlike aliens, cryptography will be the only technology we can be sure of. Our nukes might refuse to detonate and our fighter jets might fall out of the sky, but we will still be able to communicate securely using one-time pads. There’s an optimism in that.

This essay originally appeared in IEEE Security and Privacy.

GCHQ on Quantum Key Distribution

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2018/08/gchq_on_quantum.html

The UK’s GCHQ delivers a brutally blunt assessment of quantum key distribution:

QKD protocols address only the problem of agreeing keys for encrypting data. Ubiquitous on-demand modern services (such as verifying identities and data integrity, establishing network sessions, providing access control, and automatic software updates) rely more on authentication and integrity mechanisms — such as digital signatures — than on encryption.

QKD technology cannot replace the flexible authentication mechanisms provided by contemporary public key signatures. QKD also seems unsuitable for some of the grand future challenges such as securing the Internet of Things (IoT), big data, social media, or cloud applications.

I agree with them. It’s a clever idea, but basically useless in practice. I don’t even think it’s anything more than a niche solution in a world where quantum computers have broken our traditional public-key algorithms.

Read the whole thing. It’s short.

Amazon ElastiCache for Redis now PCI DSS compliant, allowing you to process sensitive payment card data in-memory for faster performance

Post Syndicated from Manan Goel original https://aws.amazon.com/blogs/security/amazon-elasticache-redis-now-pci-dss-compliant-payment-card-data-in-memory/

Amazon ElastiCache for Redis has achieved the Payment Card Industry Data Security Standard (PCI DSS). This means that you can now use ElastiCache for Redis for low-latency and high-throughput in-memory processing of sensitive payment card data, such as Customer Cardholder Data (CHD). ElastiCache for Redis is a Redis-compatible, fully-managed, in-memory data store and caching service in the cloud. It delivers sub-millisecond response times with millions of requests per second.

To create a PCI DSS compliant ElastiCache for Redis cluster, you must use Redis engine version 4.0.10 or later and current-generation node types. The service offers various data security controls to store, process, and transmit sensitive financial data. These controls include in-transit encryption (TLS), at-rest encryption, and Redis AUTH. There’s no additional charge for PCI DSS compliant ElastiCache for Redis.
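As a sketch of how those controls are enabled at cluster creation, here is a boto3 call with in-transit encryption, at-rest encryption, and Redis AUTH turned on. The group name, node type, and token are placeholders.

```python
import boto3

elasticache = boto3.client("elasticache")

elasticache.create_replication_group(
    ReplicationGroupId="payments-cache",           # hypothetical name
    ReplicationGroupDescription="PCI-scoped in-memory cache",
    Engine="redis",
    EngineVersion="4.0.10",                        # minimum for PCI DSS
    CacheNodeType="cache.m5.large",                # current generation
    TransitEncryptionEnabled=True,                 # TLS in transit
    AtRestEncryptionEnabled=True,                  # encryption at rest
    AuthToken="example-strong-auth-token-1234",    # Redis AUTH
)
```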

In addition to PCI, ElastiCache for Redis is a HIPAA eligible service. If you want to use your existing Redis clusters that process healthcare information to also process financial information while meeting PCI requirements, you must upgrade your Redis clusters from 3.2.6 to 4.0.10. For more details, see Upgrading Engine Versions and ElastiCache for Redis Compliance.

Meeting these high bars for security and compliance means ElastiCache for Redis can be used for secure database and application caching, session management, queues, chat/messaging, and streaming analytics in industries as diverse as financial services, gaming, retail, e-commerce, and healthcare. For example, you can use ElastiCache for Redis to build an internet-scale, ride-hailing application and add digital wallets that store customer payment card numbers, thus enabling people to perform financial transactions securely and at industry standards.

To get started, see ElastiCache for Redis Compliance Documentation.

Want more AWS Security news? Follow us on Twitter.