Tag Archives: rsa

Security Is Shifting in a Cloud-Native World: Insights From RSAC 2022

Post Syndicated from Jesse Mack original https://blog.rapid7.com/2022/06/16/security-is-shifting-in-a-cloud-native-world-insights-from-rsac-2022/


The cloud has become the default for IT infrastructure and resource delivery, allowing an unprecedented level of speed and flexibility for development and production pipelines. This helps organizations compete and innovate in a fast-paced business environment. But as the cloud becomes more ingrained, the ephemeral nature of cloud infrastructure is presenting new challenges for security teams.

Several talks by our Rapid7 presenters at this year’s RSA Conference touched on this theme. Here’s a closer look at what our RSAC 2022 presenters had to say about adapting security processes to a cloud-native world.

A complex picture

As Lee Weiner, SVP Cloud Security and Chief Innovation Officer, pointed out in his RSA briefing, “Context Is King: The Future of Cloud Security,” cloud adoption is not only increasing — it’s growing more complex. Many organizations are bringing on multiple cloud vendors to meet a variety of different needs. One report estimates that a whopping 89% of companies that have adopted the cloud have chosen a multicloud approach.

This model is so popular because of the flexibility it offers organizations to utilize the right technology, in the right cloud environment, at the right cost — a key advantage in today’s marketplace.

“Over the last decade or so, many organizations have been going through a transformation to put themselves in a position to use the scale and speed of the cloud as a strategic business advantage,” Jane Man, Director of Product Management for VRM, said in her RSA Lounge presentation, “Adapting Your Vulnerability Management Program for Cloud-Native Environments.”

While DevOps teams can move more quickly than ever before with this model, security pros face a more complex set of questions than with traditional infrastructure, Lee noted. How many of our instances are exposed to known vulnerabilities? Do they have proper identity and access management (IAM) controls established? What levels of access do those permissions actually grant users in our key applications?

New infrastructure, new demands

The core components of vulnerability management remain the same in cloud environments, Jane said in her talk. Security teams must:

  • Get visibility into all assets, resources, and services
  • Assess, prioritize, and remediate risks
  • Communicate the organization’s security and compliance posture to management

But because of the ephemeral nature of the cloud, the way teams go about completing these requirements is shifting.

“Running a scheduled scan, waiting for it to complete and then handing a report to IT doesn’t work when instances may be spinning up and down on a daily or hourly basis,” she said.

In his presentation, Lee expressed optimism that the cloud itself may help provide the new methods we need for cloud-native security.

“Because of the way cloud infrastructure is built and deployed, there’s a real opportunity to answer these questions far faster, far more efficiently, far more effectively than we could with traditional infrastructure,” he said.

Calling for context

For Lee, the goal is to enable secure adoption of cloud technologies so companies can accelerate and innovate at scale. But there’s a key element needed to achieve this vision: context.

What often prevents teams from fully understanding the context around their security data is the fact that it is siloed, and the lack of integration between disparate systems requires a high level of manual effort to put the pieces together. To really get a clear picture of risk, security teams need to be able to bring their data together with context from each layer of the environment.

But what does context actually look like in practice, and how do you achieve it? Jane laid out a few key strategies for understanding the context around security data in your cloud environment.

  • Broaden your scope: Set up your VM processes so that you can detect more than just vulnerabilities in the cloud — you want to be able to see misconfigurations and issues with IAM permissions, too.
  • Understand the environment: When you identify a vulnerable instance, identify if it is publicly accessible and what its business application is — this will help you determine the scope of the vulnerability.
  • Catch early: Aim to find and fix vulnerabilities before they reach production by shifting security left, earlier in the development cycle.

4 best practices for context-driven cloud security

Once you’re able to better understand the context around security data in your environment, how do you fit those insights into a holistic cloud security strategy? For Lee, this comes down to four key components that make up the framework for cloud-native security.

1. Visibility and findings

You can’t secure what you can’t see — so the first step in this process is to take a full inventory of your attack surface. With different kinds of cloud resources in place and providers releasing new services frequently, understanding the security posture of these pieces of your infrastructure is critical. This includes understanding not just vulnerabilities and misconfigurations but also access, permissions, and identities.

“Understanding the layer from the infrastructure to the workload to the identity can provide a lot of confidence,” Lee said.

2. Contextual prioritization

Not everything you discover in this inventory will be of equal importance, and treating it all the same way just isn’t practical or feasible. The vast amount of data that companies collect today can easily overwhelm security analysts — and this is where context really comes in.

With integrated visibility across your cloud infrastructure, you can make smarter decisions about what risks to prioritize. Then, you can assign ownership to resource owners and help them understand how those priorities were identified, improving transparency and promoting trust.

3. Prevent and automate

The cloud is built with automation in mind through Infrastructure as Code — and this plays a key role in security. Automation can help boost efficiency by minimizing the time it takes to detect, remediate, or contain threats. A shift-left strategy can also help with prevention by building security into deployment pipelines, so production teams can identify vulnerabilities earlier.

Jane echoed this sentiment in her talk, recommending that companies “automate to enable — but not force — remediation” and use tagging to drive remediation of vulnerabilities found running in production.

4. Runtime monitoring

The next step is to continually monitor the environment for vulnerabilities and threat activity — and as you might have guessed, monitoring looks a little different in the cloud. For Lee, it’s about leveraging the increased number of signals to understand if there’s any drift away from the way the service was originally configured.

He also recommended using behavioral analysis to detect threat activity and setting up purpose-built detections that are specific to cloud infrastructure. This will help ensure the security operations center (SOC) has the most relevant information possible, so they can perform more effective investigations.

Lee stressed that in order to carry out the core components of cloud security and achieve the outcomes companies are looking for, having an integrated ecosystem is absolutely essential. This will help prevent data from becoming siloed, enable security pros to obtain that ever-important context around their data, and let teams collaborate with less friction.

Looking for more insights on how to adapt your security program to a cloud-native world? Check out Lee’s presentation on demand, or watch our replays of Rapid7 speakers’ sessions from RSAC 2022.


Defending Against Tomorrow’s Threats: Insights From RSAC 2022

Post Syndicated from Jesse Mack original https://blog.rapid7.com/2022/06/13/defending-against-tomorrows-threats-insights-from-rsac-2022/


The rapidly changing pace of the cyberthreat landscape is on every security pro’s mind. Not only do organizations need to secure complex cloud environments, they’re also more aware than ever that their software supply chains and open-source elements of their application codebase might not be as ironclad as they thought.

It should come as no surprise, then, that defending against a new slate of emerging threats was a major theme at RSAC 2022. Here’s a closer look at what some Rapid7 experts who presented at this year’s RSA conference in San Francisco had to say about staying ahead of attackers in the months to come.

Surveying the threat landscape

Security practitioners often turn to Twitter for the latest news and insights from peers. As Raj Samani, SVP and Chief Data Scientist, and Lead Security Researcher Spencer McIntyre pointed out in their RSA talk, “Into the Wild: Exploring Today’s Top Threats,” the trend holds true when it comes to emerging threats.

“For many people, identifying threats is actually done through somebody that I follow on Twitter posting details about a particular vulnerability,” said Raj.

As Spencer noted, security teams need to be able to filter all these inputs and identify the actual priorities that require immediate patching and remediation. And that’s where the difficulty comes in.

“How do you manage a patching strategy when there are critical vulnerabilities coming out … it seems weekly?” Raj asked. “Criminals are exploiting these vulnerabilities literally in days, if that,” he continued.

Indeed, the average time to exploit — i.e., the interval between a vulnerability being discovered by researchers and clear evidence of attackers using it in the wild — plummeted from 42 days in 2020 to 17 days in 2021, as noted in Rapid7’s latest Vulnerability Intelligence Report. With so many threats emerging at a rapid clip and so little time to react, defenders need the tools and expertise to understand which vulnerabilities to prioritize and how attackers are exploiting them.

“Unless we get a degree of context and an understanding of what’s happening, we’re going to end up ignoring many of these vulnerabilities because we’ve just got other things to worry about,” said Raj.

The evolving threat of ransomware

One of the things that worries security analysts, of course, is ransomware — and as the threat has grown in size and scope, the ransomware market itself has changed. Cybercriminals are leveraging this attack vector in new ways, and defenders need to adapt their strategies accordingly.

That was the theme that Erick Galinkin, Principal AI Researcher, covered in his RSA talk, “How to Pivot Fast and Defend Against Ransomware.” Erick identified four emerging ransomware trends that defenders need to be aware of:

  • Double extortion: In this type of attack, threat actors not only demand a ransom for the data they’ve stolen and encrypted but also extort organizations for a second time — pay an additional fee, or they’ll leak the data. This means that even if you have backups of your data, you’re still at risk from this secondary ransomware tactic.
  • Ransomware as a service (RaaS): Not all threat actors know how to write highly effective ransomware. With RaaS, they can simply purchase malicious software from a provider, who takes a cut of the payout. The result is a broader and more decentralized network of ransomware attackers.
  • Access brokers: A kind of mirror image to RaaS, access brokers give a leg up to bad actors who want to run ransomware on an organization’s systems but need an initial point of entry. Now, that access is for sale in the form of phished credentials, cracked passwords, or leaked data.
  • Lateral movement: Once a ransomware attacker has infiltrated an organization’s network, they can use lateral movement techniques to gain a higher level of access and ransom the most sensitive, high-value data they can find.

With the ransomware threat growing by the day and attackers’ techniques growing more sophisticated, security pros need to adapt to the new landscape. Here are a few of the strategies Erick recommended for defending against these new ransomware tactics.

  • Continue to back up all your data, and protect the most sensitive data with strong admin controls.
  • Don’t get complacent about credential theft — the spoils of a seemingly minor phishing attack could be sold by an access broker as an entry point for ransomware.
  • Implement the principle of least privilege, so only administrator accounts can perform administrator functions — this will help make lateral movement easier to detect.

Shaping a new kind of SOC

With so much changing in the threat landscape, how should the security operations center (SOC) respond?

This was the focus of “Future Proofing the SOC: A CISO’s Perspective,” the RSA talk from Jeffrey Gardner, Practice Advisor for Detection and Response (D&R). In addition to the sprawling attack surface, security analysts are also experiencing a high degree of burnout, understandably overwhelmed by the sheer volume of alerts and threats. To alleviate some of the pressure, SOC teams need a few key things.

For Jeffrey, these needs are best met through a hybrid SOC model — one that combines internally owned SOC resources and staff with external capabilities offered through a provider, for a best-of-both-worlds approach. The framework for this approach is already in place, but the version that Jeffrey and others at Rapid7 envision involves some shifting of paradigms. These include:

  • Collapsing the distinction between product and service and moving toward “everything as a service,” with a unified platform that allows resources — which include everything from in-product features to provider expertise and guidance — to be delivered at a sliding scale
  • Ensuring full transparency, so the organization understands not only what’s going on in their own SOC but also in their provider’s, through the use of shared solutions
  • More customization, with workflows, escalations, and deliverables tailored to the customer’s needs

Meeting the moment

It’s critical to stay up to date with the most current vulnerabilities we’re seeing and the ways attackers are exploiting them — but to be truly valuable, those insights must translate into action. Defenders need strategies tailored to the realities of today’s threat landscape.

For our RSA 2022 presenters, that might mean going back to basics with consistent data backups and strong admin controls. Or it might mean going bold by fully reimagining the modern SOC. The techniques don’t have to be new or fancy to be effective — they simply have to meet the moment. (Although if the right tactics turn out to be big and game-changing, we’ll be as excited as the next security pro.)

Looking for more insights on how defenders can protect their organizations amid today’s highly dynamic threat landscape? You can watch these presentations — and even more from our Rapid7 speakers — at our library of replays from RSAC 2022.


[VIDEO] An Inside Look at the RSA 2022 Experience From the Rapid7 Team​

Post Syndicated from Jesse Mack original https://blog.rapid7.com/2022/06/10/video-an-inside-look-at-the-rsa-2022-experience-from-the-rapid7-team/


The two years since the last RSA Conference have been pretty uneventful. Sure, COVID-19 sent us all to work from home for a little while, but it’s not as though we’ve seen any supply-chain-shattering breaches, headline-grabbing ransomware attacks, internet-inferno vulnerabilities, or anything like that. We’ve mostly just been baking sourdough bread and doing woodworking in between Zoom meetings.

OK, just kidding on basically all of that (although I, for one, have continued to hone my sourdough game). ​

The reality has been quite the opposite. Whether it’s because an unprecedented number of crazy things have happened since March 2020 or because pandemic-era uncertainty has made all of our experiences feel a little more heightened, the past 24 months have been a lot. And now that restrictions on gatherings are largely lifted in most places, many of us are feeling like we need a chance to get together and debrief on what we’ve all been through.

Given that context, what better timing could there have been for RSAC 2022? This past week, a crew of Rapid7 team members gathered in San Francisco to sync up with the greater cybersecurity community and take stock of how we can all stay ahead of attackers and ready for the future in the months to come. We asked four of them — Jeffrey Gardner, Practice Advisor – Detection & Response; Tod Beardsley, Director of Research; Kelly Allen, Social Media Manager; and Erick Galinkin, Principal Artificial Intelligence Researcher — to tell us a little bit about their RSAC 2022 experience. Here’s a look at what they had to say — and a glimpse into the excitement and energy of this year’s RSA Conference.

What’s it been like returning to full-scale in-person events after 2 years?

What was your favorite session or speaker of the week? What made them stand out?

What was your biggest takeaway from the conference? How will it shape the way you think about and practice cybersecurity in the months to come?

Want to relive the RSA experience for yourself? Check out our replays of Rapid7 speakers’ sessions from the week.


How to use AWS KMS RSA keys for offline encryption

Post Syndicated from Patrick Palmer original https://aws.amazon.com/blogs/security/how-to-use-aws-kms-rsa-keys-for-offline-encryption/

This blog post discusses how you can use AWS Key Management Service (AWS KMS) RSA public keys on end clients or devices and encrypt data, then subsequently decrypt data by using private keys that are secured in AWS KMS.

Asymmetric cryptography is a cryptographic system that uses key pairs. Each pair consists of a public key, which can be seen or accessed by anyone, and a private key, which can be accessed only by authorized people. This system has a useful property, which is that anything encrypted with a public key can only be decrypted by the corresponding private key. A popular method for generating key pairs and encrypting data is the RSA algorithm and cryptosystem.

For RSA key pairs, calculating the private key from the public key is seen as computationally infeasible, and therefore RSA key pairs can be used for both authentication and encryption. The features of asymmetric encryption allow separated parties to share information across an untrusted domain, such as the internet, without having to pre-share any other secrets. However, this type of encryption poses an issue of keeping the private key secure, because the private key has the power to decrypt all messages that are transmitted by a large number of end users.

AWS KMS provides simple APIs that you can use to securely generate, store, and manage keys, including RSA key pairs inside hardware security modules (HSMs). Key pairs are generated within FIPS 140-2 validated HSMs that are managed by AWS. You can then use these private keys through APIs to do actions such as decrypt ciphertexts, meaning that plaintext private keys never leave the HSM, which provides assurances of privacy for the private key. Additional APIs allow a customer to retrieve a plaintext copy of the corresponding public key, which allows disconnected or offline uses of RSA public keys.

Limits of asymmetric cryptography

A key drawback to asymmetric cryptography is the fact that you cannot encrypt large pieces of data. When you have a 2048-bit RSA key pair and encrypt something by using the cipher RSAES_OAEP_SHA_256, the largest amount of data that you can encrypt is 190 bytes.
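
This limit follows from the OAEP padding overhead: the maximum plaintext length is k - 2*hLen - 2 bytes, where k is the modulus length in bytes and hLen is the output length of the chosen hash. With a 2048-bit key (k = 256) and SHA-256 (hLen = 32), that works out to 256 - 64 - 2 = 190 bytes.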

In contrast, symmetric encryption ciphers that use a chained or counter-mode operation don’t have this limit, and they make it possible for you to encrypt data in the tens-of-gigabytes. Symmetric encryption algorithms such as the Advanced Encryption Standard (AES) also benefit from faster data encryption speeds due to smaller key sizes and less complex operations that can be built into hardware.

By combining these two algorithms in a hybrid cryptosystem, you give end clients with a public key the ability to encrypt large pieces of information. A client generates a random 256-bit AES key, which should be from a secure source such as /dev/urandom or a dedicated embedded chip. The client then encrypts its large payload with that 256-bit AES key, using a mode of operation such as AES-GCM or AES-CBC. Next, the client encrypts that 256-bit AES key by using the RSA public key (see step 5 in Figure 1). End clients then transmit only encrypted data across insecure channels, maintaining privacy of the payload data.
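
As an illustration of that flow, here is a minimal client-side sketch in Python. It assumes the third-party cryptography package is available and that a DER-encoded copy of the KMS public key has already been saved locally; the file names are placeholders that mirror the walkthrough below, which performs the same steps with OpenSSL and the AWS CLI.

import os
from cryptography.hazmat.primitives import hashes, serialization
from cryptography.hazmat.primitives.asymmetric import padding
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

# Load the RSA public key previously retrieved from AWS KMS (DER-encoded X.509).
with open("public_key.der", "rb") as f:
    rsa_public_key = serialization.load_der_public_key(f.read())

# Generate a random 256-bit AES key and encrypt the large payload with AES-GCM.
aes_key = AESGCM.generate_key(bit_length=256)
nonce = os.urandom(12)
with open("encrypt.me", "rb") as f:
    payload_ciphertext = AESGCM(aes_key).encrypt(nonce, f.read(), None)

# Encrypt the AES key with the RSA public key, using OAEP with SHA-256 so that
# AWS KMS can decrypt it later with the RSAES_OAEP_SHA_256 algorithm.
encrypted_aes_key = rsa_public_key.encrypt(
    aes_key,
    padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                 algorithm=hashes.SHA256(), label=None),
)

# Only payload_ciphertext, nonce, and encrypted_aes_key leave the device.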

A challenge that customers often face is that they want to use AWS KMS for its security properties, but also want to access their KMS keys from devices that don’t have AWS credentials embedded within them. Without AWS credentials, a device can’t call AWS APIs. This blog post shows how you can use a hybrid cryptosystem where RSA public keys can be downloaded or embedded into devices to overcome this challenge.

Prerequisites and initial considerations

This walkthrough assumes that you have some understanding of RSA ciphers and symmetric encryption schemes such as AES. The walkthrough uses OpenSSL for demonstration of the encryption process, but similar libraries can be used on a client-side device.

The walkthrough also assumes that you have an AWS Identity and Access Management (IAM) user with permissions to the AWS KMS service, and the AWS Command Line Interface (AWS CLI) installed with the relevant credentials.

When you create a KMS key, you will also generate a key policy that defines access to it. The default key policy allows all users in your account with AWS KMS actions in their IAM policies to access the KMS key. The key policy for a given KMS key is the primary method for determining access.

Important: You will incur charges for the services used in this example. You can find the cost of each service on the corresponding service pricing page. For more information, see AWS KMS Pricing.

Architectural overview

This post contains procedures for completing the following operations, which are also shown in Figure 1:

  1. Create an RSA key pair in AWS KMS.
  2. Download or pre-install the AWS KMS public key to an end-client device.
  3. Generate an AES 256-bit key on an end client.
  4. Encrypt a large payload of data on the end client by using the AES 256-bit key.
  5. Encrypt the AES 256-bit key with the AWS KMS public key.
  6. Transfer the encrypted payload and key.
  7. Decrypt the AES 256-bit key by using AWS KMS.
  8. Decrypt the payload data by using the now-shared AES 256-bit key.
Figure 1: The steps for hybrid encryption


This diagram shows an end client device, an untrusted network such as a cellular network, and the AWS Cloud. An RSA key pair is generated in AWS KMS, and then the public key can either be embedded in the end client, or pulled by the end client through HTTP(S) or other remote means. In all circumstances, only the public key persists on the end client, which means that no secrets are stored on the device.

How you host the public key on your end clients depends on what network access they have. For example, an embedded Internet of Things (IoT) device for mining vehicles might never connect to the internet, but could communicate with a central system through a private 5G network. In this circumstance, you would host this public key within that network for retrieval. For other disconnected IoT devices that can connect to the internet, such as smart-home appliances, you might want to host the public key on a web server at a predefined URL or through an API.

Note: Whenever you vend public keys over an untrusted channel, such as when you vend the public key through an API, you should make sure that the key can be verified in some way to confirm that it hasn’t been tampered with. This is typically done by vending keys over an HTTPS connection, where the integrity of the keys is provided by the X.509 certificate that was used in the TLS connection. The X.509 certificate also verifies an association with the key-pair owner, typically by domain name.
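
As a sketch of what retrieval over HTTPS might look like on a connected client, the following Python snippet downloads a DER-encoded public key and relies on the TLS connection for integrity in transit. The URL is hypothetical, and the requests and cryptography packages are assumed to be available.

import requests
from cryptography.hazmat.primitives import serialization

# Hypothetical endpoint hosting the public key; TLS protects the key's integrity in transit.
resp = requests.get("https://keys.example.com/example-rsa-key.der", timeout=10)
resp.raise_for_status()
public_key = serialization.load_der_public_key(resp.content)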

Implement the solution

The following steps can be used as a proof-of-concept to guide you through implementing a hybrid cryptosystem by using a KMS public key on an example device.

Create keys in AWS KMS

In the first step of this solution, you create an RSA asymmetric key pair in AWS KMS (step 1 in the architectural overview). With AWS KMS, you can create key pairs in a variety of dimensions according to your security requirements or standards. For more information, see Choosing a KMS key type in the AWS KMS documentation.

To create a key pair in AWS KMS, use the CreateKey API. For this example, you will create an RSA key pair with RSA_2048 for the KeySpec parameter and ENCRYPT_DECRYPT for the KeyUsage parameter in the AWS CLI. This post uses 2048-bit keys, but note that AWS KMS allows larger key sizes. The CLI will return a KeyId value that uniquely identifies the KMS key in your account, which you should take note of.

To create a KMS key by using the CLI

  • Enter the following command in the AWS CLI.
    aws kms create-key --key-spec RSA_2048 \
        --key-usage ENCRYPT_DECRYPT \
        --description "Example RSA Encryption Key Pair"

You can follow the Creating asymmetric KMS keys documentation to see how to use the AWS Management Console to create a KMS key pair with the same properties as shown here.

Note: When a KMS key is created, it will be logged by AWS CloudTrail, a service that monitors and records activity within your account. All API calls to the AWS KMS service are logged in CloudTrail, which you can use to audit access to KMS keys.

To allow your KMS key to be identified by a human-readable string rather than KeyId, you can assign an alias for the KMS key (replace the target-key-id value of <1234abcd-12ab-34cd-56ef-1234567890ab> with your KeyId). This makes it easier to use and manage.

To create a KMS key alias for your key by using the CLI

  • Enter the following command in the AWS CLI.
    aws kms create-alias \
        --alias-name alias/example-rsa-key \
        --target-key-id <1234abcd-12ab-34cd-56ef-1234567890ab>
    

Download the public key from AWS KMS

A benefit of asymmetric encryption is that you can distribute a public key to a large, untrusted network, and the public key can only be used for encryption. Decryption of those messages can only be conducted by the corresponding private key. You can use the AWS KMS Encrypt API to encrypt data with a KMS key pair (specifically the public key). However, because the AWS APIs are authenticated by using a signature, you must have access to AWS credentials to use these APIs, which you might not want to do on untrusted devices. Additionally, in a private 5G network, you might not have the capability to call the AWS KMS API endpoints from the end clients. Instead, you can download the public key from a local source or embed that into the end client at the time of manufacture.

To retrieve a copy of the public key from your AWS KMS key pair, you can use the GetPublicKey API. The following example shows how to use this with the AWS CLI command get-public-key and reference the key alias you set earlier.

To view the public key for your KMS key pair by using the CLI

  • Enter the following command in the AWS CLI.
    aws kms get-public-key --key-id alias/example-rsa-key

The return value from this API will contain several elements, including the PublicKey. The returned PublicKey value is the DER-encoded X.509 public key, and because you’re using the AWS CLI, it is base64-encoded for readability purposes. By using the AWS CLI, you can query just the PublicKey return value, base64-decode it, and then save the key to a file on disk, as follows.

To use the AWS CLI to query only the public key, then base64 decode it and output it to a file

  • Enter the following command in the AWS CLI.
    aws kms get-public-key \
    --key-id alias/example-rsa-key \ 
    --output text \ 
    --query PublicKey | base64 --decode > public_key.der

In this example, the local machine where you saved the public_key.der file will now represent the end-client device.

Note: If you call this API by using one of the AWS SDKs, such as boto3, then the PublicKey value is not base64-encoded.
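
For example, a minimal boto3 sketch of the same retrieval, using the alias created earlier, could look like the following; note that no base64 decoding step is needed.

import boto3

kms_client = boto3.client("kms")

# GetPublicKey returns the DER-encoded public key as raw bytes when called through an SDK.
response = kms_client.get_public_key(KeyId="alias/example-rsa-key")

with open("public_key.der", "wb") as f:
    f.write(response["PublicKey"])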

Create an AES 256-bit symmetric key on the end client

Although the end client now has a copy of the public key from the KMS key pair, that public key can’t be used for encrypting the data you plan on transmitting, due to the size limits on data that can be encrypted. Instead, you can use symmetric encryption. Typically, symmetric keys are smaller than asymmetric keys, the ciphers are faster when encrypting data, and the resulting ciphertext is similar in size to the original data.

To generate a symmetric key, you should use a source of random entropy. Some operating systems offer block access to hardware-based sources of random numbers, such as /dev/hwrng. To provide an example process in this blog post, you will use the OpenSSL rand utility, which uses a cryptographically secure pseudorandom number generator (CSPRNG) seeded by /dev/urandom. In production systems, you might have stronger sources of entropy to rely on, or compliance requirements for random number generation. In hardware-constrained environments, you should take extra care to make sure that sources of entropy are cryptographically secure. The following command uses OpenSSL to create an AES 256-bit (32 bytes) key and base64-encode it, then save it to disk in plaintext as key.b64.

Note: Anyone with access to this file system will have access to this key.

To use the OpenSSL rand command to create a symmetric key and output it to a file

  • Enter the following command.
    openssl rand -base64 32 > key.b64

Encrypt the data to be sent from the end client

Now that you have two different key types on the end client, you can use a hybrid cryptosystem to encrypt a large text file. First, you will generate a sample file to encrypt on your system. By outputting some bytes from /dev/urandom, you can create this file to the size you want. The following command outputs 200 random bytes, base64-encodes the file, and writes that to disk in a file called encrypt.me.

To generate a sample file from random data, which will be encrypted later

  • Enter the following command.
    head -c 200 /dev/urandom | base64 --wrap=0 > encrypt.me

Next, you will encrypt the newly created file with the AES 256-bit key that you created earlier (which is base64-encoded). By using the OpenSSL command line, you will encrypt the file on disk and create a new file called encrypt.me.enc.

Note: For demonstration purposes, this solution uses OpenSSL to complete the encryption process. However, the command line OpenSSL enc utility doesn’t allow the cipher aes-256-gcm. Galois/Counter Mode (GCM) is recommended when encrypting and sending data, because it includes authentication, so that the ciphertext can’t be tampered with in transit. Instead, for this demonstration, you will use aes-256-cbc, which is not authenticated.

To use the OpenSSL enc command to encrypt your sample file with a symmetric key

  • Enter the following command.
    openssl enc -aes-256-cbc \
    -in encrypt.me -out encrypt.me.enc \
    -pass file:./key.b64

Encrypt the AES 256-bit key

So that the data can be decrypted again, you will need to share the same AES 256-bit key with the recipient. To share that with only the person who can use the KMS private key that you created earlier, you can encrypt the symmetric key (key.b64) with the RSA public key that you retrieved earlier (public_key.der).

Again, you will use OpenSSL to see how this works and the required cipher options. When encrypting or decrypting with a KMS RSA key pair, you can use one of two encryption algorithms, either RSAES_OAEP_SHA_1 or RSAES_OAEP_SHA_256. These are the encryption algorithms that AWS KMS currently supports for RSA key pairs.

To use the OpenSSL pkeyutl command to encrypt your symmetric key with your local copy of your KMS public key

  • Enter the following command.
    openssl pkeyutl \
    	-in key.b64 -out key.b64.enc \
    	-inkey public_key.der -keyform DER -pubin -encrypt \
    	-pkeyopt rsa_padding_mode:oaep -pkeyopt rsa_oaep_md:sha256

This command creates a new file on disk called key.b64.enc. This file is the encrypted AES 256-bit key, which can now be transported securely across an insecure network, such as the internet. The last two options in the command define the padding mode used (OAEP) and the length of the message digest (SHA-256), which align with the options available to decrypt when you use the AWS KMS APIs.

Note: You should securely delete both the original payload file (encrypt.me) and the plaintext AES 256-bit key (key.b64) if you want to prevent anyone else from accessing these files. At this point, you will have three files on disk: public_key.der, encrypt.me.enc, and key.b64.enc. If you want to verify the decryption process later in this example, keep these files.

In production, you might never write any of these values to disk. Instead, you can keep all values in memory and only write the encrypted data (ciphertext) to disk, clearing memory after that process has completed.

You can now use the method of your choice to transfer the encrypted files across an unsecured network without compromising the privacy of those files. For smart-home appliance use cases, you can upload the encrypted files in Amazon Simple Storage Service (Amazon S3), a highly durable storage system that can be accessed from the internet, keeping in mind the preventative security practices that AWS recommends. Later, another service can pull these files from S3, and with the correct permissions for the KMS key, can decrypt the files by using the AWS KMS Decrypt API.

Decrypt the files

With access to the decrypt operation for the KMS key and the encrypted files, you can now retrieve the plaintext data file again. To do this, you will replicate the preceding steps, but in reverse. This involves decrypting the AES 256-bit key by using the AWS KMS API, and then using that result to decrypt the encrypted data. You will need access to the AWS KMS API to complete these actions, because the private key exists in plaintext only within the AWS KMS HSMs.

To decrypt the files

  1. The first step is to decrypt the AES 256-bit key. You will need to use the AWS CLI to submit the key.b64.enc file to the AWS KMS API, and specify the algorithm you used to encrypt the file (RSAES_OAEP_SHA_256). Use the following command to retrieve the AES 256-bit key in plaintext. Again, you’re using the --query selector to output only the plaintext, and then decode the base64 value.
    aws kms decrypt --key-id alias/example-rsa-key \ 
    		--ciphertext-blob fileb://key.b64.enc \
    		--encryption-algorithm RSAES_OAEP_SHA_256 --output text \
    		--query 'Plaintext' | base64 --decode > decrypted_key.b64

  2. The final step in decrypting the data is to reverse the CBC encryption process you used in OpenSSL. If another mode of symmetric encryption was used, such as AES-GCM, then you would need to decrypt by using that algorithm and the input AES 256-bit key. Use the following OpenSSL command to retrieve the original plaintext payload.
    openssl enc -d -aes-256-cbc \
    		-in encrypt.me.enc -out decrypted.file \
    		-pass file:./decrypted_key.b64

Conclusion

In this post, you learned how to combine AWS KMS asymmetric key pairs with locally created symmetric keys to encrypt and share data that exceeds 190 bytes, without storing a secret on a client device. By taking advantage of the RSA cryptosystem for offline encryption, you can reduce the exposure of plaintext data or secrets to devices outside of your control, and without having to complete complex key exchanges. By using the steps in this solution, you can more securely share large amounts of data, such as update files or configuration settings. To learn more about the asymmetric keys feature of AWS KMS, refer to the AWS KMS Developer Guide. If you have questions about the asymmetric keys feature, interact with us through AWS re:Post.

 
If you have feedback about this post, submit comments in the Comments section below. If you have questions about this post, contact AWS Support.

Want more AWS Security news? Follow us on Twitter.

Patrick Palmer

Patrick is a security solutions architect at AWS. He has a passion for learning new technologies and cryptography across AWS services and having deep conversations with customers. He works on a team of security specialists who strive to continually delight customers. Outside of work, he spends time with his wife and two cats, occasionally playing video games when he can.

Breaking RSA through Insufficiently Random Primes

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2022/03/breaking-rsa-through-insufficiently-random-primes.html

Basically, the SafeZone library doesn’t sufficiently randomize the two prime numbers it used to generate RSA keys. They’re too close to each other, which makes them vulnerable to recovery.
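
To see why closely spaced primes are dangerous, consider Fermat's factorization method, which recovers the factors almost immediately when p and q are near each other. The following is a minimal Python sketch for illustration only (not the researcher's actual tooling), with a toy modulus far smaller than a real RSA key:

import math

def fermat_factor(n):
    """Factor n = p * q quickly when p and q are close together."""
    a = math.isqrt(n)
    if a * a < n:
        a += 1
    # Search for a such that a^2 - n is a perfect square b^2; then n = (a - b) * (a + b).
    while True:
        b2 = a * a - n
        b = math.isqrt(b2)
        if b * b == b2:
            return a - b, a + b
        a += 1

p, q = 65537, 65539  # two nearby primes; real RSA primes are hundreds of digits long
print(fermat_factor(p * q))  # -> (65537, 65539)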

There aren’t many weak keys out there, but there are some:

So far, Böck has identified only a handful of keys in the wild that are vulnerable to the factorization attack. Some of the keys are from printers from two manufacturers, Canon and Fujifilm (originally branded as Fuji Xerox). Printer users can use the keys to generate a Certificate Signing Request. The creation date for all the weak keys was 2020 or later. The weak Canon keys are tracked as CVE-2022-26351.

Böck also found four vulnerable PGP keys, typically used to encrypt email, on SKS PGP key servers. A user ID tied to the keys implied they were created for testing, so he doesn’t believe they’re in active use.

No, RSA Is Not Broken

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2021/03/no-rsa-is-not-broken.html

I have been seeing this paper by cryptographer Peter Schnorr making the rounds: “Fast Factoring Integers by SVP Algorithms.” It describes a new factoring method, and its abstract ends with the provocative sentence: “This destroys the RSA cryptosystem.”

It does not. At best, it’s an improvement in factoring — and I’m not sure it’s even that. The paper is a preprint: it hasn’t been peer reviewed. Be careful taking its claims at face value.

Some discussion here.

I’ll append more analysis links to this post when I find them.

How to protect sensitive data for its entire lifecycle in AWS

Post Syndicated from Raj Jain original https://aws.amazon.com/blogs/security/how-to-protect-sensitive-data-for-its-entire-lifecycle-in-aws/

Many Amazon Web Services (AWS) customer workflows require ingesting sensitive and regulated data such as Payments Card Industry (PCI) data, personally identifiable information (PII), and protected health information (PHI). In this post, I’ll show you a method designed to protect sensitive data for its entire lifecycle in AWS. This method can help enhance your data security posture and be useful for fulfilling the data privacy regulatory requirements applicable to your organization for data protection at-rest, in-transit, and in-use.

An existing method for sensitive data protection in AWS is to use the field-level encryption feature offered by Amazon CloudFront. This CloudFront feature protects sensitive data fields in requests at the AWS network edge. The chosen fields are protected upon ingestion and remain protected throughout the entire application stack. The notion of protecting sensitive data early in its lifecycle in AWS is a highly desirable security architecture. However, CloudFront can protect a maximum of 10 fields and only within HTTP(S) POST requests that carry HTML form encoded payloads.

If your requirements exceed CloudFront’s native field-level encryption feature, such as a need to handle diverse application payload formats, different HTTP methods, and more than 10 sensitive fields, you can implement field-level encryption yourself using the Lambda@Edge feature in CloudFront. In terms of choosing an appropriate encryption scheme, this problem calls for an asymmetric cryptographic system that will allow public keys to be openly distributed to the CloudFront network edges while keeping the corresponding private keys stored securely within the network core. One such popular asymmetric cryptographic system is RSA. Accordingly, we’ll implement a Lambda@Edge function that uses asymmetric encryption based on the RSA cryptosystem to protect an arbitrary number of fields in any HTTP(S) request. We will discuss the solution using an example JSON payload, although this approach can be applied to any payload format.

A complex part of any encryption solution is key management. To address that, I use AWS Key Management Service (AWS KMS). AWS KMS simplifies the solution and offers improved security posture and operational benefits, detailed later.

Solution overview

You can protect data in-transit over individual communications channels using transport layer security (TLS), and at-rest in individual storage silos using volume encryption, object encryption or database table encryption. However, if you have sensitive workloads, you might need additional protection that can follow the data as it moves through the application stack. Fine-grained data protection techniques such as field-level encryption allow for the protection of sensitive data fields in larger application payloads while leaving non-sensitive fields in plaintext. This approach lets an application perform business functions on non-sensitive fields without the overhead of encryption, and allows fine-grained control over what fields can be accessed by what parts of the application.

A best practice for protecting sensitive data is to reduce its exposure in the clear throughout its lifecycle. This means protecting data as early as possible on ingestion and ensuring that only authorized users and applications can access the data only when and as needed. CloudFront, when combined with the flexibility provided by Lambda@Edge, provides an appropriate environment at the edge of the AWS network to protect sensitive data upon ingestion in AWS.

Since the downstream systems don’t have access to sensitive data, data exposure is reduced, which helps to minimize your compliance footprint for auditing purposes.

The number of sensitive data elements that may need field-level encryption depends on your requirements. For example:

  • For healthcare applications, HIPAA regulates 18 personal data elements.
  • In California, the California Consumer Privacy Act (CCPA) regulates at least 11 categories of personal information—each with its own set of data elements.

The idea behind field-level encryption is to protect sensitive data fields individually, while retaining the structure of the application payload. The alternative is full payload encryption, where the entire application payload is encrypted as a binary blob, which makes it unusable until the entirety of it is decrypted. With field-level encryption, the non-sensitive data left in plaintext remains usable for ordinary business functions. When retrofitting data protection in existing applications, this approach can reduce the risk of application malfunction since the data format is maintained.

The following figure shows how PII data fields in a JSON construction that are deemed sensitive by an application can be transformed from plaintext to ciphertext with a field-level encryption mechanism.

Figure 1: Example of field-level encryption


You can change plaintext to ciphertext as depicted in Figure 1 by using a Lambda@Edge function to perform field-level encryption. I discuss the encryption and decryption processes separately in the following sections.

Field-level encryption process

Let’s discuss the individual steps involved in the encryption process as shown in Figure 2.

Figure 2: Field-level encryption process


Figure 2 shows CloudFront invoking a Lambda@Edge function while processing a client request. CloudFront offers multiple integration points for invoking Lambda@Edge functions. Since you are processing a client request and your encryption behavior is related to requests being forwarded to an origin server, you want your function to run upon the origin request event in CloudFront. The origin request event represents an internal state transition in CloudFront that happens immediately before CloudFront forwards a request to the downstream origin server.

You can associate your Lambda@Edge function with CloudFront as described in Adding Triggers by Using the CloudFront Console. A screenshot of the CloudFront console is shown in Figure 3. The selected event type is Origin Request and the Include Body check box is selected so that the request body is conveyed to Lambda@Edge.

Figure 3: Configuration of Lambda@Edge in CloudFront


The Lambda@Edge function acts as a programmable hook in the CloudFront request processing flow. You can use the function to replace the incoming request body with a request body with the sensitive data fields encrypted.

The process includes the following steps:

Step 1 – RSA key generation and inclusion in Lambda@Edge

You can generate an RSA customer managed key (CMK) in AWS KMS as described in Creating asymmetric CMKs. This is done at system configuration time.

Note: You can use your existing RSA key pairs or generate new ones externally by using OpenSSL commands, especially if you need to perform RSA decryption and key management independently of AWS KMS. Your choice won’t affect the fundamental encryption design pattern presented here.

The RSA key creation in AWS KMS requires two inputs: key length and type of usage. In this example, I created a 2048-bit key and assigned its use for encryption and decryption. The cryptographic configuration of an RSA CMK created in AWS KMS is shown in Figure 4.

Figure 4: Cryptographic properties of an RSA key managed by AWS KMS


Of the two encryption algorithms shown in Figure 4 (RSAES_OAEP_SHA_256 and RSAES_OAEP_SHA_1), this example uses RSAES_OAEP_SHA_256. The combination of a 2048-bit key and the RSAES_OAEP_SHA_256 algorithm lets you encrypt a maximum of 190 bytes of data, which is enough for most PII fields. You can choose a different key length and encryption algorithm depending on your security and performance requirements. How to choose your CMK configuration includes information about RSA key specs for encryption and decryption.

Using AWS KMS for RSA key management versus managing the keys yourself eliminates that complexity and can help you:

  • Enforce IAM and key policies that describe administrative and usage permissions for keys.
  • Manage cross-account access for keys.
  • Monitor and alarm on key operations through Amazon CloudWatch.
  • Audit AWS KMS API invocations through AWS CloudTrail.
  • Record configuration changes to keys and enforce key specification compliance through AWS Config.
  • Generate high-entropy keys in an AWS KMS hardware security module (HSM) as required by NIST.
  • Store RSA private keys securely, without the ability to export.
  • Perform RSA decryption within AWS KMS without exposing private keys to application code.
  • Categorize and report on keys with key tags for cost allocation.
  • Disable keys and schedule their deletion.

You need to extract the RSA public key from AWS KMS so you can include it in the AWS Lambda deployment package. You can do this from the AWS Management Console, through the AWS KMS SDK, or by using the get-public-key command in the AWS Command Line Interface (AWS CLI). Figure 5 shows Copy and Download options for a public key in the Public key tab of the AWS KMS console.

Figure 5: RSA public key available for copy or download in the console


Note: As we will see in the sample code in step 3, we embed the public key in the Lambda@Edge deployment package. This is a permissible practice because public keys in asymmetric cryptography systems aren’t a secret and can be freely distributed to entities that need to perform encryption. Alternatively, you can use Lambda@Edge to query AWS KMS for the public key at runtime. However, this introduces latency, increases the load against your KMS account quota, and increases your AWS costs. General patterns for using external data in Lambda@Edge are described in Leveraging external data in Lambda@Edge.

Step 2 – HTTP API request handling by CloudFront

CloudFront receives an HTTP(S) request from a client. CloudFront then invokes Lambda@Edge during origin-request processing and includes the HTTP request body in the invocation.

Step 3 – Lambda@Edge processing

The Lambda@Edge function processes the HTTP request body. The function extracts sensitive data fields and performs RSA encryption over their values.

The following code is sample source code for the Lambda@Edge function implemented in Python 3.7:

import base64
import json
from Crypto.Cipher import PKCS1_OAEP
from Crypto.Hash import SHA256
from Crypto.PublicKey import RSA

# PEM-formatted RSA public key copied over from AWS KMS or your own public key.
RSA_PUBLIC_KEY = "-----BEGIN PUBLIC KEY-----<your key>-----END PUBLIC KEY-----"
RSA_PUBLIC_KEY_OBJ = RSA.importKey(RSA_PUBLIC_KEY)
RSA_CIPHER_OBJ = PKCS1_OAEP.new(RSA_PUBLIC_KEY_OBJ, SHA256)

# Example sensitive data field names in a JSON object. 
PII_SENSITIVE_FIELD_NAMES = ["fname", "lname", "email", "ssn", "dob", "phone"]

CIPHERTEXT_PREFIX = "#01#"
CIPHERTEXT_SUFFIX = "#10#"

def lambda_handler(event, context):
    # Extract HTTP request and its body as per documentation:
    # https://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/lambda-event-structure.html
    http_request = event['Records'][0]['cf']['request']
    body = http_request['body']
    org_body = base64.b64decode(body['data'])
    mod_body = protect_sensitive_fields_json(org_body)
    body['action'] = 'replace'
    body['encoding'] = 'text'
    body['data'] = mod_body
    return http_request


def protect_sensitive_fields_json(body):
    # Encrypts sensitive fields in sample JSON payload shown earlier in this post.
    # [{"fname": "Alejandro", "lname": "Rosalez", … }]
    person_list = json.loads(body.decode("utf-8"))
    for person_data in person_list:
        for field_name in PII_SENSITIVE_FIELD_NAMES:
            if field_name not in person_data:
                continue
            plaintext = person_data[field_name]
            ciphertext = RSA_CIPHER_OBJ.encrypt(bytes(plaintext, 'utf-8'))
            ciphertext_b64 = base64.b64encode(ciphertext).decode()
            # Optionally, add unique prefix/suffix patterns to ciphertext
            person_data[field_name] = CIPHERTEXT_PREFIX + ciphertext_b64 + CIPHERTEXT_SUFFIX 
    return json.dumps(person_list)

The event structure passed into the Lambda@Edge function is described in Lambda@Edge Event Structure. Following the event structure, you can extract the HTTP request body. In this example, the assumption is that the HTTP payload carries a JSON document based on a particular schema defined as part of the API contract. The input JSON document is parsed by the function, converting it into a Python dictionary. The Python native dictionary operators are then used to extract the sensitive field values.

Note: If you don’t know your API payload structure ahead of time or you’re dealing with unstructured payloads, you can use techniques such as regular expression pattern searches and checksums to look for patterns of sensitive data and target them accordingly. For example, credit card primary account numbers include a Luhn checksum that can be programmatically detected. Additionally, services such as Amazon Comprehend and Amazon Macie can be leveraged for detecting sensitive data such as PII in application payloads.
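
For instance, a minimal sketch of a Luhn check in Python might look like the following. A passing checksum only flags a candidate number for closer inspection; it doesn't prove the value is a real card number.

def luhn_checksum_valid(number: str) -> bool:
    """Return True if the digit string passes the Luhn check (e.g., a candidate card PAN)."""
    digits = [int(d) for d in number if d.isdigit()]
    if len(digits) < 13:  # card PANs are typically 13 to 19 digits
        return False
    total = 0
    # Starting from the rightmost digit, double every second digit and fold back into one digit.
    for i, d in enumerate(reversed(digits)):
        if i % 2 == 1:
            d *= 2
            if d > 9:
                d -= 9
        total += d
    return total % 10 == 0

print(luhn_checksum_valid("4111111111111111"))  # True for this well-known test number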

While iterating over the sensitive fields, individual field values are encrypted using the standard RSA encryption implementation available in the Python Cryptography Toolkit (PyCrypto). The PyCrypto module is included within the Lambda@Edge zip archive as described in Lambda@Edge deployment package.

The example uses the standard optimal asymmetric encryption padding (OAEP) and SHA-256 encryption algorithm properties. These properties are supported by AWS KMS and will allow RSA ciphertext produced here to be decrypted by AWS KMS later.

Note: You may have noticed in the code above that we’re bracketing the ciphertexts with predefined prefix and suffix strings:

person_data[field_name] = CIPHERTEXT_PREFIX + ciphertext_b64 + CIPHERTEXT_SUFFIX

This is an optional measure and is being implemented to simplify the decryption process.

The prefix and suffix strings help demarcate ciphertext embedded in unstructured data in downstream processing and also act as embedded metadata. Unique prefix and suffix strings allow you to extract ciphertext through string or regular expression (regex) searches during the decryption process without having to know the data body format or schema, or the field names that were encrypted.

Distinct strings can also serve as indirect identifiers of RSA key pair identifiers. This can enable key rotation and allow separate keys to be used for separate fields depending on the data security requirements for individual fields.

You can ensure that the prefix and suffix strings can’t collide with the ciphertext by bracketing them with characters that don’t appear in ciphertext. For example, a hash (#) character cannot be part of a base64 encoded ciphertext string.

Deploying a Lambda function as a Lambda@Edge function requires specific IAM permissions and an IAM execution role. Follow the Lambda@Edge deployment instructions in Setting IAM Permissions and Roles for Lambda@Edge.

Step 4 – Lambda@Edge response

The Lambda@Edge function returns the modified HTTP body back to CloudFront and instructs it to replace the original HTTP body with the modified one by setting the following flag:

http_request['body']['action'] = 'replace'

Step 5 – Forward the request to the origin server

CloudFront forwards the modified request body provided by Lambda@Edge to the origin server. In this example, the origin server writes the data body to persistent storage for later processing.

Field-level decryption process

An application that’s authorized to access sensitive data for a business function can decrypt that data. An example decryption process is shown in Figure 6. The figure shows a Lambda function as an example compute environment for invoking AWS KMS for decryption. This functionality isn’t dependent on Lambda and can be performed in any compute environment that has access to AWS KMS.

Figure 6: Field-level decryption process


The steps of the process shown in Figure 6 are described below.

Step 1 – Application retrieves the field-level encrypted data

The example application retrieves the field-level encrypted data from persistent storage that had been previously written during the data ingestion process.

Step 2 – Application invokes the decryption Lambda function

The application invokes a Lambda function responsible for performing field-level decryption, sending the retrieved data to Lambda.

Step 3 – Lambda calls the AWS KMS decryption API

The Lambda function uses AWS KMS for RSA decryption. The example calls the KMS decryption API that inputs ciphertext and returns plaintext. The actual decryption happens in KMS; the RSA private key is never exposed to the application, which is a highly desirable characteristic for building secure applications.

Note: If you choose to use an external key pair, then you can securely store the RSA private key in AWS services like AWS Systems Manager Parameter Store or AWS Secrets Manager and control access to the key through IAM and resource policies. You can fetch the key from the relevant vault using the vault’s API, then decrypt using the standard RSA implementation available in your programming language, such as the cryptography toolkit in Python or javax.crypto in Java.
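As a sketch of that alternative (the secret name is a placeholder, and the key is assumed to be stored as a PEM string), fetching the private key from AWS Secrets Manager and decrypting locally with the Python cryptography package could look like this:

import base64
import boto3
from cryptography.hazmat.primitives import hashes, serialization
from cryptography.hazmat.primitives.asymmetric import padding

def decrypt_with_external_key(ciphertext_b64):
    # Fetch the PEM-encoded RSA private key from Secrets Manager
    secrets = boto3.client("secretsmanager")
    pem = secrets.get_secret_value(SecretId="fle-demo/rsa-private-key")["SecretString"]
    private_key = serialization.load_pem_private_key(pem.encode("utf-8"), password=None)

    # Use the same OAEP/SHA-256 parameters that were used during encryption
    plaintext = private_key.decrypt(
        base64.b64decode(ciphertext_b64),
        padding.OAEP(
            mgf=padding.MGF1(algorithm=hashes.SHA256()),
            algorithm=hashes.SHA256(),
            label=None,
        ),
    )
    return plaintext.decode("utf-8")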

The Lambda function Python code for decryption is shown below.

import base64
import boto3
import re

kms_client = boto3.client('kms')
CIPHERTEXT_PREFIX = "#01#"
CIPHERTEXT_SUFFIX = "#10#"

# This lambda function extracts event body, searches for and decrypts ciphertext 
# fields surrounded by provided prefix and suffix strings in arbitrary text bodies 
# and substitutes plaintext fields in-place.  
def lambda_handler(event, context):    
    org_data = event["body"]
    mod_data = unprotect_fields(org_data, CIPHERTEXT_PREFIX, CIPHERTEXT_SUFFIX)
    return mod_data

# Helper function that performs non-greedy regex search for ciphertext strings on
# input data and performs RSA decryption of them using AWS KMS 
def unprotect_fields(org_data, prefix, suffix):
    regex_pattern = prefix + "(.*?)" + suffix
    mod_data_parts = []
    cursor = 0

    # Search ciphertexts iteratively using python regular expression module
    for match in re.finditer(regex_pattern, org_data):
        mod_data_parts.append(org_data[cursor: match.start()])
        try:
            # Ciphertext was stored as Base64 encoded in our example. Decode it.
            ciphertext = base64.b64decode(match.group(1))

            # Decrypt ciphertext using AWS KMS  
            decrypt_rsp = kms_client.decrypt(
                EncryptionAlgorithm="RSAES_OAEP_SHA_256",
                KeyId="<Your-Key-ID>",
                CiphertextBlob=ciphertext)
            decrypted_val = decrypt_rsp["Plaintext"].decode("utf-8")
            mod_data_parts.append(decrypted_val)
        except Exception as e:
            print ("Exception: " + str(e))
            return None
        cursor = match.end()

    mod_data_parts.append(org_data[cursor:])
    return "".join(mod_data_parts)

The function performs a regular expression search in the input data body looking for ciphertext strings bracketed in predefined prefix and suffix strings that were added during encryption.

While iterating over ciphertext strings one-by-one, the function calls the AWS KMS decrypt() API. The example function uses the same RSA encryption algorithm properties—OAEP and SHA-256—and the Key ID of the public key that was used during encryption in Lambda@Edge.

Note that the Key ID itself is not a secret. Any application can be configured with it, but that doesn’t mean any application will be able to perform decryption. The security control here is that the AWS KMS key policy must allow the caller to use the Key ID to perform the decryption. An additional security control is provided by the Lambda execution role, which must allow calling the KMS decrypt() API.
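For example, a key policy statement along these lines (the account ID and role name are placeholders) limits use of kms:Decrypt with this key to the decryption function’s execution role:

{
  "Sid": "AllowFieldLevelDecryption",
  "Effect": "Allow",
  "Principal": {
    "AWS": "arn:aws:iam::111122223333:role/fle-decrypt-lambda-role"
  },
  "Action": "kms:Decrypt",
  "Resource": "*"
}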

Step 4 – AWS KMS decrypts ciphertext and returns plaintext

To ensure that only authorized users can perform the decrypt operation, the KMS key is configured as described in Using key policies in AWS KMS. In addition, the Lambda IAM execution role is configured as described in AWS Lambda execution role to allow it to access KMS. If both the key policy and IAM policy conditions are met, KMS returns the decrypted plaintext, and Lambda substitutes the plaintext in place of the ciphertext in the encapsulating data body.

Steps three and four are repeated for each ciphertext string.

Step 5 – Lambda returns decrypted data body

Once all the ciphertext has been converted to plaintext and substituted in the larger data body, the Lambda function returns the modified data body to the client application.

Conclusion

In this post, I demonstrated how you can implement field-level encryption integrated with AWS KMS to help protect sensitive data workloads for their entire lifecycle in AWS. Since your Lambda@Edge function is designed to protect data at the network edge, data remains protected throughout the application execution stack. In addition to improving your data security posture, this protection can help you comply with data privacy regulations applicable to your organization.

Since you author your own Lambda@Edge function to perform standard RSA encryption, you have flexibility in terms of payload formats and the number of fields that you consider to be sensitive. The integration with AWS KMS for RSA key management and decryption provides significant simplicity, higher key security, and rich integration with other AWS security services, enabling an overall strong security solution.

By using encrypted fields with identifiers as described in this post, you can create fine-grained controls for data accessibility that meet the security principle of least privilege. Instead of granting either complete access or no access to data fields, you can ensure that a given part of an application can access only the fields it needs, when it needs them, all the way down to controlling access field by field. Field-by-field access can be enabled by using different keys for different fields and controlling their respective key policies.

In addition to protecting sensitive data workloads to meet regulatory and security best practices, this solution can be used to build de-identified data lakes in AWS. Sensitive data fields remain protected throughout their lifecycle, while non-sensitive data fields remain in the clear. This approach can allow analytics or other business functions to operate on data without exposing sensitive data.


Author

Raj Jain

Raj is a Senior Cloud Architect at AWS. He is passionate about helping customers build well-architected applications on AWS. Raj is a published author in the Bell Labs Technical Journal, has authored 3 IETF standards, and holds 12 patents in internet telephony and applied cryptography. In his spare time, Raj enjoys the outdoors, cooking, reading, and travel.

Brexit Deal Mandates Old Insecure Crypto Algorithms

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2020/12/brexit-deal-mandates-old-insecure-crypto-algorithms.html

In what is surely an unthinking cut-and-paste issue, page 921 of the Brexit deal mandates the use of SHA-1 and 1024-bit RSA:

The open standard s/MIME as extension to de facto e-mail standard SMTP will be deployed to encrypt messages containing DNA profile information. The protocol s/MIME (V3) allows signed receipts, security labels, and secure mailing lists… The underlying certificate used by s/MIME mechanism has to be in compliance with X.509 standard…. The processing rules for s/MIME encryption operations… are as follows:

  1. the sequence of the operations is: first encryption and then signing,
  2. the encryption algorithm AES (Advanced Encryption Standard) with 256 bit key length and RSA with 1,024 bit key length shall be applied for symmetric and asymmetric encryption respectively,
  3. the hash algorithm SHA-1 shall be applied.
  4. s/MIME functionality is built into the vast majority of modern e-mail software packages including Outlook, Mozilla Mail as well as Netscape Communicator 4.x and inter-operates among all major e-mail software packages.

And s/MIME? Bleah.

RSA-250 Factored

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2020/04/rsa-250_factore.html

RSA-250 has been factored.

This computation was performed with the Number Field Sieve algorithm,
using the open-source CADO-NFS software.

The total computation time was roughly 2700 core-years, using Intel Xeon Gold 6130 CPUs as a reference (2.1GHz):

RSA-250 sieving: 2450 physical core-years
RSA-250 matrix: 250 physical core-years

The computation involved tens of thousands of machines worldwide, and was completed in a few months.

News article. On the factoring challenges.

Chinese Hackers Bypassing Two-Factor Authentication

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2019/12/chinese_hackers_1.html

Interesting story of how a Chinese state-sponsored hacking group is bypassing the RSA SecurID two-factor authentication system.

How they did it remains unclear; although, the Fox-IT team has their theory. They said APT20 stole an RSA SecurID software token from a hacked system, which the Chinese actor then used on its computers to generate valid one-time codes and bypass 2FA at will.

Normally, this wouldn’t be possible. To use one of these software tokens, the user would need to connect a physical (hardware) device to their computer. The device and the software token would then generate a valid 2FA code. If the device was missing, the RSA SecureID software would generate an error.

The Fox-IT team explains how hackers might have gone around this issue:

The software token is generated for a specific system, but of course this system specific value could easily be retrieved by the actor when having access to the system of the victim.

As it turns out, the actor does not actually need to go through the trouble of obtaining the victim’s system specific value, because this specific value is only checked when importing the SecurID Token Seed, and has no relation to the seed used to generate actual 2-factor tokens. This means the actor can actually simply patch the check which verifies if the imported soft token was generated for this system, and does not need to bother with stealing the system specific value at all.

In short, all the actor has to do to make use of the 2 factor authentication codes is to steal an RSA SecurID Software Token and to patch 1 instruction, which results in the generation of valid tokens.

RSA-240 Factored

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2019/12/rsa-240_factore.html

This just in:

We are pleased to announce the factorization of RSA-240, from RSA’s challenge list, and the computation of a discrete logarithm of the same size (795 bits):

RSA-240 = 12462036678171878406583504460810659043482037465167880575481878888328 966680118821085503603957027250874750986476843845862105486553797025393057189121 768431828636284694840530161441643046806687569941524699318570418303051254959437 1372159029236099 = 509435952285839914555051023580843714132648382024111473186660296521821206469746 700620316443478873837606252372049619334517 * 244624208838318150567813139024002896653802092578931401452041221336558477095178 155258218897735030590669041302045908071447

[…]

The previous records were RSA-768 (768 bits) in December 2009 [2], and a 768-bit prime discrete logarithm in June 2016 [3].

It is the first time that two records for integer factorization and discrete logarithm are broken together, moreover with the same hardware and software.

Both computations were performed with the Number Field Sieve algorithm, using the open-source CADO-NFS software [4].

The sum of the computation time for both records is roughly 4000 core-years, using Intel Xeon Gold 6130 CPUs as a reference (2.1GHz). A rough breakdown of the time spent in the main computation steps is as follows.

RSA-240 sieving: 800 physical core-years
RSA-240 matrix: 100 physical core-years
DLP-240 sieving: 2400 physical core-years
DLP-240 matrix: 700 physical core-years

The computation times above are well below the time that was spent with the previous 768-bit records. To measure how much of this can be attributed to Moore’s law, we ran our software on machines that are identical to those cited in the 768-bit DLP computation [3], and reach the conclusion that sieving for our new record size on these old machines would have taken 25% less time than the reported sieving time of the 768-bit DLP computation.

EDITED TO ADD (12/4): News article. Dan Goodin points out that the speed improvements were more due to improvements in the algorithms than from Moore’s Law.

Factoring 2048-bit Numbers Using 20 Million Qubits

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2019/10/factoring_2048-.html

This theoretical paper shows how to factor 2048-bit RSA moduli with a 20-million qubit quantum computer in eight hours. It’s interesting work, but I don’t want to overstate the risk.

We know from Shor’s Algorithm that both factoring and discrete logs are easy to solve on a large, working quantum computer. Both of those are currently beyond our technological abilities. We barely have quantum computers with 50 to 100 qubits. Extending this requires advances not only in the number of qubits we can work with, but in making the system stable enough to read any answers. You’ll hear this called “error rate” or “coherence” — this paper talks about “noise.”

Advances are hard. At this point, we don’t know if they’re “send a man to the moon” hard or “faster-than-light travel” hard. If I were guessing, I would say they’re the former, but still harder than we can accomplish with our current understanding of physics and technology.

I write about all this generally, and in detail, here. (Short summary: Our work on quantum-resistant algorithms is outpacing our work on quantum computers, so we’ll be fine in the short run. But future theoretical work on quantum computing could easily change what “quantum resistant” means, so it’s possible that public-key cryptography will simply not be possible in the long run. That’s not terrible, though; we have a lot of good scalable secret-key systems that do much the same things.)

Videos and Links from the Public-Interest Technology Track at the RSA Conference

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2019/03/videos_and_link.html

Yesterday at the RSA Conference, I gave a keynote talk about the role of public-interest technologists in cybersecurity. (Video here).

I also hosted a one-day mini-track on the topic. We had six panels, and they were all great. If you missed it live, we have videos:

  • How Public Interest Technologists are Changing the World: Matt Mitchell, Tactical Tech; Bruce Schneier, Fellow and Lecturer, Harvard Kennedy School; and J. Bob Alotta, Astraea Foundation (Moderator). (Video here.)
  • Public Interest Tech in Silicon Valley: Mitchell Baker, Chairwoman, Mozilla Corporation; Cindy Cohn, EFF; and Lucy Vasserman, Software Engineer, Google. (Video here.)

  • Working in Civil Society: Sarah Aoun, Digital Security Technologist; Peter Eckersley, Partnership on AI; Harlo Holmes, Director of Newsroom Digital Security, Freedom of the Press Foundation; and John Scott-Railton, Senior Researcher, Citizen Lab. (Video here.)

  • Government Needs You: Travis Moore, TechCongress; Hashim Mteuzi, Senior Manager, Network Talent Initiative, Code for America; Gigi Sohn, Distinguished Fellow, Georgetown Law Institute for Technology, Law and Policy; and Ashkan Soltani, Independent Consultant. (Video here.)

  • Changing Academia: Latanya Sweeney, Harvard; Dierdre Mulligan, UC Berkeley; and Danny Weitzner, MIT CSAIL. (Video here.)

  • The Future of Public Interest Tech: Bruce Schneier, Fellow and Lecturer, Harvard Kennedy School; Ben Wizner, ACLU; and Jenny Toomey, Director, Internet Freedom, Ford Foundation (Moderator). (Video here.)

I also conducted eight short video interviews with different people involved in public-interest technology: independent security technologist Sarah Aoun, TechCongress’s Travis Moore, Ford Foundation’s Jenny Toomey, CitizenLab’s John-Scott Railton, Dierdre Mulligan from UC Berkeley, ACLU’s Jon Callas, Matt Mitchell of TacticalTech, and Kelley Misata from Sightline Security.

Here is my blog post about the event. Here’s Ford Foundation’s blog post on why they helped me organize the event.

We got some good press coverage about the event. (Hey MeriTalk: you spelled my name wrong.)

Related: Here’s my longer essay on the need for public-interest technologists in Internet security, and my public-interest technology resources page.

And just so we have all the URLs in one place, here is a page from the RSA Conference website with links to all of the videos.

If you liked this mini-track, please rate it highly on your RSA Conference evaluation form. I’d like to do it again next year.

Cybersecurity for the Public Interest

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2019/03/cybersecurity_f_1.html

The Crypto Wars have been waging off-and-on for a quarter-century. On one side is law enforcement, which wants to be able to break encryption, to access devices and communications of terrorists and criminals. On the other are almost every cryptographer and computer security expert, repeatedly explaining that there’s no way to provide this capability without also weakening the security of every user of those devices and communications systems.

It’s an impassioned debate, acrimonious at times, but there are real technologies that can be brought to bear on the problem: key-escrow technologies, code obfuscation technologies, and backdoors with different properties. Pervasive surveillance capitalism — as practiced by the Internet companies that are already spying on everyone — matters. So do society’s underlying security needs. There is a security benefit to giving access to law enforcement, even though it would inevitably and invariably also give that access to others. However, there is also a security benefit of having these systems protected from all attackers, including law enforcement. These benefits are mutually exclusive. Which is more important, and to what degree?

The problem is that almost no policymakers are discussing this policy issue from a technologically informed perspective, and very few technologists truly understand the policy contours of the debate. The result is both sides consistently talking past each other, and policy proposals — that occasionally become law — that are technological disasters.

This isn’t sustainable, either for this issue or any of the other policy issues surrounding Internet security. We need policymakers who understand technology, but we also need cybersecurity technologists who understand — and are involved in — policy. We need public-interest technologists.

Let’s pause at that term. The Ford Foundation defines public-interest technologists as “technology practitioners who focus on social justice, the common good, and/or the public interest.” A group of academics recently wrote that public-interest technologists are people who “study the application of technology expertise to advance the public interest, generate public benefits, or promote the public good.” Tim Berners-Lee has called them “philosophical engineers.” I think of public-interest technologists as people who combine their technological expertise with a public-interest focus: by working on tech policy, by working on a tech project with a public benefit, or by working as a traditional technologist for an organization with a public benefit. Maybe it’s not the best term — and I know not everyone likes it — but it’s a decent umbrella term that can encompass all these roles.

We need public-interest technologists in policy discussions. We need them on congressional staff, in federal agencies, at non-governmental organizations (NGOs), in academia, inside companies, and as part of the press. In our field, we need them to get involved in not only the Crypto Wars, but everywhere cybersecurity and policy touch each other: the vulnerability equities debate, election security, cryptocurrency policy, Internet of Things safety and security, big data, algorithmic fairness, adversarial machine learning, critical infrastructure, and national security. When you broaden the definition of Internet security, many additional areas fall within the intersection of cybersecurity and policy. Our particular expertise and way of looking at the world is critical for understanding a great many technological issues, such as net neutrality and the regulation of critical infrastructure. I wouldn’t want to formulate public policy about artificial intelligence and robotics without a security technologist involved.

Public-interest technology isn’t new. Many organizations are working in this area, from older organizations like EFF and EPIC to newer ones like Verified Voting and Access Now. Many academic classes and programs combine technology and public policy. My cybersecurity policy class at the Harvard Kennedy School is just one example. Media startups like The Markup are doing technology-driven journalism. There are even programs and initiatives related to public-interest technology inside for-profit corporations.

This might all seem like a lot, but it’s really not. There aren’t enough people doing it, there aren’t enough people who know it needs to be done, and there aren’t enough places to do it. We need to build a world where there is a viable career path for public-interest technologists.

There are many barriers. There’s a report titled A Pivotal Moment that includes this quote: “While we cite individual instances of visionary leadership and successful deployment of technology skill for the public interest, there was a consensus that a stubborn cycle of inadequate supply, misarticulated demand, and an inefficient marketplace stymie progress.”

That quote speaks to the three places for intervention. One: the supply side. There just isn’t enough talent to meet the eventual demand. This is especially acute in cybersecurity, which has a talent problem across the field. Public-interest technologists are a diverse and multidisciplinary group of people. Their backgrounds come from technology, policy, and law. We also need to foster diversity within public-interest technology; the populations using the technology must be represented in the groups that shape the technology. We need a variety of ways for people to engage in this sphere: ways people can do it on the side, for a couple of years between more traditional technology jobs, or as a full-time rewarding career. We need public-interest technology to be part of every core computer-science curriculum, with “clinics” at universities where students can get a taste of public-interest work. We need technology companies to give people sabbaticals to do this work, and then value what they’ve learned and done.

Two: the demand side. This is our biggest problem right now; not enough organizations understand that they need technologists doing public-interest work. We need jobs to be funded across a wide variety of NGOs. We need staff positions throughout the government: executive, legislative, and judiciary branches. President Obama’s US Digital Service should be expanded and replicated; so should Code for America. We need more press organizations that perform this kind of work.

Three: the marketplace. We need job boards, conferences, and skills exchanges — places where people on the supply side can learn about the demand.

Major foundations are starting to provide funding in this space: the Ford and MacArthur Foundations in particular, but others as well.

This problem in our field has an interesting parallel with the field of public-interest law. In the 1960s, there was no such thing as public-interest law. The field was deliberately created, funded by organizations like the Ford Foundation. They financed legal aid clinics at universities, so students could learn housing, discrimination, or immigration law. They funded fellowships at organizations like the ACLU and the NAACP. They created a world where public-interest law is valued, where all the partners at major law firms are expected to have done some public-interest work. Today, when the ACLU advertises for a staff attorney, paying one-third to one-tenth normal salary, it gets hundreds of applicants. Today, 20% of Harvard Law School graduates go into public-interest law, and the school has soul-searching seminars because that percentage is so low. Meanwhile, the percentage of computer-science graduates going into public-interest work is basically zero.

This is bigger than computer security. Technology now permeates society in a way it didn’t just a couple of decades ago, and governments move too slowly to take this into account. That means technologists now are relevant to all sorts of areas that they had no traditional connection to: climate change, food safety, future of work, public health, bioengineering.

More generally, technologists need to understand the policy ramifications of their work. There’s a pervasive myth in Silicon Valley that technology is politically neutral. It’s not, and I hope most people reading this today know that. We built a world where programmers felt they had an inherent right to code the world as they saw fit. We were allowed to do this because, until recently, it didn’t matter. Now, too many issues are being decided in an unregulated capitalist environment where significant social costs are too often not taken into account.

This is where the core issues of society lie. The defining political question of the 20th century was: “What should be governed by the state, and what should be governed by the market?” This defined the difference between East and West, and the difference between political parties within countries. The defining political question of the first half of the 21st century is: “How much of our lives should be governed by technology, and under what terms?” In the last century, economists drove public policy. In this century, it will be technologists.

The future is coming faster than our current set of policy tools can deal with. The only way to fix this is to develop a new set of policy tools with the help of technologists. We need to be in all aspects of public-interest work, from informing policy to creating tools to building the future. The world needs all of our help.

This essay previously appeared in the January/February issue of IEEE Security & Privacy.

Together with the Ford Foundation, I am hosting a one-day mini-track on public-interest technologists at the RSA Conference this week on Thursday. We’ve had some press coverage.

Edited to Add (3/7): More news articles.

Evidence for the Security of PKCS #1 Digital Signatures

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2018/09/evidence_for_th.html

This is interesting research: “On the Security of the PKCS#1 v1.5 Signature Scheme“:

Abstract: The RSA PKCS#1 v1.5 signature algorithm is the most widely used digital signature scheme in practice. Its two main strengths are its extreme simplicity, which makes it very easy to implement, and that verification of signatures is significantly faster than for DSA or ECDSA. Despite the huge practical importance of RSA PKCS#1 v1.5 signatures, providing formal evidence for their security based on plausible cryptographic hardness assumptions has turned out to be very difficult. Therefore the most recent version of PKCS#1 (RFC 8017) even recommends a replacement the more complex and less efficient scheme RSA-PSS, as it is provably secure and therefore considered more robust. The main obstacle is that RSA PKCS#1 v1.5 signatures use a deterministic padding scheme, which makes standard proof techniques not applicable.

We introduce a new technique that enables the first security proof for RSA-PKCS#1 v1.5 signatures. We prove full existential unforgeability against adaptive chosen-message attacks (EUF-CMA) under the standard RSA assumption. Furthermore, we give a tight proof under the Phi-Hiding assumption. These proofs are in the random oracle model and the parameters deviate slightly from the standard use, because we require a larger output length of the hash function. However, we also show how RSA-PKCS#1 v1.5 signatures can be instantiated in practice such that our security proofs apply.

In order to draw a more complete picture of the precise security of RSA PKCS#1 v1.5 signatures, we also give security proofs in the standard model, but with respect to weaker attacker models (key-only attacks) and based on known complexity assumptions. The main conclusion of our work is that from a provable security perspective RSA PKCS#1 v1.5 can be safely used, if the output length of the hash function is chosen appropriately.

I don’t think the protocol is “provably secure,” meaning that it cannot have any vulnerabilities. What this paper demonstrates is that there are no vulnerabilities under the model of the proof. And, more importantly, that PKCS #1 v1.5 is as secure as any of its successors like RSA-PSS and RSA Full-Domain.

New Findings About Prime Number Distribution Almost Certainly Irrelevant to Cryptography

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2018/09/new_findings_ab.html

Lots of people are e-mailing me about this new result on the distribution of prime numbers. While interesting, it has nothing to do with cryptography. Cryptographers aren’t interested in how to find prime numbers, or even in the distribution of prime numbers. Public-key cryptography algorithms like RSA get their security from the difficulty of factoring large composite numbers that are the product of two prime numbers. That’s completely different.

Quantum Computing and Cryptography

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2018/09/quantum_computi_2.html

Quantum computing is a new way of computing — one that could allow humankind to perform computations that are simply impossible using today’s computing technologies. It allows for very fast searching, something that would break some of the encryption algorithms we use today. And it allows us to easily factor large numbers, something that would break the RSA cryptosystem for any key length.

This is why cryptographers are hard at work designing and analyzing “quantum-resistant” public-key algorithms. Currently, quantum computing is too nascent for cryptographers to be sure of what is secure and what isn’t. But even assuming aliens have developed the technology to its full potential, quantum computing doesn’t spell the end of the world for cryptography. Symmetric cryptography is easy to make quantum-resistant, and we’re working on quantum-resistant public-key algorithms. If public-key cryptography ends up being a temporary anomaly based on our mathematical knowledge and computational ability, we’ll still survive. And if some inconceivable alien technology can break all of cryptography, we still can have secrecy based on information theory — albeit with significant loss of capability.

At its core, cryptography relies on the mathematical quirk that some things are easier to do than to undo. Just as it’s easier to smash a plate than to glue all the pieces back together, it’s much easier to multiply two prime numbers together to obtain one large number than it is to factor that large number back into two prime numbers. Asymmetries of this kind — one-way functions and trap-door one-way functions — underlie all of cryptography.

To encrypt a message, we combine it with a key to form ciphertext. Without the key, reversing the process is more difficult. Not just a little more difficult, but astronomically more difficult. Modern encryption algorithms are so fast that they can secure your entire hard drive without any noticeable slowdown, but that encryption can’t be broken before the heat death of the universe.

With symmetric cryptography — the kind used to encrypt messages, files, and drives — that imbalance is exponential, and is amplified as the keys get larger. Adding one bit of key increases the complexity of encryption by less than a percent (I’m hand-waving here) but doubles the cost to break. So a 256-bit key might seem only twice as complex as a 128-bit key, but (with our current knowledge of mathematics) it’s 340,282,366,920,938,463,463,374,607,431,768,211,456 times harder to break.
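To put a number on that: going from a 128-bit key to a 256-bit key multiplies the brute-force work by 2^(256-128) = 2^128, which is exactly the 39-digit figure above (roughly 3.4 x 10^38).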

Public-key encryption (used primarily for key exchange) and digital signatures are more complicated. Because they rely on hard mathematical problems like factoring, there are more potential tricks to reverse them. So you’ll see key lengths of 2,048 bits for RSA, and 384 bits for algorithms based on elliptic curves. Here again, though, the costs to reverse the algorithms with these key lengths are beyond the current reach of humankind.

This one-wayness is based on our mathematical knowledge. When you hear about a cryptographer “breaking” an algorithm, what happened is that they’ve found a new trick that makes reversing easier. Cryptographers discover new tricks all the time, which is why we tend to use key lengths that are longer than strictly necessary. This is true for both symmetric and public-key algorithms; we’re trying to future-proof them.

Quantum computers promise to upend a lot of this. Because of the way they work, they excel at the sorts of computations necessary to reverse these one-way functions. For symmetric cryptography, this isn’t too bad. Grover’s algorithm shows that a quantum computer speeds up these attacks to effectively halve the key length. This would mean that a 256-bit key is as strong against a quantum computer as a 128-bit key is against a conventional computer; both are secure for the foreseeable future.

For public-key cryptography, the results are more dire. Shor’s algorithm can easily break all of the commonly used public-key algorithms based on both factoring and the discrete logarithm problem. Doubling the key length increases the difficulty to break by a factor of eight. That’s not enough of a sustainable edge.

There are a lot of caveats to those two paragraphs, the biggest of which is that quantum computers capable of doing anything like this don’t currently exist, and no one knows when — or even if — we’ll be able to build one. We also don’t know what sorts of practical difficulties will arise when we try to implement Grover’s or Shor’s algorithms for anything but toy key sizes. (Error correction on a quantum computer could easily be an unsurmountable problem.) On the other hand, we don’t know what other techniques will be discovered once people start working with actual quantum computers. My bet is that we will overcome the engineering challenges, and that there will be many advances and new techniques — but they’re going to take time to discover and invent. Just as it took decades for us to get supercomputers in our pockets, it will take decades to work through all the engineering problems necessary to build large-enough quantum computers.

In the short term, cryptographers are putting considerable effort into designing and analyzing quantum-resistant algorithms, and those are likely to remain secure for decades. This is a necessarily slow process, as both good cryptanalysis and transitioning standards take time. Luckily, we have time. Practical quantum computing seems to always remain “ten years in the future,” which means no one has any idea.

After that, though, there is always the possibility that those algorithms will fall to aliens with better quantum techniques. I am less worried about symmetric cryptography, where Grover’s algorithm is basically an upper limit on quantum improvements, than I am about public-key algorithms based on number theory, which feel more fragile. It’s possible that quantum computers will someday break all of them, even those that today are quantum resistant.

If that happens, we will face a world without strong public-key cryptography. That would be a huge blow to security and would break a lot of stuff we currently do, but we could adapt. In the 1980s, Kerberos was an all-symmetric authentication and encryption system. More recently, the GSM cellular standard does both authentication and key distribution — at scale — with only symmetric cryptography. Yes, those systems have centralized points of trust and failure, but it’s possible to design other systems that use both secret splitting and secret sharing to minimize that risk. (Imagine that a pair of communicants get a piece of their session key from each of five different key servers.) The ubiquity of communications also makes things easier today. We can use out-of-band protocols where, for example, your phone helps you create a key for your computer. We can use in-person registration for added security, maybe at the store where you buy your smartphone or initialize your Internet service. Advances in hardware may also help to secure keys in this world. I’m not trying to design anything here, only to point out that there are many design possibilities. We know that cryptography is all about trust, and we have a lot more techniques to manage trust than we did in the early years of the Internet. Some important properties like forward secrecy will be blunted and far more complex, but as long as symmetric cryptography still works, we’ll still have security.

It’s a weird future. Maybe the whole idea of number theory-based encryption, which is what our modern public-key systems are, is a temporary detour based on our incomplete model of computing. Now that our model has expanded to include quantum computing, we might end up back to where we were in the late 1970s and early 1980s: symmetric cryptography, code-based cryptography, Merkle hash signatures. That would be both amusing and ironic.

Yes, I know that quantum key distribution is a potential replacement for public-key cryptography. But come on — does anyone expect a system that requires specialized communications hardware and cables to be useful for anything but niche applications? The future is mobile, always-on, embedded computing devices. Any security for those will necessarily be software only.

There’s one more future scenario to consider, one that doesn’t require a quantum computer. While there are several mathematical theories that underpin the one-wayness we use in cryptography, proving the validity of those theories is in fact one of the great open problems in computer science. Just as it is possible for a smart cryptographer to find a new trick that makes it easier to break a particular algorithm, we might imagine aliens with sufficient mathematical theory to break all encryption algorithms. To us, today, this is ridiculous. Public-key cryptography is all number theory, and potentially vulnerable to more mathematically inclined aliens. Symmetric cryptography is so much nonlinear muddle, so easy to make more complex, and so easy to increase key length, that this future is unimaginable. Consider an AES variant with a 512-bit block and key size, and 128 rounds. Unless mathematics is fundamentally different than our current understanding, that’ll be secure until computers are made of something other than matter and occupy something other than space.

But if the unimaginable happens, that would leave us with cryptography based solely on information theory: one-time pads and their variants. This would be a huge blow to security. One-time pads might be theoretically secure, but in practical terms they are unusable for anything other than specialized niche applications. Today, only crackpots try to build general-use systems based on one-time pads — and cryptographers laugh at them, because they replace algorithm design problems (easy) with key management and physical security problems (much, much harder). In our alien-ridden science-fiction future, we might have nothing else.

Against these godlike aliens, cryptography will be the only technology we can be sure of. Our nukes might refuse to detonate and our fighter jets might fall out of the sky, but we will still be able to communicate securely using one-time pads. There’s an optimism in that.

This essay originally appeared in IEEE Security and Privacy.

Build your own weather station with our new guide!

Post Syndicated from Richard Hayler original https://www.raspberrypi.org/blog/build-your-own-weather-station/

One of the most common enquiries I receive at Pi Towers is “How can I get my hands on a Raspberry Pi Oracle Weather Station?” Now the answer is: “Why not build your own version using our guide?”

Tadaaaa! The BYO weather station fully assembled.

Our Oracle Weather Station

In 2016 we sent out nearly 1000 Raspberry Pi Oracle Weather Station kits to schools from around the world who had applied to be part of our weather station programme. In the original kit was a special HAT that allows the Pi to collect weather data with a set of sensors.

The original Raspberry Pi Oracle Weather Station HAT

We designed the HAT to enable students to create their own weather stations and mount them at their schools. As part of the programme, we also provide an ever-growing range of supporting resources. We’ve seen Oracle Weather Stations in great locations with huge differences in climate, and they’ve even recorded the effects of a solar eclipse.

Our new BYO weather station guide

We only had a single batch of HATs made, and unfortunately we’ve given nearly* all the Weather Station kits away. Not only are the kits really popular, but we also receive lots of questions about how to add extra sensors or how to take more precise measurements of a particular weather phenomenon. So today, to satisfy your demand for a hackable weather station, we’re launching our Build your own weather station guide!

Fun with meteorological experiments!

Our guide suggests the use of many of the sensors from the Oracle Weather Station kit, so you can build a station that’s as close as possible to the original. As you know, the Raspberry Pi is incredibly versatile, and we’ve made it easy to hack the design in case you want to use different sensors.

Many other tutorials for Pi-powered weather stations don’t explain how the various sensors work or how to store your data. Ours goes into more detail. It shows you how to put together a breadboard prototype, it describes how to write Python code to take readings in different ways, and it guides you through recording these readings in a database.
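To give a flavour of the kind of code involved (this isn’t taken from the guide; the sensor choice and storage format here are just an example), a minimal loop that reads a DS18B20 temperature sensor with the w1thermsensor library and logs the readings to SQLite might look like this:

import sqlite3
import time
from w1thermsensor import W1ThermSensor

sensor = W1ThermSensor()
db = sqlite3.connect("weather.db")
db.execute("CREATE TABLE IF NOT EXISTS readings (taken_at TEXT, temp_c REAL)")

while True:
    # Take a reading, store it with a timestamp, then wait a minute
    temp = sensor.get_temperature()
    db.execute("INSERT INTO readings VALUES (datetime('now'), ?)", (temp,))
    db.commit()
    time.sleep(60)

The guide itself walks through several sensors and storage options in much more detail.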

Build Your Own Raspberry Pi weather station on a breadboard

There’s also a section on how to make your station weatherproof. And in case you want to move past the breadboard stage, we also help you with that. The guide shows you how to solder together all the components, similar to the original Oracle Weather Station HAT.

Who should try this build

We think this is a great project to tackle at home, at a STEM club, Scout group, or CoderDojo, and we’re sure that many of you will be chomping at the bit to get started. Before you do, please note that we’ve designed the build to be as straightforward as possible, but it’s still fairly advanced both in terms of electronics and programming. You should read through the whole guide before purchasing any components.

Build Your Own Raspberry Pi weather station – components

The sensors and components we’re suggesting balance cost, accuracy, and ease of use. Depending on what you want to use your station for, you may wish to use different components. Similarly, the final soldered design in the guide may not be the most elegant, but we think it is achievable for someone with modest soldering experience and basic equipment.

You can build a functioning weather station without soldering with our guide, but the build will be more durable if you do solder it. If you’ve never tried soldering before, that’s OK: we have a Getting started with soldering resource plus video tutorial that will walk you through how it works step by step.

Prototyping HAT for Raspberry Pi weather station sensors

For those of you who are more experienced makers, there are plenty of different ways to put the final build together. We always like to hear about alternative builds, so please post your designs in the Weather Station forum.

Our plans for the guide

Our next step is publishing supplementary guides for adding extra functionality to your weather station. We’d love to hear which enhancements you would most like to see! Our current ideas under development include adding a webcam, making a tweeting weather station, adding a light/UV meter, and incorporating a lightning sensor. Let us know which of these is your favourite, or suggest your own amazing ideas in the comments!

*We do have a very small number of kits reserved for interesting projects or locations: a particularly cool experiment, a novel idea for how the Oracle Weather Station could be used, or places with specific weather phenomena. If you have such a project in mind, please send a brief outline to [email protected], and we’ll consider how we might be able to help you.

The post Build your own weather station with our new guide! appeared first on Raspberry Pi.