Tag Archives: keys

Relay Attack against Teslas

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2022/09/relay-attack-against-teslas.html

Nice work:

Radio relay attacks are technically complicated to execute, but conceptually easy to understand: attackers simply extend the range of your existing key using what is essentially a high-tech walkie-talkie. One thief stands near you while you’re in the grocery store, intercepting your key’s transmitted signal with a radio transceiver. Another stands near your car, with another transceiver, taking the signal from their friend and passing it on to the car. Since the car and the key can now talk, through the thieves’ range extenders, the car has no reason to suspect the key isn’t inside—and fires right up.

But Tesla’s credit card keys, like many digital keys stored in cell phones, don’t work via radio. Instead, they rely on a different protocol called Near Field Communication or NFC. Those keys had previously been seen as more secure, since their range is so limited and their handshakes with cars are more complex.

Now, researchers seem to have cracked the code. By reverse-engineering the communications between a Tesla Model Y and its credit card key, they were able to properly execute a range-extending relay attack against the crossover. While this specific use case focuses on Tesla, it’s a proof of concept—NFC handshakes can, and eventually will, be reverse-engineered.

Hyundai Uses Example Keys for Encryption System

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2022/08/hyundai-uses-example-keys-for-encryption-system.html

This is a dumb crypto mistake I had not previously encountered:

A developer says it was possible to run their own software on the car infotainment hardware after discovering the vehicle’s manufacturer had secured its system using keys that were not only publicly known but had been lifted from programming examples.

[…]

“Turns out the [AES] encryption key in that script is the first AES 128-bit CBC example key listed in the NIST document SP800-38A [PDF]”.

[…]

Luck held out, in a way. “Greenluigi1” found within the firmware image the RSA public key used by the updater, and searched online for a portion of that key. The search results pointed to a common public key that shows up in online tutorials like “RSA Encryption & Decryption Example with OpenSSL in C.”
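
To make the mistake concrete, here is a minimal sketch (using the Python cryptography package; the helper name and parameters are hypothetical, and the key value is the widely published AES-128 example key quoted above). Anyone can paste the same key out of the NIST document, so "encrypting" with it provides no secrecy at all.

from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

# The AES-128 example key published in NIST SP 800-38A. Because it appears
# verbatim in the standard and in countless tutorials, it is public knowledge.
NIST_EXAMPLE_KEY = bytes.fromhex("2b7e151628aed2a6abf7158809cf4f3c")

def decrypt_firmware_blob(iv: bytes, ciphertext: bytes) -> bytes:
    # Hypothetical helper: decrypt an update blob that was "protected" with
    # the example key. Any attacker can do exactly the same thing.
    decryptor = Cipher(algorithms.AES(NIST_EXAMPLE_KEY), modes.CBC(iv)).decryptor()
    return decryptor.update(ciphertext) + decryptor.finalize()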

EDITED TO ADD (8/23): Slashdot post.

A Taxonomy of Access Control

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2022/08/a-taxonomy-of-access-control.html

My personal definition of a brilliant idea is one that is immediately obvious once it’s explained, but no one has thought of it before. I can’t believe that no one has described this taxonomy of access control before Ittay Eyal laid it out in this paper. The paper is about cryptocurrency wallet design, but the ideas are more general. Ittay points out that a key—or an account, or anything similar—can be in one of four states:

safe: Only the user has access,
loss: No one has access,
leak: Both the user and the adversary have access, or
theft: Only the adversary has access.

Once you know these states, you can assign probabilities of transitioning from one state to another (someone hacks your account and locks you out, you forgot your own password, etc.) and then build optimal security and reliability to deal with it. It’s a truly elegant way of conceptualizing the problem.
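
A toy sketch of that modeling step follows. The transition probabilities are invented for illustration (they are not from Eyal's paper): treat the four states as a Markov chain and watch where the probability mass ends up over time.

# States from the taxonomy; per-period transition probabilities are made-up
# illustrative numbers, not values from the paper.
STATES = ["safe", "loss", "leak", "theft"]
P = {
    "safe":  {"safe": 0.96, "loss": 0.02, "leak": 0.015, "theft": 0.005},
    "loss":  {"loss": 1.0},                 # absorbing: the key is gone for everyone
    "leak":  {"leak": 0.70, "theft": 0.30}, # adversary may eventually lock the user out
    "theft": {"theft": 1.0},                # absorbing: only the adversary has access
}

def step(dist):
    """Advance the probability distribution over states by one period."""
    nxt = {s: 0.0 for s in STATES}
    for s, p in dist.items():
        for t, q in P[s].items():
            nxt[t] += p * q
    return nxt

dist = {"safe": 1.0, "loss": 0.0, "leak": 0.0, "theft": 0.0}
for year in range(1, 6):
    dist = step(dist)
    print(year, {s: round(p, 3) for s, p in dist.items()})

Different key-management schemes (paper backups, hardware tokens, multi-signature wallets) change the transition probabilities; the design problem is choosing the scheme that keeps the most probability mass in the safe state for your particular threat model.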

Security Vulnerabilities in Honda’s Keyless Entry System

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2022/07/security-vulnerabilities-in-hondas-keyless-entry-system.html

Honda vehicles from 2021 to 2022 are vulnerable to this attack:

On Thursday, a security researcher who goes by Kevin2600 published a technical report and videos on a vulnerability that he claims allows anyone armed with a simple hardware device to steal the code to unlock Honda vehicles. Kevin2600, who works for cybersecurity firm Star-V Lab, dubbed the attack RollingPWN.

[…]

In a phone call, Kevin2600 explained that the attack relies on a weakness that allows someone using a software defined radio—such as HackRF—to capture the code that the car owner uses to open the car, and then replay it so that the hacker can open the car as well. In some cases, he said, the attack can be performed from 30 meters (approximately 98 feet) away.

In the videos, Kevin2600 and his colleagues show how the attack works by unlocking different models of Honda cars with a device connected to a laptop.

The Honda models that Kevin2600 and his colleagues tested the attack on use a so-called rolling code mechanism, which means that­—in theory­—every time the car owner uses the keyfob, it sends a different code to open it. This should make it impossible to capture the code and use it again. But the researchers found that there is a flaw that allows them to roll back the codes and reuse old codes to open the car, Kevin2600 said.
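
For readers unfamiliar with rolling codes, here is a simplified model of how a correct receiver is supposed to behave. It is a sketch of the general mechanism under assumed parameters, not Honda's implementation; the reported Rolling-PWN flaw is described only in the comments.

import hashlib
import hmac

SECRET = b"per-fob secret (hypothetical)"

def code_for(counter: int) -> bytes:
    # Keyfob side: derive the transmitted code from the shared secret and a counter.
    return hmac.new(SECRET, counter.to_bytes(4, "big"), hashlib.sha256).digest()[:8]

class Receiver:
    WINDOW = 256  # how far ahead of the last-seen counter the car will search

    def __init__(self):
        self.last_counter = 0

    def try_unlock(self, code: bytes) -> bool:
        # A correct rolling-code receiver only accepts counters *ahead* of the
        # last one it saw, so a captured code replayed later is useless.
        # Rolling-PWN, as described, roughly amounts to the receiver letting
        # this state roll backward, after which previously captured codes are
        # accepted again.
        for candidate in range(self.last_counter + 1, self.last_counter + 1 + self.WINDOW):
            if hmac.compare_digest(code, code_for(candidate)):
                self.last_counter = candidate
                return True
        return False

car = Receiver()
print(car.try_unlock(code_for(1)))   # True: fresh code is accepted
print(car.try_unlock(code_for(1)))   # False: replaying the same code fails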

Hertzbleed: A New Side-Channel Attack

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2022/06/hertzbleed-a-new-side-channel-attack.html

Hertzbleed is a new side-channel attack that works against a variety of microprocessors. Deducing cryptographic keys by analyzing power consumption has long been an attack, but it’s not generally viable because measuring power consumption is often hard. This new attack measures power consumption by measuring time, making it easier to exploit.

The team discovered that dynamic voltage and frequency scaling (DVFS)—a power and thermal management feature added to every modern CPU—allows attackers to deduce the changes in power consumption by monitoring the time it takes for a server to respond to specific carefully made queries. The discovery greatly reduces what’s required. With an understanding of how the DVFS feature works, power side-channel attacks become much simpler timing attacks that can be done remotely.

The researchers have dubbed their attack Hertzbleed because it uses the insights into DVFS to expose, or bleed out, data that’s expected to remain private.

[…]

The researchers have already shown how the exploit technique they developed can be used to extract an encryption key from a server running SIKE, a cryptographic algorithm used to establish a secret key between two parties over an otherwise insecure communications channel.

The researchers said they successfully reproduced their attack on Intel CPUs from the 8th to the 11th generation of the Core microarchitecture. They also claimed that the technique would work on Intel Xeon CPUs and verified that AMD Ryzen processors are vulnerable and enabled the same SIKE attack used against Intel chips. The researchers believe chips from other manufacturers may also be affected.
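
A rough sketch of what "measuring power through time" means in practice is below. The endpoint, payloads, and sample count are hypothetical, and the real attack is far more careful about noise; the point is simply that if one input drives the CPU to a different frequency than another, the difference shows up as a small but measurable gap in response times.

import statistics
import time

import requests

TARGET = "https://example.com/decapsulate"   # hypothetical endpoint

def median_response_time(payload: bytes, samples: int = 500) -> float:
    # Time many requests for the same payload; the median damps network noise.
    times = []
    for _ in range(samples):
        start = time.perf_counter()
        requests.post(TARGET, data=payload, timeout=5)
        times.append(time.perf_counter() - start)
    return statistics.median(times)

# Two crafted inputs; a consistent gap between these medians hints at
# data-dependent frequency (and hence power) behavior on the server.
print(median_response_time(b"\x00" * 32))
print(median_response_time(b"\xff" * 32))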

Hacking Tesla’s Remote Key Cards

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2022/06/hacking-teslas-remote-key-cards.html

Interesting vulnerability in Tesla’s NFC key cards:

Martin Herfurt, a security researcher in Austria, quickly noticed something odd about the new feature: Not only did it allow the car to automatically start within 130 seconds of being unlocked with the NFC card, but it also put the car in a state to accept entirely new keys—with no authentication required and zero indication given by the in-car display.

“The authorization given in the 130-second interval is too general… [it’s] not only for drive,” Herfurt said in an online interview. “This timer has been introduced by Tesla…in order to make the use of the NFC card as a primary means of using the car more convenient. What should happen is that the car can be started and driven without the user having to use the key card a second time. The problem: within the 130-second period, not only the driving of the car is authorized, but also the [enrolling] of a new key.”

Breaking RSA through Insufficiently Random Primes

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2022/03/breaking-rsa-through-insufficiently-random-primes.html

Basically, the SafeZone library doesn’t sufficiently randomize the two prime numbers it used to generate RSA keys. They’re too close to each other, which makes them vulnerable to recovery.
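
The classic way to exploit primes that are too close together is Fermat's factorization method. Here is a minimal sketch with toy numbers (not an actual SafeZone key) showing why proximity is fatal.

from math import isqrt

def fermat_factor(n: int):
    # Search for a, b with n = a^2 - b^2 = (a - b)(a + b). When the two prime
    # factors are close to each other, a suitable a sits just above sqrt(n),
    # so the loop terminates almost immediately.
    a = isqrt(n)
    if a * a < n:
        a += 1
    while True:
        b2 = a * a - n
        b = isqrt(b2)
        if b * b == b2:
            return a - b, a + b
        a += 1

# Toy example with deliberately close primes (not a real key).
p, q = 1000003, 1000033
print(fermat_factor(p * q))   # (1000003, 1000033), recovered immediately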

There aren’t many weak keys out there, but there are some:

So far, Böck has identified only a handful of keys in the wild that are vulnerable to the factorization attack. Some of the keys are from printers from two manufacturers, Canon and Fujifilm (originally branded as Fuji Xerox). Printer users can use the keys to generate a Certificate Signing Request. The creation date for all the weak keys was 2020 or later. The weak Canon keys are tracked as CVE-2022-26351.

Böck also found four vulnerable PGP keys, typically used to encrypt email, on SKS PGP key servers. A user ID tied to the keys implied they were created for testing, so he doesn’t believe they’re in active use.

DiceKeys

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2020/08/dicekeys.html

DiceKeys is a physical mechanism for creating and storing a 192-bit key. The idea is that you roll a special set of twenty-five dice, put them into a plastic jig, and then use an app to convert those dice into a key. You can then use that key for a variety of purposes, and regenerate it from the dice if you need to.

This week Stuart Schechter, a computer scientist at the University of California, Berkeley, is launching DiceKeys, a simple kit for physically generating a single super-secure key that can serve as the basis for creating all the most important passwords in your life for years or even decades to come. With little more than a plastic contraption that looks a bit like a Boggle set and an accompanying web app to scan the resulting dice roll, DiceKeys creates a highly random, mathematically unguessable key. You can then use that key to derive master passwords for password managers, as the seed to create a U2F key for two-factor authentication, or even as the secret key for cryptocurrency wallets. Perhaps most importantly, the box of dice is designed to serve as a permanent, offline key to regenerate that master password, crypto key, or U2F token if it gets lost, forgotten, or broken.

[…]

Schechter is also building a separate app that will integrate with DiceKeys to allow users to write a DiceKeys-generated key to their U2F two-factor authentication token. Currently the app works only with the open-source SoloKey U2F token, but Schechter hopes to expand it to be compatible with more commonly used U2F tokens before DiceKeys ship out. The same API that allows that integration with his U2F token app will also allow cryptocurrency wallet developers to integrate their wallets with DiceKeys, so that with a compatible wallet app, DiceKeys can generate the cryptographic key that protects your crypto coins too.
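
The sketch below shows the general idea of deterministic derivation from a dice reading; it is not DiceKeys' actual encoding or key-derivation function, just an illustration. The reading format and the SHA-256 truncation are assumptions.

import hashlib

def key_from_dice(reading: str) -> bytes:
    # 'reading' is one token per die (letter face, digit, orientation), e.g. "A3T B1R ...".
    # Canonicalize so the same physical dice always produce the same string,
    # then hash down to 192 bits that can be re-derived from the dice later.
    canonical = " ".join(reading.split()).upper()
    return hashlib.sha256(canonical.encode()).digest()[:24]   # 24 bytes = 192 bits

master = key_from_dice("A3T B1R C6L D2T E4R")   # placeholder reading; 25 tokens in practice
print(master.hex())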

Here’s the DiceKeys website and app. Here’s a short video demo. Here’s a longer SOUPS talk.

Preorder a set here.

Note: I am an adviser on the project.

Another news article. Slashdot thread. Hacker News thread. Reddit thread.

Copying a Key by Listening to It in Action

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2020/08/copying_a_key_b.html

Researchers are using recordings of keys being used in locks to create copies.

Once they have a key-insertion audio file, SpiKey’s inference software gets to work filtering the signal to reveal the strong, metallic clicks as key ridges hit the lock’s pins [and you can hear those filtered clicks online here]. These clicks are vital to the inference analysis: the time between them allows the SpiKey software to compute the key’s inter-ridge distances and what locksmiths call the “bitting depth” of those ridges: basically, how deeply they cut into the key shaft, or where they plateau out. If a key is inserted at a nonconstant speed, the analysis can be ruined, but the software can compensate for small speed variations.

The result of all this is that SpiKey software outputs the three most likely key designs that will fit the lock used in the audio file, reducing the potential search space from 330,000 keys to just three. “Given that the profile of the key is publicly available for commonly used [pin-tumbler lock] keys, we can 3D-print the keys for the inferred bitting codes, one of which will unlock the door,” says Ramesh.
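
To see the core timing-to-geometry step, here is a simplified sketch with made-up click times and an assumed constant insertion speed. SpiKey's actual inference also handles speed variation and maps the distances onto standardized bitting depths.

# Made-up click times (seconds) extracted from the filtered audio, and an
# assumed constant insertion speed in millimetres per second.
click_times = [0.000, 0.048, 0.101, 0.149, 0.203]
speed_mm_per_s = 80.0

# With constant speed, the gap between consecutive clicks maps directly to
# the distance between adjacent ridges on the key.
ridge_spacings_mm = [
    (t2 - t1) * speed_mm_per_s
    for t1, t2 in zip(click_times, click_times[1:])
]
print([round(d, 2) for d in ridge_spacings_mm])   # e.g. [3.84, 4.24, 3.84, 4.32]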

How to lower costs by automatically deleting and recreating HSMs

Post Syndicated from David Ogunmola original https://aws.amazon.com/blogs/security/how-to-lower-costs-by-automatically-deleting-and-recreating-hsms/

You can use AWS CloudHSM to help manage your encryption keys on FIPS 140-2 Level 3 validated hardware security modules (HSMs). AWS recommends running a high-availability production architecture with at least two CloudHSM HSMs in different Availability Zones. Although many workloads must be available 24/7, quality assurance or development environments typically do not have this requirement.

In this post, we show you how to automate the deletion and recreation of HSMs when you do not have a requirement for high availability. Using this approach of deleting HSMs and restoring them from backups on a predefined schedule can help lower your monthly CloudHSM costs. For more information on the CloudHSM backup process, see the CloudHSM cluster backup documentation.

Prerequisites

Solution overview

For this solution, you use the following AWS services to automate the process of deleting and restoring HSMs running non-production workloads:

  • AWS Lambda
  • Amazon CloudWatch Events
  • Amazon DynamoDB
  • Amazon SNS
  • AWS CloudFormation
  • AWS Identity and Access Management (IAM)

Figure 1: Architectural diagram

Here’s how the process works, as shown in Figure 1:

  1. At the scheduled time (we are using 7:30 PM UTC in this example), a CloudWatch Events rule triggers the DeleteHSM Lambda function.
  2. The DeleteHSM Lambda function stores the HSM metadata, such as IP address and Availability Zone, in a DynamoDB table, deletes the HSMs from the cluster, and sends an email notification.
  3. At the scheduled time (we are using 7:30 AM UTC in this example), another CloudWatch Events rule triggers the AddHSM Lambda function.
  4. The AddHSM Lambda function retrieves the HSM metadata in the DynamoDB table, creates the HSMs into the cluster with the same IP address and Availability Zone, and sends an email notification.

Note: In this solution, we use the same IP address when creating a new HSM, so you don’t need to modify the configuration files for the CloudHSM client instance connecting to the HSM.

Deployment steps

  1. To open the CloudFormation template, select the Launch Stack button below.


  2. Give your stack a name.
  3. Under Parameters, enter values for the following parameters based on your requirements:
    • ClusterId: The ID of the existing CloudHSM cluster that you wish to use.
    • CreateTime: The time when your HSMs should be created in UTC. This parameter must be a valid cron expression. The CreateTime shown in Figure 2 is 7:30 UTC Mon-Fri.
    • DeleteTime: The time when your HSMs should be deleted in UTC. This parameter must be a valid cron expression. The DeleteTime shown in Figure 2 is 19:30 UTC Mon-Fri.
    • EmailAddress: The email address that will be subscribed to the SNS topic for creation and deletion events for your HSMs.

    Figure 2: Specify the parameters in CloudHSM

  4. On the Specify stack details page, select Next, and then, on the Configure stack options page, select Next.
  5. On the Review page, check the box that says I acknowledge that AWS CloudFormation might create IAM resources with custom names, and then select Create stack, as shown in Figure 3.

    Figure 3: Check the box to acknowledge the conditions

  6. After you create the stack, the CloudFormation template automatically creates an SNS topic that notifies you when HSMs are deleted or created in your cluster. You must subscribe to this topic so you can receive the alerts. Select Confirm subscription, as shown in Figure 4.

    Figure 4: Select ‘Confirm subscription’ to subscribe to the SNS topic

AWS resources deployed by the CloudFormation stack

When stack creation is complete, the template will have deployed the following AWS resources:

  1. An AWS Identity and Access Management (IAM) role named ClusterExecutionRole for the Lambda functions to use on each invocation. The IAM role uses the managed policy AWSLambdaBasicExecutionRole and an inline policy called CloudHSMPermissions. The managed policy provides the Lambda execution role permission to write CloudWatch Logs. The inline policy grants the role permission to delete an HSM, create an HSM, publish to an SNS topic, create a DynamoDB table, put items into the table, and retrieve items from the table. To learn more about CloudHSM permissions, see Predefined AWS Managed Policies for AWS CloudHSM. The CloudHSMPermissions inline policy is shown below.

    Note: The resource names in the policy shown below are examples. The template will update them to match resources in your account when the solution is deployed.

    
    {
      "Version": "2012-10-17",
      "Statement": [
        {
          "Sid": "DynamoDBPermissions",
          "Effect": "Allow",
          "Action": [
            "dynamodb:GetItem",
            "dynamodb:PutItem",
            "dynamodb:Query"
          ],
          "Resource": [
            "arn:aws:dynamodb:us-west-1:111122223333:table/DynamoDBTable-blogpost",
            "arn:aws:dynamodb:us-west-1:111122223333:table/DynamoDBTable-blogpost/*"
          ]
        },
        {
          "Sid": "CloudHSMPerClusterPermissions",
          "Effect": "Allow",
          "Action": [
            "cloudhsm:DeleteHsm",
            "cloudhsm:CreateHsm"
          ],
          "Resource": "arn:aws:cloudhsm:us-west-1:111122223333:cluster/cluster-id"
        },
        {
          "Sid": "DescribeCreatePermissions",
          "Effect": "Allow",
          "Action": [
            "cloudhsm:DescribeClusters",
            "ec2:DeleteNetworkInterface",
            "ec2:CreateNetworkInterface",
            "ec2:AuthorizeSecurityGroupIngress",
            "ec2:AuthorizeSecurityGroupEgress",
            "ec2:RevokeSecurityGroupEgress",
            "ec2:CreateSecurityGroup",
            "ec2:DescribeNetworkInterfaces",
            "ec2:DescribeSubnets",
            "ec2:DescribeSecurityGroups"
          ],
          "Resource": "*",
          "Condition": {
            "StringEquals": {
              "aws: RequestedRegion": "us-west-1"
            }
          }
        },
        {
          "Sid": "SNSPermission",
          "Effect": "Allow",
          "Action": "sns: Publish",
          "Resource": "arn:aws:sns:us-west-1:111122223333:SNSTopicName"
        }
      ]
    }
    

  2. Two CloudWatch Events rules: one rule for creating the HSMs and one rule for deleting the HSMs. These rules are triggered at the times that you specified during creation of the stack (a boto3 sketch of equivalent rules follows this list).
  3. Two Lambda functions: AddHSM and DeleteHSM. You’ll learn more about what each function does in the following section.
  4. An SNS topic that notifies you when HSMs have been created or deleted.
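
For reference, the two scheduled rules boil down to cron expressions like the ones below. This boto3 sketch creates equivalent rules by hand; in the deployed solution the CloudFormation template creates them from your CreateTime and DeleteTime parameters and also attaches the Lambda functions as targets, so you would not normally run this yourself.

import boto3

events = boto3.client('events')

# Delete the HSMs at 19:30 UTC, Monday through Friday (the example DeleteTime).
events.put_rule(
    Name='ScheduledRuleDeleteHSM',
    ScheduleExpression='cron(30 19 ? * MON-FRI *)',
    State='ENABLED',
)

# Recreate the HSMs at 07:30 UTC, Monday through Friday (the example CreateTime).
events.put_rule(
    Name='ScheduledRuleAddHSM',
    ScheduleExpression='cron(30 7 ? * MON-FRI *)',
    State='ENABLED',
)

# Each rule still needs a target (events.put_targets) pointing at the
# corresponding Lambda function; the template handles that for you.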

Detailed walkthrough of the Lambda functions

We’ll walk you through the code in the Lambda functions that get triggered to delete and recreate the HSMs at scheduled intervals.

Delete CloudHSMs Lambda function

At the scheduled time (7:30 PM UTC in this example), the ScheduledRuleDeleteHSM CloudWatch Events event is triggered.

As shown below, the DeleteHSM Lambda function first reads the cluster ID, table name, and SNS topic ARN from environment variables that the CloudFormation stack sets based on the parameters you entered. At the scheduled time, the CloudWatch Events rule triggers the Lambda function, which creates a DynamoDB table the first time it runs.

The code then makes a Describe API call with the cluster ID provided in the CloudFormation stack to retrieve the HSM details, such as the HSM IP and Availability Zones. These items are saved to the DynamoDB table for later use, and the HSMs in the cluster are deleted. All recipients subscribed to the SNS topic receive an SNS notification showing the details of the HSMs deleted from the cluster.


import boto3, sys, time
from datetime import datetime

from botocore.exceptions import ClientError
from os import environ

ClusterId = environ.get('ClusterId')
TableName = environ.get('TableName')
TopicARN = environ.get('TopicARN')

def lambda_handler(event, context):
    cloudhsm_client = boto3.client('cloudhsmv2')
    dynamodb_resource = boto3.resource('dynamodb')

    # Phase 1: record the metadata (Availability Zone and IP) of every ACTIVE
    # HSM in the cluster so the AddHSM function can recreate them later.
    try:
        response = cloudhsm_client.describe_clusters(Filters={'clusterIds': [ClusterId]})
        for item in response['Clusters'][0]['Hsms']:
            if item['State'] == 'ACTIVE' and item['ClusterId'] == ClusterId:
                table = dynamodb_resource.Table(TableName)
                table.put_item(Item={
                    'ClusterId': item['ClusterId'],
                    'AvailabilityZone': item['AvailabilityZone'],
                    'IpAddress': item['EniIp'],
                })
                print(item['AvailabilityZone'])
                print(item['ClusterId'])
                print(item['EniIp'])
                print(item['State'])
            else:
                print('HSMs in Cluster {} not in ACTIVE State'.format(ClusterId))
    except Exception as e:
        print(e)
        sys.exit(1)
    time.sleep(5)

    # Phase 2: delete each ACTIVE HSM and notify the SNS topic subscribers.
    response = cloudhsm_client.describe_clusters(Filters={'clusterIds': [ClusterId]})
    for item in response['Clusters'][0]['Hsms']:
        if item['State'] == 'ACTIVE' and item['ClusterId'] == ClusterId:
            print('Deleting HSMs {0} in Cluster {1}'.format(item['EniIp'], item['ClusterId']))
            response = cloudhsm_client.delete_hsm(ClusterId=item['ClusterId'], EniIp=item['EniIp'])
            try:
                sns_client = boto3.client('sns')
                message_subject = '[{timestamp}] Deleting HSMs From Cluster!'.format(timestamp=datetime.now().strftime('%b/%d/%Y %H:%M'))
                message_body = 'These are the details of the Deleted HSM:\n\nCluster ID: {ClusterId}\n\nHSM IP: {EniIp}\n\nAvailability Zone: {AvailabilityZone}\n\n'.format(**item)
                print('Sending SNS notification...')
                sns_response = sns_client.publish(TopicArn=TopicARN, Message=message_body, Subject=message_subject)
            except Exception as e:
                print('Exception: %s' % e)
        else:
            print('HSMs in Cluster {} not in ACTIVE State'.format(ClusterId))

Create CloudHSMs Lambda function

At the scheduled time (7:30 AM UTC in this example), the AddHSM Lambda function is triggered by the CloudWatch Events rule ScheduledRuleAddHSM to add the HSMs back to the cluster. The code shown below retrieves the CloudHSM details, including the CloudHSM IP and Availability Zone, from the DynamoDB table. Next, the CloudHSMs are created with the same IP address into the same Availability Zone. This saves you the effort of having to make configuration changes on the CloudHSM client instances connecting to the HSM because the same IPs are used. The CloudHSM daemon installed on the client instance reconnects automatically to the HSMs immediately as they become active. All recipients subscribed to the SNS topic receive an SNS notification showing the details of the HSMs created.


import boto3
from datetime import datetime
from os import environ
from boto3.dynamodb.conditions import Key

ClusterId = environ.get('ClusterId')
TableName = environ.get('TableName')
TopicARN = environ.get('TopicARN')

def lambda_handler(event, context):
    dynamodb_resource = boto3.resource('dynamodb')
    table = dynamodb_resource.Table(TableName)

    # Look up the metadata saved by the DeleteHSM function, then recreate each
    # HSM in its original Availability Zone with its original IP address.
    resp = table.query(KeyConditionExpression=Key('ClusterId').eq(ClusterId))
    for item in resp['Items']:
        try:
            cloudhsm_client = boto3.client('cloudhsmv2')
            response = cloudhsm_client.create_hsm(ClusterId=ClusterId, AvailabilityZone=item['AvailabilityZone'], IpAddress=item['IpAddress'])
            # Notify the SNS topic subscribers about the newly created HSM.
            try:
                sns_client = boto3.client('sns')
                message_subject = '[{timestamp}] Adding HSMs To Cluster!'.format(timestamp=datetime.now().strftime('%b/%d/%Y %H:%M'))
                message_body = 'These are the details of the Newly Created HSM:\n\nCluster ID: {ClusterId}\n\nHSM IP: {IpAddress}\n\nAvailability Zone: {AvailabilityZone}\n\n'.format(**item)
                print('Sending SNS notification...')
                sns_response = sns_client.publish(TopicArn=TopicARN, Message=message_body, Subject=message_subject)
            except Exception as e:
                print('Exception: %s' % e)
        except Exception as e:
            print('Failure Adding HSM to Cluster {}: {}'.format(ClusterId, e))

Note: If you decide to schedule the stop and start of your client instances as described above, you must ensure that the CloudHSM client daemon automatically starts running when the instance(s) boot up so the connection to your CloudHSM cluster will resume.

Conclusion

In this post, you learned an approach to help lower your monthly CloudHSM costs for environments that don’t need to be running 24/7. You learned how to achieve this cost savings by using scheduled CloudWatch Events rules to trigger Lambda functions that delete and recreate the CloudHSMs in your cluster on a specified schedule without modifying client configuration files.

If you have feedback about this post, submit comments in the Comments section below. If you have questions about this post, start a new thread on the AWS CloudHSM forum or contact AWS Support.

Want more AWS Security how-to content, news, and feature announcements? Follow us on Twitter.

Author

David Ogunmola

David is a Security Engineer at AWS. He enjoys the culture at Amazon because it aligns with his dedication to lifelong learning. He holds an MS in Cyber Security from the University of Nebraska. Outside of work, he loves watching soccer and experiencing new cultures.

Author

Gabriel Santamaria

Gabriel is a Senior Technical Account Manager at AWS. He holds an MS in Information Technology from George Mason University, as well as the AWS Solutions Architect Professional, DevOps Professional, and Security Specialist certifications. In his free time he enjoys spending time with his family catching up on the latest TV shows and is an avid fan of board games.

Bank Card "Master Key" Stolen

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2020/06/bank_card_maste.html

South Africa’s Postbank experienced a catastrophic security failure. The bank’s master PIN key was stolen, forcing it to cancel and replace 12 million bank cards.

The breach resulted from the printing of the bank’s encrypted master key in plain, unencrypted digital language at the Postbank’s old data centre in the Pretoria city centre.

According to a number of internal Postbank reports, which the Sunday Times obtained, the master key was then stolen by employees.

One of the reports said that the cards would cost about R1bn to replace. The master key, a 36-digit code, allows anyone who has it to gain unfettered access to the bank’s systems, and allows them to read and rewrite account balances, and change information and data on any of the bank’s 12-million cards.

The bank lost $3.2 million in fraudulent transactions before the theft was discovered. Replacing all the cards will cost an estimated $58 million.

Another Intel Speculative Execution Vulnerability

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2020/06/another_intel_s.html

Remember Spectre and Meltdown? Back in early 2018, I wrote:

Spectre and Meltdown are pretty catastrophic vulnerabilities, but they only affect the confidentiality of data. Now that they — and the research into the Intel ME vulnerability — have shown researchers where to look, more is coming — and what they’ll find will be worse than either Spectre or Meltdown. There will be vulnerabilities that will allow attackers to manipulate or delete data across processes, potentially fatal in the computers controlling our cars or implanted medical devices. These will be similarly impossible to fix, and the only strategy will be to throw our devices away and buy new ones.

That has turned out to be true. Here’s a new vulnerability:

On Tuesday, two separate academic teams disclosed two new and distinctive exploits that pierce Intel’s Software Guard eXtension, by far the most sensitive region of the company’s processors.

[…]

The new SGX attacks are known as SGAxe and CrossTalk. Both break into the fortified CPU region using separate side-channel attacks, a class of hack that infers sensitive data by measuring timing differences, power consumption, electromagnetic radiation, sound, or other information from the systems that store it. The assumptions for both attacks are roughly the same. An attacker has already broken the security of the target machine through a software exploit or a malicious virtual machine that compromises the integrity of the system. While that’s a tall bar, it’s precisely the scenario that SGX is supposed to defend against.

Another news article.

Securing Internet Videoconferencing Apps: Zoom and Others

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2020/04/secure_internet.html

The NSA just published a survey of video conferencing apps. So did Mozilla.

Zoom is on the good list, with some caveats. The company has done a lot of work addressing previous security concerns. It still has a bit to go on end-to-end encryption. Matthew Green looked at this. Zoom does offer end-to-end encryption if 1) everyone is using a Zoom app, and not logging in to the meeting using a webpage, and 2) the meeting is not being recorded in the cloud. That’s pretty good, but the real worry is where the encryption keys are generated and stored. According to Citizen Lab, the company generates them.

The Zoom transport protocol adds Zoom’s own encryption scheme to RTP in an unusual way. By default, all participants’ audio and video in a Zoom meeting appears to be encrypted and decrypted with a single AES-128 key shared amongst the participants. The AES key appears to be generated and distributed to the meeting’s participants by Zoom servers. Zoom’s encryption and decryption use AES in ECB mode, which is well-understood to be a bad idea, because this mode of encryption preserves patterns in the input.
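
To see why ECB is a bad idea, here is a minimal, self-contained illustration (a random key and made-up plaintext, nothing to do with Zoom's actual code): identical plaintext blocks produce identical ciphertext blocks, so structure in the input shows through the encryption.

import os

from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

key = os.urandom(16)                 # a random AES-128 key
block = b"SIXTEEN BYTE BLK"          # exactly one 16-byte plaintext block
plaintext = block * 4                # repeated content, like a flat region of video

encryptor = Cipher(algorithms.AES(key), modes.ECB()).encryptor()
ciphertext = encryptor.update(plaintext) + encryptor.finalize()

blocks = [ciphertext[i:i + 16] for i in range(0, len(ciphertext), 16)]
print(len(set(blocks)))              # 1: all four ciphertext blocks are identical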

The algorithm part was just fixed:

AES 256-bit GCM encryption: Zoom is upgrading to the AES 256-bit GCM encryption standard, which offers increased protection of your meeting data in transit and resistance against tampering. This provides confidentiality and integrity assurances on your Zoom Meeting, Zoom Video Webinar, and Zoom Phone data. Zoom 5.0, which is slated for release within the week, supports GCM encryption, and this standard will take effect once all accounts are enabled with GCM. System-wide account enablement will take place on May 30.

There is nothing in Zoom’s latest announcement about key management. So: while the company has done a really good job improving the security and privacy of their platform, there seems to be just one step remaining.

Finally — I use Zoom all the time. I finished my Harvard class using Zoom; it’s the university standard. I am having Inrupt company meetings on Zoom. I am having professional and personal conferences on Zoom. It’s what everyone has, and the features are really good.

DNSSEC Keysigning Ceremony Postponed Because of Locked Safe

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2020/02/dnssec_keysigni.html

Interesting collision of real-world and Internet security:

The ceremony sees several trusted internet engineers (a minimum of three and up to seven) from across the world descend on one of two secure locations — one in El Segundo, California, just south of Los Angeles, and the other in Culpeper, Virginia — both in America, every three months.

Once in place, they run through a lengthy series of steps and checks to cryptographically sign the digital key pairs used to secure the internet’s root zone. (Here’s Cloudflare‘s in-depth explanation, and IANA’s PDF step-by-step guide.)

[…]

Only specific named people are allowed to take part in the ceremony, and they have to pass through several layers of security — including doors that can only be opened through fingerprint and retinal scans — before getting in the room where the ceremony takes place.

Staff open up two safes, each roughly one-metre across. One contains a hardware security module that contains the private portion of the KSK. The module is activated, allowing the KSK private key to sign keys, using smart cards assigned to the ceremony participants. These credentials are stored in deposit boxes and tamper-proof bags in the second safe. Each step is checked by everyone else, and the event is livestreamed. Once the ceremony is complete — which takes a few hours — all the pieces are separated, sealed, and put back in the safes inside the secure facility, and everyone leaves.

But during what was apparently a check on the system on Tuesday night — the day before the ceremony planned for 1300 PST (2100 UTC) Wednesday — IANA staff discovered that they couldn’t open one of the two safes. One of the locking mechanisms wouldn’t retract and so the safe stayed stubbornly shut.

As soon as they discovered the problem, everyone involved, including those who had flown in for the occasion, were told that the ceremony was being postponed. Thanks to the complexity of the problem — a jammed safe with critical and sensitive equipment inside — they were told it wasn’t going to be possible to hold the ceremony on the back-up date of Thursday, either.

New SHA-1 Attack

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2020/01/new_sha-1_attac.html

There’s a new, practical, collision attack against SHA-1:

In this paper, we report the first practical implementation of this attack, and its impact on real-world security with a PGP/GnuPG impersonation attack. We managed to significantly reduce the complexity of collisions attack against SHA-1: on an Nvidia GTX 970, identical-prefix collisions can now be computed with a complexity of 2^61.2 rather than 2^64.7, and chosen-prefix collisions with a complexity of 2^63.4 rather than 2^67.1. When renting cheap GPUs, this translates to a cost of 11k US$ for a collision, and 45k US$ for a chosen-prefix collision, within the means of academic researchers. Our actual attack required two months of computations using 900 Nvidia GTX 1060 GPUs (we paid 75k US$ because GPU prices were higher, and we wasted some time preparing the attack).

It has practical applications:

We chose the PGP/GnuPG Web of Trust as demonstration of our chosen-prefix collision attack against SHA-1. The Web of Trust is a trust model used for PGP that relies on users signing each other’s identity certificate, instead of using a central PKI. For compatibility reasons the legacy branch of GnuPG (version 1.4) still uses SHA-1 by default for identity certification.

Using our SHA-1 chosen-prefix collision, we have created two PGP keys with different UserIDs and colliding certificates: key B is a legitimate key for Bob (to be signed by the Web of Trust), but the signature can be transferred to key A which is a forged key with Alice’s ID. The signature will still be valid because of the collision, but Bob controls key A with the name of Alice, and signed by a third party. Therefore, he can impersonate Alice and sign any document in her name.

From a news article:

The new attack is significant. While SHA1 has been slowly phased out over the past five years, it remains far from being fully deprecated. It’s still the default hash function for certifying PGP keys in the legacy 1.4 version branch of GnuPG, the open-source successor to PGP application for encrypting email and files. Those SHA1-generated signatures were accepted by the modern GnuPG branch until recently, and were only rejected after the researchers behind the new collision privately reported their results.

Git, the world’s most widely used system for managing software development among multiple people, still relies on SHA1 to ensure data integrity. And many non-Web applications that rely on HTTPS encryption still accept SHA1 certificates. SHA1 is also still allowed for in-protocol signatures in the Transport Layer Security and Secure Shell protocols.

Chrome Extension Stealing Cryptocurrency Keys and Passwords

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2020/01/chrome_extensio.html

A malicious Chrome extension surreptitiously steals Ethereum keys and passwords:

According to Denley, the extension is dangerous to users in two ways. First, any funds (ETH coins and ERC20-based tokens) managed directly inside the extension are at risk.

Denley says that the extension sends the private keys of all wallets created or managed through its interface to a third-party website located at erc20wallet[.]tk.

Second, the extension also actively injects malicious JavaScript code when users navigate to five well-known and popular cryptocurrency management platforms. This code steals login credentials and private keys, data that is then sent to the same erc20wallet[.]tk third-party website.

Another example of how blockchain requires many single points of trust in order to be secure.

TPM-Fail Attacks Against Cryptographic Coprocessors

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2019/11/tpm-fail_attack.html

Really interesting research: TPM-FAIL: TPM meets Timing and Lattice Attacks, by Daniel Moghimi, Berk Sunar, Thomas Eisenbarth, and Nadia Heninger.

Abstract: Trusted Platform Module (TPM) serves as a hardware-based root of trust that protects cryptographic keys from privileged system and physical adversaries. In this work, we perform a black-box timing analysis of TPM 2.0 devices deployed on commodity computers. Our analysis reveals that some of these devices feature secret-dependent execution times during signature generation based on elliptic curves. In particular, we discovered timing leakage on an Intel firmware-based TPM as well as a hardware TPM. We show how this information allows an attacker to apply lattice techniques to recover 256-bit private keys for ECDSA and ECSchnorr signatures. On Intel fTPM, our key recovery succeeds after about 1,300 observations and in less than two minutes. Similarly, we extract the private ECDSA key from a hardware TPM manufactured by STMicroelectronics, which is certified at Common Criteria (CC) EAL 4+, after fewer than 40,000 observations. We further highlight the impact of these vulnerabilities by demonstrating a remote attack against a StrongSwan IPsec VPN that uses a TPM to generate the digital signatures for authentication. In this attack, the remote client recovers the server’s private authentication key by timing only 45,000 authentication handshakes via a network connection.

The vulnerabilities we have uncovered emphasize the difficulty of correctly implementing known constant-time techniques, and show the importance of evolutionary testing and transparent evaluation of cryptographic implementations. Even certified devices that claim resistance against attacks require additional scrutiny by the community and industry, as we learn more about these attacks.

These are real attacks, and they take between 4 and 20 minutes to extract the key. Intel has a firmware update.

Attack website. News articles. Boing Boing post. Slashdot thread.

NordVPN Breached

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2019/10/nordvpn_breache.html

There was a successful attack against NordVPN:

Based on the command log, another of the leaked secret keys appeared to secure a private certificate authority that NordVPN used to issue digital certificates. Those certificates might be issued for other servers in NordVPN’s network or for a variety of other sensitive purposes. The name of the third certificate suggested it could also have been used for many different sensitive purposes, including securing the server that was compromised in the breach.

The revelations came as evidence surfaced suggesting that two rival VPN services, TorGuard and VikingVPN, also experienced breaches that leaked encryption keys. In a statement, TorGuard said a secret key for a transport layer security certificate for *.torguardvpnaccess.com was stolen. The theft happened in a 2017 server breach. The stolen data related to a squid proxy certificate.

TorGuard officials said on Twitter that the private key was not on the affected server and that attackers “could do nothing with those keys.” Monday’s statement went on to say TorGuard didn’t remove the compromised server until early 2018. TorGuard also said it learned of VPN breaches last May, “and in a related development we filed a legal complaint against NordVPN.”

The breach happened nineteen months ago, but the company is only just disclosing it to the public. We don’t know exactly what was stolen and how it affects VPN security. More details are needed.

VPNs are a shadowy world. We use them to protect our Internet traffic when we’re on a network we don’t trust, but we’re forced to trust the VPN instead. Recommendations are hard. NordVPN’s website says that the company is based in Panama. Do we have any reason to trust it at all?

I’m curious what VPNs others use, and why they should be believed to be trustworthy.

Crown Sterling Claims to Factor RSA Keylengths First Factored Twenty Years Ago

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2019/09/crown_sterling_.html

Earlier this month, I made fun of a company called Crown Sterling, for…for…for being a company that deserves being made fun of.

This morning, the company announced that they “decrypted two 256-bit asymmetric public keys in approximately 50 seconds from a standard laptop computer.” Really. They did. This keylength is so small it has never been considered secure. It was too small to be part of the RSA Factoring Challenge when it was introduced in 1991. In 1977, when Ron Rivest, Adi Shamir, and Len Adleman first described RSA, they included a challenge with a 426-bit key. (It was factored in 1994.)

The press release goes on: “Crown Sterling also announced the consistent decryption of 512-bit asymmetric public key in as little as five hours also using standard computing.” They didn’t demonstrate it, but if they’re right they’ve matched a factoring record set in 1999. Five hours is significantly less than the 5.2 months it took in 1999, but slower than would be expected if Crown Sterling just used the 1999 techniques with modern CPUs and networks.

Is anyone taking this company seriously anymore? I honestly wouldn’t be surprised if this was a hoax press release. It’s not currently on the company’s website. (And, if it is a hoax, I apologize to Crown Sterling. I’ll post a retraction as soon as I hear from you.)

EDITED TO ADD: First, the press release is real. And second, I forgot to include the quote from CEO Robert Grant: “Today’s decryptions demonstrate the vulnerabilities associated with the current encryption paradigm. We have clearly demonstrated the problem which also extends to larger keys.”

People, this isn’t hard. Find an RSA Factoring Challenge number that hasn’t been factored yet and factor it. Once you do, the entire world will take you seriously. Until you do, no one will. And, bonus, you won’t have to reveal your super-secret world-destabilizing cryptanalytic techniques.

EDITED TO ADD (9/21): Others are laughing at this, too.

EDITED TO ADD (9/24): More commentary.