Tag Archives: keys

DNSSEC Keysigning Ceremony Postponed Because of Locked Safe

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2020/02/dnssec_keysigni.html

Interesting collision of real-world and Internet security:

The ceremony sees several trusted internet engineers (a minimum of three and up to seven) from across the world descend on one of two secure locations — one in El Segundo, California, just south of Los Angeles, and the other in Culpeper, Virginia — both in America, every three months.

Once in place, they run through a lengthy series of steps and checks to cryptographically sign the digital key pairs used to secure the internet’s root zone. (Here’s Cloudflare‘s in-depth explanation, and IANA’s PDF step-by-step guide.)

[…]

Only specific named people are allowed to take part in the ceremony, and they have to pass through several layers of security — including doors that can only be opened through fingerprint and retinal scans — before getting in the room where the ceremony takes place.

Staff open up two safes, each roughly one-metre across. One contains a hardware security module that contains the private portion of the KSK. The module is activated, allowing the KSK private key to sign keys, using smart cards assigned to the ceremony participants. These credentials are stored in deposit boxes and tamper-proof bags in the second safe. Each step is checked by everyone else, and the event is livestreamed. Once the ceremony is complete — which takes a few hours — all the pieces are separated, sealed, and put back in the safes inside the secure facility, and everyone leaves.

But during what was apparently a check on the system on Tuesday night — the day before the ceremony planned for 1300 PST (2100 UTC) Wednesday — IANA staff discovered that they couldn’t open one of the two safes. One of the locking mechanisms wouldn’t retract and so the safe stayed stubbornly shut.

As soon as they discovered the problem, everyone involved, including those who had flown in for the occasion, was told that the ceremony was being postponed. Thanks to the complexity of the problem — a jammed safe with critical and sensitive equipment inside — they were told it wasn’t going to be possible to hold the ceremony on the back-up date of Thursday, either.

New SHA-1 Attack

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2020/01/new_sha-1_attac.html

There’s a new, practical, collision attack against SHA-1:

In this paper, we report the first practical implementation of this attack, and its impact on real-world security with a PGP/GnuPG impersonation attack. We managed to significantly reduce the complexity of collision attacks against SHA-1: on an Nvidia GTX 970, identical-prefix collisions can now be computed with a complexity of 2^61.2 rather than 2^64.7, and chosen-prefix collisions with a complexity of 2^63.4 rather than 2^67.1. When renting cheap GPUs, this translates to a cost of 11k US$ for a collision, and 45k US$ for a chosen-prefix collision, within the means of academic researchers. Our actual attack required two months of computations using 900 Nvidia GTX 1060 GPUs (we paid 75k US$ because GPU prices were higher, and we wasted some time preparing the attack).

It has practical applications:

We chose the PGP/GnuPG Web of Trust as demonstration of our chosen-prefix collision attack against SHA-1. The Web of Trust is a trust model used for PGP that relies on users signing each other’s identity certificate, instead of using a central PKI. For compatibility reasons the legacy branch of GnuPG (version 1.4) still uses SHA-1 by default for identity certification.

Using our SHA-1 chosen-prefix collision, we have created two PGP keys with different UserIDs and colliding certificates: key B is a legitimate key for Bob (to be signed by the Web of Trust), but the signature can be transferred to key A which is a forged key with Alice’s ID. The signature will still be valid because of the collision, but Bob controls key A with the name of Alice, and signed by a third party. Therefore, he can impersonate Alice and sign any document in her name.
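The attack works because the certification signature covers only the SHA-1 digest of the certificate, not the certificate itself. A minimal sketch of that property, with HMAC standing in for the real RSA/DSA signature primitive and placeholder certificate bodies (the actual colliding certificates are in the paper):

import hashlib, hmac

SIGNER_KEY = b"web-of-trust-signer"   # stand-in for the third party's private key

def sign(message: bytes) -> bytes:
    # The signature covers only the SHA-1 digest of the message.
    digest = hashlib.sha1(message).digest()
    return hmac.new(SIGNER_KEY, digest, hashlib.sha256).digest()

def verify(message: bytes, signature: bytes) -> bool:
    return hmac.compare_digest(sign(message), signature)

cert_bob   = b"placeholder for Bob's legitimate certificate"
cert_alice = b"placeholder for the forged certificate with Alice's ID"

signature = sign(cert_bob)   # the Web of Trust signs Bob's key
# The chosen-prefix collision arranges sha1(cert_alice) == sha1(cert_bob),
# so the very same signature verifies for the forged certificate:
if hashlib.sha1(cert_alice).digest() == hashlib.sha1(cert_bob).digest():
    assert verify(cert_alice, signature)   # Alice's forged key is "certified"

Any hash-then-sign scheme inherits this behavior; the collision does all the work, and the signer never sees the forged certificate.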

From a news article:

The new attack is significant. While SHA1 has been slowly phased out over the past five years, it remains far from being fully deprecated. It’s still the default hash function for certifying PGP keys in the legacy 1.4 version branch of GnuPG, the open-source successor to the PGP application for encrypting email and files. Those SHA1-generated signatures were accepted by the modern GnuPG branch until recently, and were only rejected after the researchers behind the new collision privately reported their results.

Git, the world’s most widely used system for managing software development among multiple people, still relies on SHA1 to ensure data integrity. And many non-Web applications that rely on HTTPS encryption still accept SHA1 certificates. SHA1 is also still allowed for in-protocol signatures in the Transport Layer Security and Secure Shell protocols.

Chrome Extension Stealing Cryptocurrency Keys and Passwords

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2020/01/chrome_extensio.html

A malicious Chrome extension surreptitiously steals Ethereum keys and passwords:

According to Denley, the extension is dangerous to users in two ways. First, any funds (ETH coins and ERC20-based tokens) managed directly inside the extension are at risk.

Denley says that the extension sends the private keys of all wallets created or managed through its interface to a third-party website located at erc20wallet[.]tk.

Second, the extension also actively injects malicious JavaScript code when users navigate to five well-known and popular cryptocurrency management platforms. This code steals login credentials and private keys, data that it then sends to the same erc20wallet[.]tk third-party website.

Another example of how blockchain requires many single points of trust in order to be secure.

TPM-Fail Attacks Against Cryptographic Coprocessors

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2019/11/tpm-fail_attack.html

Really interesting research: TPM-FAIL: TPM meets Timing and Lattice Attacks, by Daniel Moghimi, Berk Sunar, Thomas Eisenbarth, and Nadia Heninger.

Abstract: Trusted Platform Module (TPM) serves as a hardware-based root of trust that protects cryptographic keys from privileged system and physical adversaries. In this work, we perform a black-box timing analysis of TPM 2.0 devices deployed on commodity computers. Our analysis reveals that some of these devices feature secret-dependent execution times during signature generation based on elliptic curves. In particular, we discovered timing leakage on an Intel firmware-based TPM as well as a hardware TPM. We show how this information allows an attacker to apply lattice techniques to recover 256-bit private keys for ECDSA and ECSchnorr signatures. On Intel fTPM, our key recovery succeeds after about 1,300 observations and in less than two minutes. Similarly, we extract the private ECDSA key from a hardware TPM manufactured by STMicroelectronics, which is certified at Common Criteria (CC) EAL 4+, after fewer than 40,000 observations. We further highlight the impact of these vulnerabilities by demonstrating a remote attack against a StrongSwan IPsec VPN that uses a TPM to generate the digital signatures for authentication. In this attack, the remote client recovers the server’s private authentication key by timing only 45,000 authentication handshakes via a network connection.

The vulnerabilities we have uncovered emphasize the difficulty of correctly implementing known constant-time techniques, and show the importance of evolutionary testing and transparent evaluation of cryptographic implementations. Even certified devices that claim resistance against attacks require additional scrutiny by the community and industry, as we learn more about these attacks.
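In outline, the mechanism is the classic ECDSA nonce leak. A signature (r, s) on message m, made with private key d and a fresh random nonce k over a group of order n, satisfies:

r = (k·G)_x mod n,    s = k^-1 · (H(m) + r·d) mod n

so k = s^-1 · (H(m) + r·d) mod n, and anything learned about k becomes a linear constraint on d. The affected TPMs' scalar multiplication runs faster when k has leading zero bits, so the signing time reveals roughly how short each nonce is; every such signature then contributes one hidden-number-problem equation, and lattice reduction over enough of them recovers d.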

These are real attacks that take between 4 and 20 minutes to extract the key. Intel has a firmware update.

Attack website. News articles. Boing Boing post. Slashdot thread.

NordVPN Breached

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2019/10/nordvpn_breache.html

There was a successful attack against NordVPN:

Based on the command log, another of the leaked secret keys appeared to secure a private certificate authority that NordVPN used to issue digital certificates. Those certificates might be issued for other servers in NordVPN’s network or for a variety of other sensitive purposes. The name of the third certificate suggested it could also have been used for many different sensitive purposes, including securing the server that was compromised in the breach.

The revelations came as evidence surfaced suggesting that two rival VPN services, TorGuard and VikingVPN, also experienced breaches that leaked encryption keys. In a statement, TorGuard said a secret key for a transport layer security certificate for *.torguardvpnaccess.com was stolen. The theft happened in a 2017 server breach. The stolen data related to a squid proxy certificate.

TorGuard officials said on Twitter that the private key was not on the affected server and that attackers “could do nothing with those keys.” Monday’s statement went on to say TorGuard didn’t remove the compromised server until early 2018. TorGuard also said it learned of VPN breaches last May, “and in a related development we filed a legal complaint against NordVPN.”

The breach happened nineteen months ago, but the company is only just disclosing it to the public. We don’t know exactly what was stolen and how it affects VPN security. More details are needed.

VPNs are a shadowy world. We use them to protect our Internet traffic when we’re on a network we don’t trust, but we’re forced to trust the VPN instead. Recommendations are hard. NordVPN’s website says that the company is based in Panama. Do we have any reason to trust it at all?

I’m curious what VPNs others use, and why they should be believed to be trustworthy.

Crown Sterling Claims to Factor RSA Keylengths First Factored Twenty Years Ago

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2019/09/crown_sterling_.html

Earlier this month, I made fun of a company called Crown Sterling, for…for…for being a company that deserves being made fun of.

This morning, the company announced that they “decrypted two 256-bit asymmetric public keys in approximately 50 seconds from a standard laptop computer.” Really. They did. This keylength is so small it has never been considered secure. It was too small to be part of the RSA Factoring Challenge when it was introduced in 1991. In 1977, when Ron Rivest, Adi Shamir, and Len Adleman first described RSA, they included a challenge with a 426-bit key. (It was factored in 1994.)

The press release goes on: “Crown Sterling also announced the consistent decryption of 512-bit asymmetric public key in as little as five hours also using standard computing.” They didn’t demonstrate it, but if they’re right they’ve matched a factoring record set in 1999. Five hours is significantly less than the 5.2 months it took in 1999, but slower than would be expected if Crown Sterling just used the 1999 techniques with modern CPUs and networks.

Is anyone taking this company seriously anymore? I honestly wouldn’t be surprised if this was a hoax press release. It’s not currently on the company’s website. (And, if it is a hoax, I apologize to Crown Sterling. I’ll post a retraction as soon as I hear from you.)

EDITED TO ADD: First, the press release is real. And second, I forgot to include the quote from CEO Robert Grant: “Today’s decryptions demonstrate the vulnerabilities associated with the current encryption paradigm. We have clearly demonstrated the problem which also extends to larger keys.”

People, this isn’t hard. Find an RSA Factoring Challenge number that hasn’t been factored yet and factor it. Once you do, the entire world will take you seriously. Until you do, no one will. And, bonus, you won’t have to reveal your super-secret world-destabilizing cryptanalytic techniques.

EDITED TO ADD (9/21): Others are laughing at this, too.

EDITED TO ADD (9/24): More commentary.

Yubico Security Keys with a Crypto Flaw

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2019/07/yubico_security.html

Wow, is this an embarrassing bug:

Yubico is recalling a line of security keys used by the U.S. government due to a firmware flaw. The company issued a security advisory today that warned of an issue in YubiKey FIPS Series devices with firmware versions 4.4.2 and 4.4.4 that reduced the randomness of the cryptographic keys they generate. The security keys are used by thousands of federal employees on a daily basis, letting them securely log on to their devices by issuing one-time passwords.

The problem in question occurs after the security key powers up. According to Yubico, a bug keeps “some predictable content” inside the device’s data buffer that could impact the randomness of the keys generated. Security keys with ECDSA signatures are in particular danger. A total of 80 of the 256 bits generated by the key remain static, meaning an attacker who gains access to several signatures could recreate the private key.

Boing Boing post.

MongoDB Offers Field Level Encryption

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2019/06/mongodb_offers_.html

MongoDB now has the ability to encrypt data by field:

MongoDB calls the new feature Field Level Encryption. It works kind of like end-to-end encrypted messaging, which scrambles data as it moves across the internet, revealing it only to the sender and the recipient. In such a “client-side” encryption scheme, databases utilizing Field Level Encryption will not only require a system login, but will additionally require specific keys to process and decrypt specific chunks of data locally on a user’s device as needed. That means MongoDB itself and cloud providers won’t be able to access customer data, and a database’s administrators or remote managers don’t need to have access to everything either.

For regular users, not much will be visibly different. If their credentials are stolen and they aren’t using multifactor authentication, an attacker will still be able to access everything the victim could. But the new feature is meant to eliminate single points of failure. With Field Level Encryption in place, a hacker who steals an administrative username and password, or finds a software vulnerability that gives them system access, still won’t be able to use these holes to access readable data.
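As a rough sketch of the idea (this is not MongoDB's actual driver API, which exposes the feature through its ClientEncryption helpers; the key handling and names here are illustrative), client-side field-level encryption looks like this:

from cryptography.fernet import Fernet

field_key = Fernet.generate_key()   # in practice fetched from a KMS, never stored server-side
cipher = Fernet(field_key)

def encrypt_fields(doc: dict, sensitive: set) -> dict:
    # Encrypt the sensitive fields locally; the rest stays queryable plaintext.
    return {k: cipher.encrypt(v.encode()) if k in sensitive else v
            for k, v in doc.items()}

record = encrypt_fields({"name": "alice", "ssn": "078-05-1120"}, {"ssn"})
# collection.insert_one(record)  # the server and its admins see only ciphertext for "ssn"
print(cipher.decrypt(record["ssn"]).decode())   # only holders of field_key can read it

The point is that decryption keys never reach the database server, so a stolen admin credential yields only ciphertext for the protected fields.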

On Security Tokens

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2019/05/on_security_tok.html

Mark Risher of Google extols the virtues of security keys:

I’ll say it again for the people in the back: with Security Keys, instead of the *user* needing to verify the site, the *site* has to prove itself to the key. Good security these days is about human factors; we have to take the onus off of the user as much as we can.

Furthermore, this “proof” from the site to the key is only permitted over close physical proximity (like USB, NFC, or Bluetooth). Unless the phisher is in the same room as the victim, they can’t gain access to the second factor.

This is why I keep using words like “transformative,” “revolutionary,” and “lit” (not so much anymore): SKs basically shrink your threat model from “anyone anywhere in the world who knows your password” to “people in the room with you right now.” Huge!
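A toy version of that origin check (assumed names and payload format; the real FIDO/U2F protocol differs in detail) shows why a relayed challenge fails:

import json
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

token_key = Ed25519PrivateKey.generate()   # lives inside the security key
site_copy = token_key.public_key()         # registered with the legitimate site

def assertion(origin: str, challenge: bytes) -> bytes:
    # The browser, not the user, fills in the origin it is actually connected to.
    return json.dumps({"origin": origin, "challenge": challenge.hex()}).encode()

challenge = b"\x00" * 32   # issued by the real site, relayed by the phisher
# The victim's token signs the origin the victim is really visiting:
sig = token_key.sign(assertion("https://login.evil.example", challenge))
try:
    site_copy.verify(sig, assertion("https://login.good.example", challenge))
except InvalidSignature:
    print("assertion rejected: signed for the wrong origin")

Because the signed payload names the attacker's origin rather than the real one, the relayed assertion never verifies, no matter what the user was tricked into doing.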

Cory Doctorow makes a critical point, that the system is only as good as its backup system:

I agree, but there’s an important caveat. Security keys usually have fallback mechanisms — some way to attach a new key to your account for when you lose or destroy your old key. These mechanisms may also rely on security keys, but chances are that they don’t (and somewhere down the line, there’s probably a fallback mechanism that uses SMS, or Google Authenticator, or an email confirmation loop, or a password, or an administrator who can be sweet talked by a social engineer).

So while traditional 2FA is really “something you know and something else you know, albeit only very recently,” security keys are “something you know and something you have, which someone else can have, if they know something you know.”

And just because there are vulnerabilities in cell phone-based two-factor authentication systems doesn’t mean that they are useless. They’re still much better than traditional password-only authentication systems.

Stealing Ethereum by Guessing Weak Private Keys

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2019/04/stealing_ethere.html

Someone is stealing millions of dollars worth of Ethereum by guessing users’ private keys. Normally this should be impossible, but lots of keys seem to be very weak. Researchers are unsure how those weak keys are being generated and used.

Their paper is here.
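The scan itself is easy to sketch. A properly generated private key is one of about 2^256 possibilities, so guessing should be hopeless; the surprise is that keys with almost no entropy were in use. A minimal version (assuming the third-party ecdsa and pycryptodome packages; the on-chain balance lookup is left as a comment):

from ecdsa import SigningKey, SECP256k1
from Crypto.Hash import keccak

def address_from_exponent(k: int) -> str:
    # Ethereum address = last 20 bytes of Keccak-256 of the 64-byte public key.
    pub = SigningKey.from_secret_exponent(k, curve=SECP256k1)
    point = pub.get_verifying_key().to_string()   # x || y, 64 bytes
    return "0x" + keccak.new(digest_bits=256, data=point).digest()[-20:].hex()

for k in range(1, 256):   # a real scan covers much larger weak-key families
    addr = address_from_exponent(k)
    # ...check addr against funded addresses via a node or block explorer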

G7 Comes Out in Favor of Encryption Backdoors

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2019/04/g7_comes_out_in.html

From a G7 meeting of interior ministers in Paris this month, an “outcome document“:

Encourage Internet companies to establish lawful access solutions for their products and services, including data that is encrypted, for law enforcement and competent authorities to access digital evidence, when it is removed or hosted on IT servers located abroad or encrypted, without imposing any particular technology and while ensuring that assistance requested from internet companies is underpinned by the rule of law and due process protection. Some G7 countries highlight the importance of not prohibiting, limiting, or weakening encryption;

There is a weird belief amongst policy makers that hacking an encryption system’s key management system is fundamentally different than hacking the system’s encryption algorithm. The difference is only technical; the effect is the same. Both are ways of weakening encryption.

El Chapo’s Encryption Defeated by Turning His IT Consultant

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2019/01/el_chapos_encry.html

Impressive police work:

In a daring move that placed his life in danger, the I.T. consultant eventually gave the F.B.I. his system’s secret encryption keys in 2011 after he had moved the network’s servers from Canada to the Netherlands during what he told the cartel’s leaders was a routine upgrade.

A Dutch article says that it’s a BlackBerry system.

El Chapo had his IT person install “…spyware called FlexiSPY on the ‘special phones’ he had given to his wife, Emma Coronel Aispuro, as well as to two of his lovers, including one who was a former Mexican lawmaker.” That same software was used by the FBI when his IT person turned over the keys. Yet again we learn the lesson that a backdoor can be used against you.

And it doesn’t have to be with the IT person’s permission. A good intelligence agency can use the IT person’s authorizations without his knowledge or consent. This is why the NSA hunts sysadmins.

Slashdot thread. Hacker News thread. Boing Boing post.

GCHQ on Quantum Key Distribution

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2018/08/gchq_on_quantum.html

The UK’s GCHQ delivers a brutally blunt assessment of quantum key distribution:

QKD protocols address only the problem of agreeing keys for encrypting data. Ubiquitous on-demand modern services (such as verifying identities and data integrity, establishing network sessions, providing access control, and automatic software updates) rely more on authentication and integrity mechanisms — such as digital signatures — than on encryption.

QKD technology cannot replace the flexible authentication mechanisms provided by contemporary public key signatures. QKD also seems unsuitable for some of the grand future challenges such as securing the Internet of Things (IoT), big data, social media, or cloud applications.

I agree with them. It’s a clever idea, but basically useless in practice. I don’t even think it’s anything more than a niche solution in a world where quantum computers have broken our traditional public-key algorithms.

Read the whole thing. It’s short.

Google Employees Use a Physical Token as Their Second Authentication Factor

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2018/07/google_employee.html

Krebs on Security is reporting that all 85,000 Google employees use two-factor authentication with a physical token.

A Google spokesperson said Security Keys now form the basis of all account access at Google.

“We have had no reported or confirmed account takeovers since implementing security keys at Google,” the spokesperson said. “Users might be asked to authenticate using their security key for many different apps/reasons. It all depends on the sensitivity of the app and the risk of the user at that point in time.”

Now Google is selling that security to its users:

On Wednesday, the company announced its new Titan security key, a device that protects your accounts by restricting two-factor authentication to the physical world. It’s available as a USB stick and in a Bluetooth variation, and like similar products by Yubico and Feitian, it utilizes the protocol approved by the FIDO alliance. That means it’ll be compatible with pretty much any service that enables users to turn on Universal 2nd Factor Authentication (U2F).

Major Bluetooth Vulnerability

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2018/07/major_bluetooth.html

Bluetooth has a serious security vulnerability:

In some implementations, the elliptic curve parameters are not all validated by the cryptographic algorithm implementation, which may allow a remote attacker within wireless range to inject an invalid public key to determine the session key with high probability. Such an attacker can then passively intercept and decrypt all device messages, and/or forge and inject malicious messages.
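The standard defense is public-key validation: before using a received point in the key exchange, check that it actually lies on the agreed curve. A minimal sketch for NIST P-256, the curve used in Bluetooth pairing:

# NIST P-256 parameters: y^2 = x^3 + ax + b over GF(p)
P = 2**256 - 2**224 + 2**192 + 2**96 - 1
A = P - 3
B = 0x5AC635D8AA3A93E7B3EBBD55769886BC651D06B0CC53B0F63BCE3C3E27D2604B

def is_valid_public_key(x: int, y: int) -> bool:
    # Reject coordinates outside the field, then check the curve equation.
    # Skipping this check is what the attack exploits: a point off the curve
    # behaves like a point on a weaker, attacker-chosen curve and leaks the
    # session key.
    if not (0 <= x < P and 0 <= y < P):
        return False
    return (y * y - (x * x * x + A * x + B)) % P == 0

# The standard generator should pass; a tampered coordinate should not.
GX = 0x6B17D1F2E12C4247F8BCE6E563A440F277037D812DEB33A0F4A13945D898C296
GY = 0x4FE342E2FE1A7F9B8EE7EB4A7C0F9E162BCE33576B315ECECBB6406837BF51F5
assert is_valid_public_key(GX, GY)
assert not is_valid_public_key(GX, GY + 1)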

Paper. Website. Three news articles.

This is serious. Update your software now, and try not to think about all of the Bluetooth applications that can’t be updated.

Storing Encrypted Credentials In Git

Post Syndicated from Bozho original https://techblog.bozho.net/storing-encrypted-credentials-in-git/

We all know that we should not commit any passwords or keys to the repo with our code, whether the repo is public or private. Yet thousands of production passwords can be found on GitHub (and probably thousands more in internal company repositories). Some have tried to fix that by removing the passwords (once they learned it’s not a good idea to store them publicly), but the passwords remain in the git history.

Knowing what not to do is the first and very important step. But how do we store production credentials: database credentials, system secrets (e.g. for HMACs), access keys for third-party services like payment providers or social networks? There doesn’t seem to be an agreed-upon solution.

I’ve previously argued against the 12-factor-app recommendation to use environment variables: if you have a few, that might be okay, but as the number of variables grows (as it does in any real application), they become impractical. You can set environment variables via a bash script, but you’d have to store that script somewhere; and in fact, even individual environment variables have to be stored somewhere.

That somewhere could be a local directory (risky), shared storage such as an FTP server or an S3 bucket with limited access, or a separate git repository. I prefer the git repository, since it allows versioning (note: S3 has versioning too, but it’s provider-specific). So you can store all your environment-specific properties files, with all their credentials and environment-specific configuration, in a git repo with limited access (Ops people only). And that’s not bad, as long as it’s not the same repo as the source code.

Such a repo would look like this:

project
└─── production
|   |   application.properties
|   |   keystore.jks
└─── staging
|   |   application.properties
|   |   keystore.jks
└─── on-premise-client1
|   |   application.properties
|   |   keystore.jks
└─── on-premise-client2
|   |   application.properties
|   |   keystore.jks

Since many companies use GitHub or BitBucket for their repositories, storing production credentials at an external provider may still be risky. That’s why it’s a good idea to encrypt the files in the repository. A good way to do that is with git-crypt. Its encryption is “transparent”: files are encrypted and decrypted on the fly, and diffs still work, so once you set it up, you continue working with the repo as if it weren’t encrypted. There’s even a fork that works on Windows.

You simply run git-crypt init (after putting the git-crypt binary on your PATH), which generates a key. Then you specify which files to encrypt in your .gitattributes, e.g. like this:

secretfile filter=git-crypt diff=git-crypt
*.key filter=git-crypt diff=git-crypt
*.properties filter=git-crypt diff=git-crypt
*.jks filter=git-crypt diff=git-crypt

And you’re done. Well, almost. If this is a fresh repo, everything is good. If it is an existing repo, you’ll have to clean up the history, which still contains the unencrypted files. Following these steps will get you there, with one addition: before calling git commit, run git-crypt status -f so that the existing files are actually encrypted.

You’re almost done. We should somehow share and back up the keys. For the sharing part, it’s not a big issue for a team of two or three Ops people to share the same key, but you could also use git-crypt’s GPG option (as documented in the README). What’s left is to back up your secret key (generated in the .git/git-crypt directory). You can store it (password-protected) in some other storage, be it a company shared folder, Dropbox/Google Drive, or even your email. Just make sure your computer is not the only place it’s present, and that it’s protected. I don’t think key rotation is strictly necessary, but you can devise a rotation procedure.

The git-crypt author says the tool shines when encrypting just a few files in an otherwise public repo, and recommends looking at git-remote-gcrypt for encrypting an entire repository. But since environment-specific configurations often have non-sensitive parts, you may not want to encrypt everything, and I think it’s perfectly fine to use git-crypt in the separate-repo scenario as well. And even though encryption is an acceptable way to protect credentials in your source-code repo, it’s still not necessarily a good idea to keep the environment configurations in the same repo, especially given that different people/teams manage these credentials; even in small companies, not all members have production access.

The outstanding question in this case is: how do you keep the properties in sync with code changes? Sometimes the code adds new properties that should be reflected in the environment configurations. There are two scenarios here: first, properties that can vary across environments but have sensible default values (e.g. scheduled-job periods), and second, properties that require explicit configuration (e.g. database credentials). The former can have their default values bundled in the code repo, and therefore in the release artifact, with the external files overriding them, as in the sketch below. The latter should be announced to the people who do the deployment so that they can set the proper values.
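A minimal sketch of that override pattern (the file names and required keys are illustrative):

def load_properties(path: str) -> dict:
    # Tiny .properties reader: "key = value" lines, '#' comments ignored.
    props = {}
    with open(path) as f:
        for line in f:
            line = line.strip()
            if line and not line.startswith("#") and "=" in line:
                key, _, value = line.partition("=")
                props[key.strip()] = value.strip()
    return props

config = load_properties("defaults.properties")   # bundled in the release artifact
config.update(load_properties("/etc/myapp/application.properties"))   # from the config repo
missing = [k for k in ("db.password", "hmac.secret") if k not in config]
if missing:
    raise RuntimeError(f"explicit configuration required for: {missing}")

Failing fast on missing mandatory properties is what turns "announce it to the deployers" from a convention into something the deployment actually enforces.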

The whole process of having versioned environment-specific configurations is actually quite simple and logical, even with the encryption added to the picture. And I think it’s a good security practice we should try to follow.
