Tag Archives: Cryptography

The EARN-IT Act

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2020/03/the_earn-it_act.html

Prepare for another attack on encryption in the U.S. The EARN-IT Act purports to be about protecting children from predation, but it’s really about forcing the tech companies to break their encryption schemes:

The EARN IT Act would create a “National Commission on Online Child Sexual Exploitation Prevention” tasked with developing “best practices” for owners of Internet platforms to “prevent, reduce, and respond” to child exploitation. But far from mere recommendations, those “best practices” would be approved by Congress as legal requirements: if a platform failed to adhere to them, it would lose essential legal protections for free speech.

It’s easy to predict how Attorney General William Barr would use that power: to break encryption. He’s said over and over that he thinks the “best practice” is to force encrypted messaging systems to give law enforcement access to our private conversations. The Graham-Blumenthal bill would finally give Barr the power to demand that tech companies obey him or face serious repercussions, including both civil and criminal liability. Such a demand would put encryption providers like WhatsApp and Signal in an awful conundrum: either face the possibility of losing everything in a single lawsuit or knowingly undermine their users’ security, making all of us more vulnerable to online criminals.

Matthew Green has a long explanation of the bill and its effects:

The new bill, out of Lindsey Graham’s Judiciary committee, is designed to force providers to either solve the encryption-while-scanning problem, or stop using encryption entirely. And given that we don’t yet know how to solve the problem — and the techniques to do it are basically at the research stage of R&D — it’s likely that “stop using encryption” is really the preferred goal.

EARN IT works by revoking a type of liability shield known as Section 230, which makes it possible for providers to operate on the Internet by preventing them from being held responsible for what their customers do on a platform like Facebook. The new bill would make it financially impossible for providers like WhatsApp and Apple to operate services unless they implement “best practices” for scanning their systems for CSAM.

Since there are no “best practices” in existence, and the techniques for doing this while preserving privacy are completely unknown, the bill creates a government-appointed committee that will tell technology providers what technology they have to use. The specific nature of the committee is byzantine and described within the bill itself. Needless to say, the makeup of the committee, which can include as few as zero data security experts, ensures that end-to-end encryption will almost certainly not be considered a best practice.

So in short: this bill is a backdoor way to allow the government to ban encryption on commercial services. And even more beautifully: it doesn’t come out and actually ban the use of encryption, it just makes encryption commercially infeasible for major providers to deploy, ensuring that they’ll go bankrupt if they try to disobey this committee’s recommendations.

It’s the kind of bill you’d come up with if you knew the thing you wanted to do was unconstitutional and highly unpopular, and you basically didn’t care.

Another criticism of the bill. Commentary by EPIC. Kinder analysis.

Sign a petition against this act.

More on Crypto AG

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2020/03/more_on_crypto_.html

One follow-on to the story of Crypto AG being owned by the CIA: this interview with a Washington Post reporter. The whole thing is worth reading or listening to, but I was struck by these two quotes at the end:

…in South America, for instance, many of the governments that were using Crypto machines were engaged in assassination campaigns. Thousands of people were being disappeared, killed. And I mean, they’re using Crypto machines, which suggests that United States intelligence had a lot of insight into what was happening. And it’s hard to look back at that history now and see a lot of evidence of the United States going to any real effort to stop it, or at least expose it.

[…]

To me, the history of the Crypto operation helps to explain how U.S. spy agencies became accustomed to, if not addicted to, global surveillance. This program went on for more than 50 years, monitoring the communications of more than 100 countries. I mean, the United States came to expect that kind of penetration, that kind of global surveillance capability. And as Crypto became less able to deliver it, the United States turned to other ways to replace that. And the Snowden documents tell us a lot about how they did that.

Pwned Passwords Padding (ft. Lava Lamps and Workers)

Post Syndicated from Junade Ali original https://blog.cloudflare.com/pwned-passwords-padding-ft-lava-lamps-and-workers/

The Pwned Passwords API (part of Troy Hunt’s Have I Been Pwned service) is used tens of millions of times each day by a variety of online services, browser extensions, and applications to alert users when their credentials have been breached. Using Cloudflare, around 99% of API requests are served from cache, making the API very efficient to run.

From today, we are offering a new security advancement in the Pwned Passwords API: clients can request responses padded with random data. This protects against potential attacks that passively analyse the size of API responses to identify which anonymised bucket a user is querying. I am hugely grateful to security researcher Matt Weir, whom I met at PasswordsCon in Stockholm; he explored proof-of-concept analysis of unpadded API responses in Pwned Passwords and drove much of the work to add padded responses.

Now, by passing a header of “Add-Padding” with a value of “true”, Pwned Passwords API users are able to request padded API responses (to a minimum of 800 entries with additional padding of a further 0-200 entries). The padding consists of randomly generated hash suffixes with the usage count field set to “0”.

Clients using this approach should exclude 0-usage hash suffixes from breach validation. Given that most implementations of Pwned Passwords simply do string matching on the suffix of a hash, searching through the padding data has no real performance implication. The false positive risk that a real hash suffix matches a randomly generated entry is very low: 619/2^140 ≈ 4.44 x 10^-40 (each suffix is 35 hex characters, or 140 bits). At that rate you would need to make about 10^39 queries to reach even a 44.4% probability of a single collision.

In the future, non-padded responses will be deprecated outright (and all responses will be padded) once clients have had a chance to update.

You can see an example padded request by running the following curl request:

curl -H Add-Padding:true https://api.pwnedpasswords.com/range/FFFFF

API Structure

The high-level structure of the Pwned Passwords API is discussed in my original blog post, “Validating Leaked Passwords with k-Anonymity”. In essence, a client queries the API with the first 5 hexadecimal characters of a SHA-1 hashed password (20 bits), and the API returns the remaining 35 hexadecimal characters (140 bits) of every breached password hash in the dataset that shares that prefix. Each hash suffix is followed by a colon (“:”) and the number of times that hash is found in the breached data.

An example query for FFFFF returns entries in this structure, each line consisting of a 35-character hash suffix, a colon, and a usage count, for example FF1A63ACC70BEA924C5DBABEE4B9B18C82D:10.
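
To make this concrete, here is a minimal client sketch in TypeScript (assuming Node 18+ for the built-in fetch; the function name and test password are illustrative). It performs the k-anonymity lookup described above and skips padding entries:

import { createHash } from "crypto";

// Check a password against the Pwned Passwords range API using k-anonymity:
// only the first 5 hex characters of the SHA-1 hash ever leave the machine.
async function pwnedCount(password: string): Promise<number> {
  const hash = createHash("sha1").update(password).digest("hex").toUpperCase();
  const prefix = hash.slice(0, 5); // 20 bits sent to the API
  const suffix = hash.slice(5);    // 35 hex characters kept local
  const res = await fetch(`https://api.pwnedpasswords.com/range/${prefix}`, {
    headers: { "Add-Padding": "true" },
  });
  for (const line of (await res.text()).split("\n")) {
    const [candidate, count] = line.trim().split(":");
    // Entries with a count of 0 are random padding and must be ignored.
    if (candidate === suffix && Number(count) > 0) return Number(count);
  }
  return 0; // not found in the breach corpus
}

pwnedCount("password1").then((n) => console.log(`seen ${n} times`));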

Without padding, the message length varies with the number of hash suffixes in the bucket being queried. It is known to be possible to fingerprint TLS traffic based on encrypted message length; fortunately, padding can be inserted into the API responses themselves (in the HTTP content). We can see the difference in download size between two unpadded buckets by running:

$ curl -so /dev/null https://api.pwnedpasswords.com/range/E0812 -w '%{size_download} bytes\n'
17022 bytes
$ curl -so /dev/null https://api.pwnedpasswords.com/range/834EF -w '%{size_download} bytes\n'
25118 bytes

The randomised padding entries can be identified by the “:0” suffix (a usage count of zero); for example, below, the top three entries are real whilst the last three are padding data:

FF1A63ACC70BEA924C5DBABEE4B9B18C82D:10
FF8A0382AA9C8D9536EFBA77F261815334D:12
FFEE791CBAC0F6305CAF0CEE06BBE131160:2
2F811DCB8FF6098B838DDED4D478B0E4032:0
A1BABA501C55ACB6BDDC6D150CF585F20BE:0
9F31397459FF46B347A376F58506E420A58:0

Compression and Randomisation

Cloudflare supports both GZip and Brotli for compression. Compression benefits the Pwned Passwords API because responses are hexadecimal data represented in ASCII. That said, the gains are somewhat limited by the avalanche effect in hashing algorithms (a small change in input produces a completely different hash output): each range searched contains hashes of dramatically different input passwords, so the remaining 35 characters of the SHA-1 hashes have no expected similarity between them.

Accordingly, if one were simply to pad messages with constant data (say, “000…”), compression could mean that responses padded to the same length become distinguishable again once compressed. Similarly, even without compression, padding messages with the same data could still permit credible attacks.

Instead, padding is generated from randomly generated entries. In order not to break clients, the padding is made to look like legitimate hash suffixes. It remains possible, however, to identify the padding: as the Pwned Passwords API contains a count field (a colon after the hex suffix, followed by a numerical count), randomised entries can be distinguished by their usage count of 0.
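
A quick way to see why constant padding fails is to compress the same response body padded two different ways. This sketch (TypeScript with Node’s built-in zlib and crypto; the suffix data is made up) shows zero-filled padding collapsing under gzip while random padding keeps its size:

import { gzipSync } from "zlib";
import { randomBytes } from "crypto";

// A stand-in for a real bucket: 500 fake 35-hex-char suffixes with counts.
const fakeSuffix = () => randomBytes(18).toString("hex").slice(0, 35).toUpperCase();
const real = Array.from({ length: 500 }, () => fakeSuffix() + ":1").join("\n");

const zeroPad = Array(300).fill("0".repeat(35) + ":0").join("\n");
const randPad = Array.from({ length: 300 }, () => fakeSuffix() + ":0").join("\n");

// Identical uncompressed sizes, very different compressed sizes: the
// zero padding compresses away, so response length still leaks the bucket.
console.log(gzipSync(real + "\n" + zeroPad).length);
console.log(gzipSync(real + "\n" + randPad).length);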

Lava Lamps and Workers

I’ve written before about the cache optimisation of Pwned Passwords (including our use of Cloudflare Workers). Workers have an additional benefit here: they run before elements are pulled from cache.

This allows the randomised entries to be generated dynamically on a request-by-request basis instead of being cached. The randomised padding can therefore differ from request to request, and with it the number of entries in a given response and the size of that response.

Cloudflare Workers supports the Web Crypto API, which exposes a cryptographically sound random number generator; we use it to decide the variable amount of padding added to each response. The random hexadecimal padding itself does not need to be indistinguishable from real hashes, so for computational performance we generate its content with the non-cryptographically-secure Math.random().
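
A sketch of how a Worker might do this (the function names and structure are illustrative, not Cloudflare’s actual code): the Web Crypto API picks the padding amount, while Math.random() fills in the throwaway suffixes:

const HEX = "0123456789ABCDEF";

// Cryptographically random choice of how much padding to add (0-200 extra).
function paddingAmount(): number {
  const buf = new Uint32Array(1);
  crypto.getRandomValues(buf); // Web Crypto, available in Workers
  return buf[0] % 201;
}

// The padding only has to look like a hash suffix, so the faster,
// non-cryptographic Math.random() is fine for its content.
function paddingEntry(): string {
  let suffix = "";
  for (let i = 0; i < 35; i++) suffix += HEX[Math.floor(Math.random() * 16)];
  return suffix + ":0"; // a count of 0 marks the entry as padding
}

function padResponse(body: string, minEntries = 800): string {
  const lines = body.split("\n");
  const target = Math.max(minEntries - lines.length, 0) + paddingAmount();
  for (let i = 0; i < target; i++) lines.push(paddingEntry());
  return lines.join("\n");
}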

Famously, one of the sources of entropy used in Cloudflare servers comes from lava lamps. By filming a wall of lava lamps in our San Francisco office (with individual photoreceptors picking up random noise beyond the movement of the lava), we are able to generate random seed data for servers (complemented by other sources of entropy along the way). This lava lamp entropy is used alongside the randomness sources on individual servers to seed cryptographically secure pseudorandom number generator (CSPRNG) algorithms. The Cloudflare Workers runtime uses the V8 JavaScript engine, with randomness sourced from /dev/urandom on the server itself.

Each response is padded to a minimum of 800 hash suffixes, plus a randomly generated amount of additional padding (a further 0-200 entries).

This can be seen in two ways. Firstly, repeating the same request to the same endpoint (with the underlying response being cached) yields a randomised number of lines, between 800 and 1,000:

$ for run in {1..10}; do curl -s -H Add-Padding:true https://api.pwnedpasswords.com/range/FFFFF | wc -l; done
     831
     956
     870
     980
     932
     868
     856
     961
     912
     827

Secondly, we can see a randomised download size in each response:

$ for run in {1..10}; do curl -so /dev/null -H Add-Padding:true https://api.pwnedpasswords.com/range/FFFFF -w '%{size_download} bytes\n'; done
35572 bytes
37358 bytes
38194 bytes
33596 bytes
32304 bytes
37168 bytes
32532 bytes
37928 bytes
35154 bytes
33178 bytes

Future Work and Conclusion

There has been a considerable amount of research complementing the anonymity approach in Pwned Passwords. For example, Google and Stanford have published a paper about their approach implemented in Google Password Checkup, “Protecting accounts from credential stuffing with password breach alerting” [Usenix].

We have done a significant amount of work exploring more advanced protocols for Pwned Passwords; some of it can be found in a paper we worked on with academics at Cornell University, “Protocols for Checking Compromised Credentials” [ACM or arXiv preprint]. This research offers two new protocols (FSB, frequency smoothing bucketization, and IDB, identifier-based bucketization) to further reduce information leakage in the APIs.

Further work is needed before these protocols reach the production-readiness we’d like, but, as always, we’ll keep you updated here on our blog.

Crypto AG Was Owned by the CIA

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2020/02/crypto_ag_was_o.html

The Swiss cryptography firm Crypto AG sold equipment to governments and militaries around the world for decades after World War II. They were owned by the CIA:

But what none of its customers ever knew was that Crypto AG was secretly owned by the CIA in a highly classified partnership with West German intelligence. These spy agencies rigged the company’s devices so they could easily break the codes that countries used to send encrypted messages.

This isn’t really news. We have long known that Crypto AG was backdooring crypto equipment for the Americans. What is new is the formerly classified documents describing the details:

The decades-long arrangement, among the most closely guarded secrets of the Cold War, is laid bare in a classified, comprehensive CIA history of the operation obtained by The Washington Post and ZDF, a German public broadcaster, in a joint reporting project.

The account identifies the CIA officers who ran the program and the company executives entrusted to execute it. It traces the origin of the venture as well as the internal conflicts that nearly derailed it. It describes how the United States and its allies exploited other nations’ gullibility for years, taking their money and stealing their secrets.

The operation, known first by the code name “Thesaurus” and later “Rubicon,” ranks among the most audacious in CIA history.

EDITED TO ADD: More news articles. And a 1995 story on this. It’s not new news.

A New Clue for the Kryptos Sculpture

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2020/02/a_new_clue_for_.html

Jim Sanborn, who designed the Kryptos sculpture in a CIA courtyard, has released another clue to the still-unsolved part 4. I think he’s getting tired of waiting.

Did we mention Mr. Sanborn is 74?

Holding on to one of the world’s most enticing secrets can be stressful. Some would-be codebreakers have appeared at his home.

Many felt they had solved the puzzle, and wanted to check with Mr. Sanborn. Sometimes forcefully. Sometimes, in person.

Elonka Dunin, a game developer and consultant who has created a rich page of background information on the sculpture and oversees the best known online community of thousands of Kryptos fans, said that some who contact her (sometimes also at home) are obsessive and appear to have tipped into mental illness. “I am always gentle to them and do my best to listen to them,” she said.

Mr. Sanborn has set up systems to allow people to check their proposed solutions without having to contact him directly. The most recent incarnation is an email-based process with a fee of $50 to submit a potential solution. He receives regular inquiries, so far none of them successful.

The ongoing process is exhausting, he said, adding “It’s not something I thought I would be doing 30 years on.”

Another news article.

EDITED TO ADD (2/13): Another article.

One-Time Passwords Do Not Provide Non-Repudiation

Post Syndicated from Bozho original https://techblog.bozho.net/one-time-passwords-do-not-provide-non-repudiation/

The title is obvious and it could’ve been a tweet rather than a blogpost. But let me expand.

OTP, or one-time password, used to be mainstream with online banking. You get a dedicated device that generates a 6-digit code, which you enter into your online banking site to log in or confirm a transaction. The device (token) is air-gapped (it has no internet connection) and holds a preinstalled secret key that cannot be leaked. This key is symmetric and is used (depending on the algorithm) to compute a keyed hash over the current time interval, truncate the result, and turn the remainder into 6 digits. The server holds the same secret key, performs the same operation, and compares the resulting 6 digits with the ones provided. If you want a more precise description of (T)OTP, check Wikipedia.
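
For reference, here is a minimal sketch of the standard TOTP construction (RFC 6238) in TypeScript with Node’s crypto module; the demo secret is the RFC’s SHA-1 test key, and everything else follows the spec:

import { createHmac } from "crypto";

// RFC 6238 TOTP: HMAC the current 30-second time step with the shared
// secret, dynamically truncate (RFC 4226), and reduce to 6 digits.
function totp(secret: Buffer, now = Date.now()): string {
  const counter = Buffer.alloc(8);
  counter.writeBigUInt64BE(BigInt(Math.floor(now / 1000 / 30)));
  const mac = createHmac("sha1", secret).update(counter).digest();
  const offset = mac[mac.length - 1] & 0x0f;
  const code = (mac.readUInt32BE(offset) & 0x7fffffff) % 1_000_000;
  return code.toString().padStart(6, "0");
}

// Anyone holding `secret` (token, server, or a curious admin) computes
// the same 6 digits, which is exactly the non-repudiation problem below.
console.log(totp(Buffer.from("12345678901234567890")));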

Non-repudiation is a property of, in this case, a cryptographic scheme, that means the author of a message cannot deny the authorship. In the banking scenario, this means you cannot dispute that it was indeed you who entered those 6 digits (because, allegedly, only you possess the secret key because only you possess the token).

Hardware tokens are going out of fashion and are being replaced by mobile apps that are much easier to use but still run on a physical device. There is one major difference, though – the phone is connected to the internet, which introduces additional security risks. If we set those aside, it’s fine to have the secret key on a smartphone, especially one that supports secure per-app storage not accessible to third-party apps, or even a hardware security module with encryption support, as the latest phones do.

Software “tokens” (mobile apps) often rely on OTPs as well, even though they don’t display the code to the user. This makes little sense, as OTPs are short precisely because people need to enter them. If you send an OTP via a RESTful API, why not just send the whole ciphertext? Or better – do a digital signature on a server-provided challenge.

But that’s not the biggest problem with OTPs. They don’t give the bank (or whoever uses them to confirm transactions) non-repudiation, because OTPs rely on a shared secret. The bank, and any (curious) admin, knows the shared secret – especially if it is dynamically generated from transaction-related data, as I’ve seen in some cases. A bank data breach or a malicious insider can generate the same 6 digits as you can with your secure token (dedicated hardware or mobile).

Of course, there are exceptions. Apparently there’s ITU-T X.1156, a non-repudiation framework for OTPs. It involves trusted third parties, though, which is unlikely in a banking scenario. There are also server-side hardware security modules (HSMs) that support TOTP natively and can store secret keys without any way to leak them to the outside world; the device itself generates the next 6 digits. Someone with access to the HSM can still generate such a sequence, but hopefully an audit trail would show that this happened, and it is far less of an issue than a plain database breach. Obviously (or, hopefully?) secrets are not stored in plaintext even outside of HSMs, but chances are they are wrapped with a master key in an HSM – which means someone can bulk-decrypt and leak them.

And this “can” is very important. No matter how unlikely it is, given internal security policies and whatnot, there is no cryptographic non-repudiation for OTPs. A customer can plausibly deny entering the code to confirm a transaction. Whether this will hold in court is a complicated legal issue: the processes in the bank can be audited to see if a breach was possible, whether any of the admins had a motive to abuse their privilege, whether there are logs of user activity that can be linked to their device based on network logs from internet carriers, and so on. So you can achieve some limited level of non-repudiation through proper procedures, but it is not much different from authorizing the transaction with your password (at least the password is not a shared secret, hopefully).

But why bother with all of that if you have a mobile app with an internet connection? You can just create a digital signature over a unique per-transaction challenge and get proper non-repudiation. The phone can generate a private key on enrollment and send its public key to the server, which can later be used to verify the signature. You don’t have the bandwidth limitation of the human brain that led to the 6-digit requirement, so the signature length can be arbitrarily long (even GPRS can handle digital signatures).
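
A minimal sketch of that flow with Node’s built-in crypto in TypeScript (Ed25519 is my choice for brevity; any signature scheme backed by the phone’s secure hardware works, and the challenge contents are illustrative):

import { generateKeyPairSync, sign, verify, randomBytes } from "crypto";

// Enrollment: the phone generates a keypair and registers the public key.
const { publicKey, privateKey } = generateKeyPairSync("ed25519");

// Per transaction: the server issues a unique challenge...
const challenge = Buffer.concat([
  randomBytes(32),                        // unpredictability
  Buffer.from("pay 100 EUR to IBAN ..."), // binds the transaction details
]);

// ...the phone signs it with a key only it holds...
const signature = sign(null, challenge, privateKey); // null digest for Ed25519

// ...and the server verifies with the public key. Unlike an OTP, the
// bank cannot forge this: it never possesses the private key.
console.log(verify(null, challenge, publicKey, signature)); // true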

Using the right tool for the job doesn’t have just technical implications, it also has legal and business implications. If you can achieve non-repudiation with standard public-key cryptography, if your customers have phones with secure hardware modules and you are doing online mobile-app-based authentication and authorization anyway, just drop the OTP.

The post One-Time Passwords Do Not Provide Non-Repudiation appeared first on Bozho's tech blog.

Critical Windows Vulnerability Discovered by NSA

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2020/01/critical_window.html

Yesterday’s Microsoft Windows patches included a fix for a critical vulnerability in the system’s crypto library.

A spoofing vulnerability exists in the way Windows CryptoAPI (Crypt32.dll) validates Elliptic Curve Cryptography (ECC) certificates.

An attacker could exploit the vulnerability by using a spoofed code-signing certificate to sign a malicious executable, making it appear the file was from a trusted, legitimate source. The user would have no way of knowing the file was malicious, because the digital signature would appear to be from a trusted provider.

A successful exploit could also allow the attacker to conduct man-in-the-middle attacks and decrypt confidential information on user connections to the affected software.

That’s really bad, and you should all patch your system right now, before you finish reading this blog post.

This is a zero-day vulnerability, meaning that it was not detected in the wild before the patch was released. It was discovered by security researchers. Interestingly, it was discovered by NSA security researchers, and the NSA security advisory gives a lot more information about it than the Microsoft advisory does.

Exploitation of the vulnerability allows attackers to defeat trusted network connections and deliver executable code while appearing as legitimately trusted entities. Examples where validation of trust may be impacted include:

  • HTTPS connections
  • Signed files and emails
  • Signed executable code launched as user-mode processes

The vulnerability places Windows endpoints at risk to a broad range of exploitation vectors. NSA assesses the vulnerability to be severe and that sophisticated cyber actors will understand the underlying flaw very quickly and, if exploited, would render the previously mentioned platforms as fundamentally vulnerable. The consequences of not patching the vulnerability are severe and widespread. Remote exploitation tools will likely be made quickly and widely available. Rapid adoption of the patch is the only known mitigation at this time and should be the primary focus for all network owners.

Early yesterday morning, NSA’s Cybersecurity Directorate head Anne Neuberger hosted a media call where she talked about the vulnerability and — to my shock — took questions from the attendees. According to her, the NSA discovered this vulnerability as part of its security research. (If it found it in some other nation’s cyberweapons stash — my personal favorite theory — she declined to say.) She did not answer when asked how long ago the NSA discovered the vulnerability. She said that this is not the first time the NSA sent Microsoft a vulnerability to fix, but it was the first time it has publicly taken credit for the discovery. The reason is that the NSA is trying to rebuild trust with the security community, and this disclosure is a result of its new initiative to share findings more quickly and more often.

Barring any other information, I would take the NSA at its word here. So, good for it.

And — seriously — patch your systems now: Windows 10 and Windows Server 2016/2019. Assume that this vulnerability has already been weaponized, probably by criminals and certainly by major governments. Even assume that the NSA is using this vulnerability — why wouldn’t it?

Ars Technica article. Wired article. CERT advisory.

EDITED TO ADD: Washington Post article.

EDITED TO ADD (1/16): The attack was demonstrated in less than 24 hours.

Brian Krebs blog post.

Scallion – GPU Based Onion Hash Generator

Post Syndicated from Darknet original https://www.darknet.org.uk/2020/01/scallion-gpu-based-onion-hash-generator/?utm_source=rss&utm_medium=social&utm_campaign=darknetfeed

Scallion is a GPU-driven Onion Hash Generator written in C# that lets you create vanity GPG keys and .onion addresses (for Tor’s hidden services) using OpenCL.

Scallion runs on Mono (tested on Arch Linux) and .NET 3.5+ (tested on Windows 7 and Server 2008).

Scallion was used to find collisions for every 32-bit key ID in the Web of Trust’s strong set, demonstrating how insecure 32-bit key IDs are.

New SHA-1 Attack

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2020/01/new_sha-1_attac.html

There’s a new, practical, collision attack against SHA-1:

In this paper, we report the first practical implementation of this attack, and its impact on real-world security with a PGP/GnuPG impersonation attack. We managed to significantly reduce the complexity of collision attacks against SHA-1: on an Nvidia GTX 970, identical-prefix collisions can now be computed with a complexity of 2^61.2 rather than 2^64.7, and chosen-prefix collisions with a complexity of 2^63.4 rather than 2^67.1. When renting cheap GPUs, this translates to a cost of 11k US$ for a collision, and 45k US$ for a chosen-prefix collision, within the means of academic researchers. Our actual attack required two months of computations using 900 Nvidia GTX 1060 GPUs (we paid 75k US$ because GPU prices were higher, and we wasted some time preparing the attack).

It has practical applications:

We chose the PGP/GnuPG Web of Trust as a demonstration of our chosen-prefix collision attack against SHA-1. The Web of Trust is a trust model used for PGP that relies on users signing each other’s identity certificates, instead of using a central PKI. For compatibility reasons the legacy branch of GnuPG (version 1.4) still uses SHA-1 by default for identity certification.

Using our SHA-1 chosen-prefix collision, we have created two PGP keys with different UserIDs and colliding certificates: key B is a legitimate key for Bob (to be signed by the Web of Trust), but the signature can be transferred to key A which is a forged key with Alice’s ID. The signature will still be valid because of the collision, but Bob controls key A with the name of Alice, and signed by a third party. Therefore, he can impersonate Alice and sign any document in her name.

From a news article:

The new attack is significant. While SHA1 has been slowly phased out over the past five years, it remains far from fully deprecated. It’s still the default hash function for certifying PGP keys in the legacy 1.4 version branch of GnuPG, the open-source successor to the PGP application for encrypting email and files. Those SHA1-generated signatures were accepted by the modern GnuPG branch until recently, and were only rejected after the researchers behind the new collision privately reported their results.

Git, the world’s most widely used system for managing software development among multiple people, still relies on SHA1 to ensure data integrity. And many non-Web applications that rely on HTTPS encryption still accept SHA1 certificates. SHA1 is also still allowed for in-protocol signatures in the Transport Layer Security and Secure Shell protocols.

RSA-240 Factored

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2019/12/rsa-240_factore.html

This just in:

We are pleased to announce the factorization of RSA-240, from RSA’s challenge list, and the computation of a discrete logarithm of the same size (795 bits):

RSA-240 = 12462036678171878406583504460810659043482037465167880575481878888328
966680118821085503603957027250874750986476843845862105486553797025393057189121
768431828636284694840530161441643046806687569941524699318570418303051254959437
1372159029236099
= 509435952285839914555051023580843714132648382024111473186660296521821206469746
700620316443478873837606252372049619334517
* 244624208838318150567813139024002896653802092578931401452041221336558477095178
155258218897735030590669041302045908071447

[…]

The previous records were RSA-768 (768 bits) in December 2009 [2], and a 768-bit prime discrete logarithm in June 2016 [3].

It is the first time that two records for integer factorization and discrete logarithm are broken together, moreover with the same hardware and software.

Both computations were performed with the Number Field Sieve algorithm, using the open-source CADO-NFS software [4].

The sum of the computation time for both records is roughly 4000 core-years, using Intel Xeon Gold 6130 CPUs as a reference (2.1GHz). A rough breakdown of the time spent in the main computation steps is as follows.

RSA-240 sieving: 800 physical core-years
RSA-240 matrix: 100 physical core-years
DLP-240 sieving: 2400 physical core-years
DLP-240 matrix: 700 physical core-years

The computation times above are well below the time that was spent with the previous 768-bit records. To measure how much of this can be attributed to Moore’s law, we ran our software on machines that are identical to those cited in the 768-bit DLP computation [3], and reach the conclusion that sieving for our new record size on these old machines would have taken 25% less time than the reported sieving time of the 768-bit DLP computation.
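
Anyone can sanity-check the result: multiplying the two reported primes must reproduce the 795-bit modulus. A quick check with TypeScript’s native bigints, digits exactly as quoted above:

// RSA-240 and its two reported prime factors, as quoted in the announcement.
const n = BigInt(
  "12462036678171878406583504460810659043482037465167880575481878888328" +
  "966680118821085503603957027250874750986476843845862105486553797025393057189121" +
  "768431828636284694840530161441643046806687569941524699318570418303051254959437" +
  "1372159029236099");
const p = BigInt(
  "509435952285839914555051023580843714132648382024111473186660296521821206469746" +
  "700620316443478873837606252372049619334517");
const q = BigInt(
  "244624208838318150567813139024002896653802092578931401452041221336558477095178" +
  "155258218897735030590669041302045908071447");
console.log(p * q === n);          // expect: true
console.log(n.toString(2).length); // expect: 795 (bits)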

EDITED TO ADD (12/4): News article. Dan Goodin points out that the speed improvements were more due to improvements in the algorithms than from Moore’s Law.

TPM-Fail Attacks Against Cryptographic Coprocessors

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2019/11/tpm-fail_attack.html

Really interesting research: TPM-FAIL: TPM meets Timing and Lattice Attacks, by Daniel Moghimi, Berk Sunar, Thomas Eisenbarth, and Nadia Heninger.

Abstract: Trusted Platform Module (TPM) serves as a hardware-based root of trust that protects cryptographic keys from privileged system and physical adversaries. In this work, we perform a black-box timing analysis of TPM 2.0 devices deployed on commodity computers. Our analysis reveals that some of these devices feature secret-dependent execution times during signature generation based on elliptic curves. In particular, we discovered timing leakage on an Intel firmware-based TPM as well as a hardware TPM. We show how this information allows an attacker to apply lattice techniques to recover 256-bit private keys for ECDSA and ECSchnorr signatures. On Intel fTPM, our key recovery succeeds after about 1,300 observations and in less than two minutes. Similarly, we extract the private ECDSA key from a hardware TPM manufactured by STMicroelectronics, which is certified at Common Criteria (CC) EAL 4+, after fewer than 40,000 observations. We further highlight the impact of these vulnerabilities by demonstrating a remote attack against a strongSwan IPsec VPN that uses a TPM to generate the digital signatures for authentication. In this attack, the remote client recovers the server’s private authentication key by timing only 45,000 authentication handshakes via a network connection.

The vulnerabilities we have uncovered emphasize the difficulty of correctly implementing known constant-time techniques, and show the importance of evolutionary testing and transparent evaluation of cryptographic implementations. Even certified devices that claim resistance against attacks require additional scrutiny by the community and industry, as we learn more about these attacks.

These are real attacks that take between 4 and 20 minutes to extract the key. Intel has a firmware update.

Attack website. News articles. Boing Boing post. Slashdot thread.

Post-quantum TLS now supported in AWS KMS

Post Syndicated from Andrew Hopkins original https://aws.amazon.com/blogs/security/post-quantum-tls-now-supported-in-aws-kms/

AWS Key Management Service (AWS KMS) now supports post-quantum hybrid key exchange for the Transport Layer Security (TLS) network encryption protocol that is used when connecting to KMS API endpoints. In this post, I’ll tell you what post-quantum TLS is, what hybrid key exchange is, why it’s important, how to take advantage of this new feature, and how to give us feedback.

What is post-quantum TLS?

Post-quantum TLS is a feature that adds new, post-quantum cipher suites to the protocol. AWS implements TLS using s2n, a streamlined open source implementation of TLS. In June 2019, AWS introduced post-quantum s2n, which implements two proposed post-quantum hybrid cipher suites specified in this IETF draft. The cipher suites specify a key exchange that provides the security protections of both the classical and post-quantum schemes.

Why is this important?

A large-scale quantum computer would break the current public key cryptography that is used for key exchange in every TLS connection. While a large-scale quantum computer is not available today, it’s still important to think about and plan for your long-term security needs. TLS traffic recorded today could be decrypted by a large-scale quantum computer in the future. If you’re developing applications that rely on the long-term confidentiality of data passed over a TLS connection, you should consider a plan to migrate to post-quantum cryptography before a large-scale quantum computer is available for use by potential adversaries. AWS is working to prepare for this future, and we want you to be well-prepared, too.

We’re offering this feature now instead of waiting so you’ll have a way to measure the potential performance impact to your applications, and you’ll have the additional benefit of the protection afforded by the proposed post-quantum schemes today. While we believe the use of this feature raises the already high security bar for connecting to KMS endpoints, these new cipher suites will have an impact on bandwidth utilization, latency, and could also create issues for intermediate systems that proxy TLS connections. We’d like to get feedback from you on the effectiveness of our implementation so we can improve it over time.

Some background on post-quantum TLS

Today, all requests to AWS KMS use TLS with one of two key exchange schemes: FFDHE or ECDHE.

FFDHE and ECDHE are industry standards for secure key exchange. KMS uses only ephemeral keys for TLS key negotiation; this ensures every connection uses a unique key, and the compromise of one connection does not affect the security of another. These schemes are secure today against known cryptanalysis techniques that use classical computers; however, they are not secure against known attacks that use a large-scale quantum computer. In the future, a sufficiently capable large-scale quantum computer could run Shor’s Algorithm to recover the TLS session key of a recorded session, and therefore gain access to the data inside. Protecting against a large-scale quantum computer requires using a post-quantum key exchange algorithm during the TLS handshake.
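
To see the classical half in code, here is a minimal ephemeral ECDH sketch in TypeScript using Node’s crypto module (my own illustration, not s2n’s code); this is exactly the kind of exchange Shor’s Algorithm would break:

import { createECDH } from "crypto";

// Each TLS connection gets brand-new ECDHE keypairs on both sides.
const client = createECDH("prime256v1"); // secp256r1
const server = createECDH("prime256v1");
const clientPub = client.generateKeys();
const serverPub = server.generateKeys();

// Both sides derive the same session secret. Because nothing long-lived
// is used, compromising one session reveals nothing about another.
const a = client.computeSecret(serverPub);
const b = server.computeSecret(clientPub);
console.log(a.equals(b)); // true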

The possibility of large-scale quantum computing has spurred the development of new quantum-resistant cryptographic algorithms. The National Institute of Standards and Technology (NIST) has started the process of standardizing post-quantum cryptographic algorithms. AWS contributed to two NIST submissions: BIKE and SIKE.

BIKE and SIKE are Key Encapsulation Mechanisms (KEMs); a KEM is a type of key exchange used to establish a shared symmetric key. Post-quantum s2n only uses ephemeral BIKE and SIKE keys.

The NIST standardization process isn’t expected to complete until 2024. Until then, there is a risk that the exclusive use of proposed algorithms like BIKE and SIKE could expose data in TLS connections to security vulnerabilities not yet discovered. To mitigate this risk and use these new post-quantum schemes safely today, we need a way to combine classical algorithms with the expected post-quantum security of the new algorithms submitted to NIST. The Hybrid Post-Quantum Key Encapsulation Methods for Transport Layer Security 1.2 IETF draft describes how to combine BIKE and SIKE with ECDHE to create two new cipher suites for TLS.

These two cipher suites use a hybrid key exchange that performs two independent key exchanges during the TLS handshake and then cryptographically combines the keys into a single TLS session key. This strategy combines the high assurance of a classical key exchange with the security of the proposed post-quantum key exchanges.
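
Conceptually, the hybrid scheme runs both exchanges and feeds both outputs into the key schedule, so an attacker must break both to recover the session key. A simplified sketch (my own illustration, with HKDF standing in for the TLS key schedule, not s2n’s actual code):

import { hkdfSync } from "crypto";

// Two independent shared secrets from the same handshake:
// classical ECDHE plus a post-quantum KEM such as BIKE or SIKE.
function hybridSessionKey(ecdheSecret: Buffer, kemSecret: Buffer): Buffer {
  // Concatenate, then derive: recovering the session key requires
  // recovering BOTH input secrets.
  const ikm = Buffer.concat([ecdheSecret, kemSecret]);
  return Buffer.from(hkdfSync("sha256", ikm, Buffer.alloc(32), "hybrid tls", 32));
}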

The effect of hybrid post-quantum TLS on performance

Post-quantum cipher suites have a different performance profile and different bandwidth requirements than traditional cipher suites. We measured the latency and bandwidth for a single handshake on an EC2 c5.2xlarge instance. This provides a baseline for what to expect when you connect to KMS with the SDK. Your exact results will depend on your hardware (CPU speed and number of cores), existing workloads (how often you call KMS and what other work your application performs), and your network (location and capacity).

BIKE and SIKE have different performance tradeoffs: BIKE has faster computations and large keys, and SIKE has slower computations and smaller keys. The tables below show the results of the AWS measurements. ECDHE, a classic cryptographic key exchange algorithm, is included by itself for comparison.

Table 1

TLS Message          ECDHE (bytes)   ECDHE w/ BIKE (bytes)   ECDHE w/ SIKE (bytes)
ClientHello          139             147                     147
ServerKeyExchange    329             2,875                   711
ClientKeyExchange    66              2,610                   470

Table 1 shows the amount of data (in bytes) sent in each TLS message. The ClientHello message is larger for post-quantum cipher suites because they include a new ClientHello extension. The key exchange messages are larger because they include BIKE or SIKE messages.

Table 2

Item                     ECDHE (ms)   ECDHE w/ BIKE (ms)   ECDHE w/ SIKE (ms)
Server processing time   0.112        0.269                5.53
Client processing time   0.10         0.395                7.05
Total handshake time     1.19         25.58                155.08

Table 2 shows the time (in milliseconds) a client and server in the same region take to complete a handshake. Server processing time includes: key generation, signing the server key exchange message, and processing the client key exchange message. The client processing time includes: verifying the server’s certificate, processing the server key exchange message, and generating the client key exchange message. The total time was measured on the client from the start of the handshake to the end and includes network transfer time. All connections used RSA authentication with a 2048-bit key, and ECDHE used the secp256r1 curve. The BIKE test used the BIKE-1 Level 1 parameter and the SIKE test used the SIKEp503 parameter.

A TLS handshake is only performed once to set up a new connection, and the SDK will reuse connections for multiple KMS requests when possible. This means you don’t want to include measurements of subsequent round-trips under an existing TLS session, otherwise you will skew your performance data.

How to use hybrid post-quantum cipher suites

Note: The “AWS CRT HTTP Client” in the aws-crt-dev-preview branch of the aws-sdk-java-v2 repository is a beta release. This beta release and your use are subject to Section 1.10 (“Beta Service Participation”) of the AWS Service Terms.

To use the post-quantum cipher suites with AWS KMS, you’ll need the Developer Previews of the Java SDK 2.0 and the AWS Common Runtime. You’ll need to configure the AWS Common Runtime HTTP client to use s2n’s post-quantum hybrid cipher suites, and configure the AWS Java SDK 2.0 to use that HTTP client. This client can then be used when connecting to any KMS endpoints, but only those endpoints that are not using FIPS 140-2 validated crypto for the TLS termination. For example, kms.<region>.amazonaws.com supports the use of post-quantum cipher suites, while kms-fips.<region>.amazonaws.com does not.

To see a complete example of everything set up, check out the example application here.

Figure 1: GitHub and package layout

Figure 1 shows the GitHub and package layout. The steps below will walk you through building and configuring the SDK.

  1. Download the Java SDK v2 Common Runtime Developer Preview:
    
    $ git clone git@github.com:aws/aws-sdk-java-v2.git --branch aws-crt-dev-preview
    $ cd aws-sdk-java-v2
    

  2. Build the aws-crt-client JAR:
    
    $ mvn install -Pquick
    

  3. In your project add the AWS Common Runtime client to your Maven Dependencies:
    
    <dependency>
        <groupId>software.amazon.awssdk</groupId>
        <artifactId>aws-crt-client</artifactId>
        <version>2.10.7-SNAPSHOT</version>
    </dependency>
    

  4. Configure the new SDK and cipher suite in your application’s existing initialization code:
    
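    // TLS_CIPHER_KMS_PQ_TLSv1_0_2019_06 comes from the AWS Common Runtime's
    // TlsCipherPreference enum; assuming the usual static import here, e.g.:
    // import static software.amazon.awssdk.crt.io.TlsCipherPreference.TLS_CIPHER_KMS_PQ_TLSv1_0_2019_06;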
    if(!TLS_CIPHER_KMS_PQ_TLSv1_0_2019_06.isSupported()){
        throw new RuntimeException("Post Quantum Ciphers not supported on this Platform");
    }
    SdkAsyncHttpClient awsCrtHttpClient = AwsCrtAsyncHttpClient.builder()
              .tlsCipherPreference(TLS_CIPHER_KMS_PQ_TLSv1_0_2019_06)
              .build();
    KmsAsyncClient kms = KmsAsyncClient.builder()
             .httpClient(awsCrtHttpClient)
             .build();
    ListKeysResponse response = kms.listKeys().get();
    

Now, all connections made to AWS KMS in supported regions will use the new hybrid post-quantum cipher suites.

Things to try

Here are some ideas about how to use this post-quantum-enabled client:

  • Run load tests and benchmarks. These new cipher suites perform differently than traditional key exchange algorithms. You might need to adjust your connection timeouts to allow for the longer handshake times or, if you’re running inside an AWS Lambda function, extend the execution timeout setting.
  • Try connecting from different locations. Depending on the network path your request takes, you might discover that intermediate hosts, proxies, or firewalls with deep packet inspection (DPI) block the request. This could be due to the new cipher suites in the ClientHello or the larger key exchange messages. If this is the case, you might need to work with your Security team or IT administrators to update the relevant configuration to unblock the new TLS cipher suites. We’d like to hear from you about how your infrastructure interacts with this new variant of TLS traffic.

More info

If you’re interested in learning more about post-quantum cryptography, check out:

Conclusion

In this blog post, I introduced you to the topic of post-quantum security and covered what AWS and NIST are doing to address the issue. I also showed you how to begin experimenting with hybrid post-quantum key exchange algorithms for TLS when connecting to KMS endpoints.

If you have feedback about this blog post, submit comments in the Comments section below. If you have questions about how to configure the HTTP client or its interaction with KMS endpoints, please start a new thread on the AWS KMS discussion forum.

Former FBI General Counsel Jim Baker Chooses Encryption Over Backdoors

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2019/10/former_fbi_gene.html

In an extraordinary essay, the former FBI general counsel Jim Baker makes the case for strong encryption over government-mandated backdoors:

In the face of congressional inaction, and in light of the magnitude of the threat, it is time for governmental authorities — including law enforcement — to embrace encryption because it is one of the few mechanisms that the United States and its allies can use to more effectively protect themselves from existential cybersecurity threats, particularly from China. This is true even though encryption will impose costs on society, especially victims of other types of crime.

[…]

I am unaware of a technical solution that will effectively and simultaneously reconcile all of the societal interests at stake in the encryption debate, such as public safety, cybersecurity and privacy as well as simultaneously fostering innovation and the economic competitiveness of American companies in a global marketplace.

[…]

All public safety officials should think of protecting the cybersecurity of the United States as an essential part of their core mission to protect the American people and uphold the Constitution. And they should be doing so even if there will be real and painful costs associated with such a cybersecurity-forward orientation. The stakes are too high and our current cybersecurity situation too grave to adopt a different approach.

Basically, he argues that the security value of strong encryption greatly outweighs the security value of encryption that can be bypassed. He endorses a “defense dominant” strategy for Internet security.

Keep in mind that Baker led the FBI’s legal case against Apple regarding the San Bernardino shooter’s encrypted iPhone. In writing this piece, Baker joins the growing list of former law enforcement and national security senior officials who have come out in favor of strong encryption over backdoors: Michael Hayden, Michael Chertoff, Richard Clarke, Ash Carter, William Lynn, and Mike McConnell.

Edward Snowden also agrees.

EDITED TO ADD: Good commentary from Cory Doctorow.

Tales from the Crypt(o team)

Post Syndicated from Nick Sullivan original https://blog.cloudflare.com/tales-from-the-crypt-o-team/

Halloween season is upon us. This week we’re sharing a series of blog posts about work being done at Cloudflare involving cryptography, one of the spookiest technologies around. So bookmark this page and come back every day for tricks, treats, and deep technical content.

A long-term mission

Cryptography is one of the most powerful technological tools we have, and Cloudflare has been at the forefront of using cryptography to help build a better Internet. Of course, we haven’t been alone on this journey. Making meaningful changes to the way the Internet works requires time, effort, experimentation, momentum, and willing partners. Cloudflare has been involved with several multi-year efforts to leverage cryptography to help make the Internet better.

Here are some highlights to expect this week:

  • We’re renewing Cloudflare’s commitment to privacy-enhancing technologies by sharing some of the recent work being done on Privacy Pass
  • We’re helping forge a path to a quantum-safe Internet by sharing some of the results of the Post-quantum Cryptography experiment
  • We’re sharing the Rust-based software we use to power time.cloudflare.com
  • We’re doing a deep dive into the technical details of Encrypted DNS
  • We’re announcing support for a new technique we developed with industry partners to help keep TLS private keys more secure

The milestones we’re sharing this week would not be possible without partnerships with companies, universities, and individuals working in good faith to help build a better Internet together. Hopefully, this week provides a fun peek into the future of the Internet.

Dark Web Site Taken Down without Breaking Encryption

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2019/10/dark_web_site_t.html

The US Department of Justice unraveled a dark web child-porn website, leading to the arrest of 337 people in at least 18 countries. This was all accomplished not through any backdoors in communications systems, but by analyzing the bitcoin transactions and following the money:

Welcome to Video made money by charging fees in bitcoin, and gave each user a unique bitcoin wallet address when they created an account. Son operated the site as a Tor hidden service, a dark web site with a special address that helps mask the identity of the site’s host and its location. But Son and others made mistakes that allowed law enforcement to track them. For example, according to the indictment, very basic assessments of the Welcome to Video website revealed two unconcealed IP addresses managed by a South Korean internet service provider and assigned to an account that provided service to Son’s home address. When agents searched Son’s residence, they found the server running Welcome to Video.

To “follow the money,” as officials put it in Wednesday’s press conference, law enforcement agents sent fairly small amounts of bitcoin — roughly equivalent at the time to $125 to $290 — to the bitcoin wallets Welcome to Video listed for payments. Since the bitcoin blockchain leaves all transactions visible and verifiable, they could observe the currency in these wallets being transferred to another wallet. Law enforcement learned from a bitcoin exchange that the second wallet was registered to Son with his personal phone number and one of his personal email addresses.

Remember this the next time some law enforcement official tells us that they’re powerless to investigate crime without breaking cryptography for everyone.

More news articles. The indictment is here. Some of it is pretty horrifying to read.

NordVPN Breached

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2019/10/nordvpn_breache.html

There was a successful attack against NordVPN:

Based on the command log, another of the leaked secret keys appeared to secure a private certificate authority that NordVPN used to issue digital certificates. Those certificates might be issued for other servers in NordVPN’s network or for a variety of other sensitive purposes. The name of the third certificate suggested it could also have been used for many different sensitive purposes, including securing the server that was compromised in the breach.

The revelations came as evidence surfaced suggesting that two rival VPN services, TorGuard and VikingVPN, also experienced breaches that leaked encryption keys. In a statement, TorGuard said a secret key for a transport layer security certificate for *.torguardvpnaccess.com was stolen. The theft happened in a 2017 server breach. The stolen data related to a squid proxy certificate.

TorGuard officials said on Twitter that the private key was not on the affected server and that attackers “could do nothing with those keys.” Monday’s statement went on to say TorGuard didn’t remove the compromised server until early 2018. TorGuard also said it learned of VPN breaches last May, “and in a related development we filed a legal complaint against NordVPN.”

The breach happened nineteen months ago, but the company is only just disclosing it to the public. We don’t know exactly what was stolen and how it affects VPN security. More details are needed.

VPNs are a shadowy world. We use them to protect our Internet traffic when we’re on a network we don’t trust, but we’re forced to trust the VPN instead. Recommendations are hard. NordVPN’s website says that the company is based in Panama. Do we have any reason to trust it at all?

I’m curious what VPNs others use, and why they should be believed to be trustworthy.

Calculating the Benefits of the Advanced Encryption Standard

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2019/10/calculating_the.html

NIST has completed a study — it was published last year, but I just saw it recently — calculating the costs and benefits of the Advanced Encryption Standard.

From the conclusion:

The result of performing that operation on the series of cumulated benefits extrapolated for the 169 survey respondents finds that present value of benefits from today’s perspective is approximately $8.9 billion. On the other hand, the present value of NIST’s costs from today’s perspective is $127 million. Thus, the NPV from today’s perspective is $8,772,000,000; the B/C ratio is therefore 70.2/1; and a measure (explained in detail in Section 6.1) of the IRR for the alternative investment perspective is 31%; all are indicators of a substantial economic impact.

Extending the approach of looking back from 2017 to the larger national economy required the selection of economic sectors best represented by the 169 survey respondents. The economic sectors represented by ten or more survey respondents include the following: agriculture; construction; manufacturing; retail trade; transportation and warehousing; information; real estate rental and leasing; professional, scientific, and technical services; management services; waste management; educational services; and arts and entertainment. Looking at the present value of benefits and costs from 2017’s perspective for these economic sectors finds that the present value of benefits rises to approximately $251 billion while the present value of NIST’s costs from today’s perspective remains the same at $127 million. Therefore, the NPV of the benefits of the AES program to the national economy from today’s perspective is $250,473,200,000; the B/C ratio is roughly 1976/1; and the appropriate, alternative (explained in Section 6.1) IRR and investing proceeds at the social rate of return is 53.6%.

The report contains lots of facts and figures relevant to crypto policy debates, including the chaotic nature of crypto markets in the mid-1990s, the number of approved devices and libraries of various kinds since then, other standards that invoke AES, and so on.

There’s a lot to argue with about the methodology and the assumptions. I don’t know if I buy that the benefits of AES to the economy are in the billions of dollars, mostly because we in the cryptographic community would have come up with alternative algorithms to triple-DES that would have been accepted and used. Still, I like seeing this kind of analysis about security infrastructure. Security is an enabling technology; it doesn’t do anything by itself, but instead allows all sorts of things to be done. And I certainly agree that the benefits of a standardized encryption algorithm that we all trust and use outweigh the cost by orders of magnitude.

And this isn’t the first time NIST has conducted economic impact studies. It released a study of the economic impact of DES in 2001.

Factoring 2048-bit Numbers Using 20 Million Qubits

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2019/10/factoring_2048-.html

This theoretical paper shows how to factor 2048-bit RSA moduli with a 20-million qubit quantum computer in eight hours. It’s interesting work, but I don’t want to overstate the risk.

We know from Shor’s Algorithm that both factoring and discrete logs are easy to solve on a large, working quantum computer. Both of those are currently beyond our technological abilities. We barely have quantum computers with 50 to 100 qubits. Extending this requires advances not only in the number of qubits we can work with, but in making the system stable enough to read any answers. You’ll hear this called “error rate” or “coherence” — this paper talks about “noise.”

Advances are hard. At this point, we don’t know if they’re “send a man to the moon” hard or “faster-than-light travel” hard. If I were guessing, I would say they’re the former, but still harder than we can accomplish with our current understanding of physics and technology.

I write about all this generally, and in detail, here. (Short summary: Our work on quantum-resistant algorithms is outpacing our work on quantum computers, so we’ll be fine in the short run. But future theoretical work on quantum computing could easily change what “quantum resistant” means, so it’s possible that public-key cryptography will simply not be possible in the long run. That’s not terrible, though; we have a lot of good scalable secret-key systems that do much the same things.)

Blockchain Overview – Types, Use-Cases, Security and Usability [slides]

Post Syndicated from Bozho original https://techblog.bozho.net/blockchain-overview-types-use-cases-security-and-usability-slides/

This week I’m giving a talk at a meetup about blockchain beyond the hype – its actual implementation issues and proper use-cases.

The slides can be found here:

The main takeaways are:

  • Think of blockchain in specifics, not in high-level “magic”
  • Tamper-evident data structures are cool and you should be familiar with them – Merkle trees, hash chains, etc. They are useful for other things as well, e.g. certificate transparency (see the sketch after this list)
  • Blockchain and its cryptography is perfect for protecting data integrity, which is part of the CIA triad of information security
  • Many proposed use-cases can be solved with centralized solutions + trusted timestamps instead
  • Usability is a major issue when it comes to wider adoption
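
To give a flavor of how little machinery tamper-evidence needs, here is a hash chain in a few lines of TypeScript (illustrative only; a Merkle tree adds efficient membership proofs on top of the same idea):

import { createHash } from "crypto";

const sha256 = (s: string) => createHash("sha256").update(s).digest("hex");

// Each entry commits to the previous one, so editing any historical
// record changes every hash after it and is immediately evident.
type Entry = { data: string; prevHash: string; hash: string };

function append(chain: Entry[], data: string): Entry[] {
  const prevHash = chain.length ? chain[chain.length - 1].hash : "0".repeat(64);
  return [...chain, { data, prevHash, hash: sha256(prevHash + data) }];
}

function verify(chain: Entry[]): boolean {
  return chain.every((e, i) =>
    e.hash === sha256(e.prevHash + e.data) &&
    (i === 0 || e.prevHash === chain[i - 1].hash)
  );
}

let chain = append(append([], "tx1"), "tx2");
console.log(verify(chain)); // true
chain[0].data = "tx1-tampered";
console.log(verify(chain)); // false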

As with anything in technology – use the right tool for the job, as no solution solves every problem.

The post Blockchain Overview – Types, Use-Cases, Security and Usability [slides] appeared first on Bozho's tech blog.