Tag Archives: cryptanalysis

Apple Announces Post-Quantum Encryption Algorithms for iMessage

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2024/02/apple-announces-post-quantum-encryption-algorithms-for-imessage.html

Apple announced PQ3, its post-quantum encryption standard based on the Kyber key-encapsulation mechanism, one of the post-quantum algorithms selected by NIST in 2022.

There’s a lot of detail in the Apple blog post, and more in Douglas Stebila’s security analysis.

I am of two minds about this. On the one hand, it’s probably premature to switch to any particular post-quantum algorithms. The mathematics of cryptanalysis for these lattice and other systems is still rapidly evolving, and we’re likely to break more of them—and learn a lot in the process—over the coming few years. But if you’re going to make the switch, this is an excellent choice. And Apple’s ability to do this so efficiently speaks well of its algorithmic agility, which is probably more important than its particular cryptographic design. And it is probably about the right time to worry about, and defend against, attackers who are storing encrypted messages in hopes of breaking them later on future quantum computers.
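One detail worth noting: PQ3 is a hybrid design, combining an elliptic-curve shared secret with a Kyber-encapsulated one, so an attacker has to break both to recover the session key. Here is a minimal Python sketch of that combining step, using the pyca/cryptography HKDF; the two input secrets are random stand-ins, since the ECDH exchange and Kyber encapsulation are out of scope here, and the info label is made up:

    import os

    from cryptography.hazmat.primitives import hashes
    from cryptography.hazmat.primitives.kdf.hkdf import HKDF

    # Stand-ins for the two shared secrets. In a real hybrid protocol these
    # come from an ECDH exchange and a Kyber (ML-KEM) encapsulation.
    ecdh_secret = os.urandom(32)
    kyber_secret = os.urandom(32)

    # Deriving the session key from both secrets at once means it stays
    # secret unless BOTH underlying hard problems are broken.
    session_key = HKDF(
        algorithm=hashes.SHA256(),
        length=32,
        salt=None,
        info=b"example-hybrid-combine",  # hypothetical label
    ).derive(ecdh_secret + kyber_secret)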

Improving the Cryptanalysis of Lattice-Based Public-Key Algorithms

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2024/02/improving-the-cryptanalysis-of-lattice-based-public-key-algorithms.html

The winner of the Best Paper Award at Crypto this year was a significant improvement to lattice-based cryptanalysis.

This is important, because a bunch of NIST’s post-quantum options base their security on lattice problems.

I worry about standardizing on post-quantum algorithms too quickly. We are still learning a lot about the security of these systems, and this paper is an example of that learning.

News story.

Improving Shor’s Algorithm

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2024/01/improving-shors-algorithm.html

We don’t have a useful quantum computer yet, but we do have quantum algorithms. Shor’s algorithm has the potential to factor large numbers faster than otherwise possible, which—if the run times are actually feasible—could break both the RSA and Diffie-Hellman public-key algorithms.

Now, computer scientist Oded Regev has developed a significant speed-up to Shor’s algorithm, at the cost of more storage.

Details are in this article. Here’s the result:

The improvement was profound. The number of elementary logical steps in the quantum part of Regev’s algorithm is proportional to n^1.5 when factoring an n-bit number, rather than n^2 as in Shor’s algorithm. The algorithm repeats that quantum part a few dozen times and combines the results to map out a high-dimensional lattice, from which it can deduce the period and factor the number. So the algorithm as a whole may not run faster, but speeding up the quantum part by reducing the number of required steps could make it easier to put it into practice.

Of course, the time it takes to run a quantum algorithm is just one of several considerations. Equally important is the number of qubits required, which is analogous to the memory required to store intermediate values during an ordinary classical computation. The number of qubits that Shor’s algorithm requires to factor an n-bit number is proportional to n, while Regev’s algorithm in its original form requires a number of qubits proportional to n^1.5—a big difference for 2,048-bit numbers.
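For context, the quantum part of both algorithms does the period-finding; turning a period into factors is simple classical post-processing. Here is a minimal Python sketch of that classical reduction, with a brute-force stand-in for the quantum order-finding step (feasible only for toy numbers):

    from math import gcd

    def find_order(a, N):
        # Classical stand-in for the quantum subroutine: the smallest r > 0
        # with a^r = 1 (mod N). This is the step a quantum computer speeds up.
        r, x = 1, a % N
        while x != 1:
            x = (x * a) % N
            r += 1
        return r

    def factor_from_order(N, a):
        # Shor's classical reduction from order-finding to factoring.
        g = gcd(a, N)
        if g != 1:
            return g, N // g           # lucky: a shares a factor with N
        r = find_order(a, N)
        if r % 2 == 1:
            return None                # odd order: retry with another a
        y = pow(a, r // 2, N)
        if y == N - 1:
            return None                # trivial square root: retry
        p = gcd(y - 1, N)
        return (p, N // p) if 1 < p < N else None

    print(factor_from_order(15, 7))    # (3, 5)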

Again, this is all still theoretical. But now it’s theoretically faster.

Oded Regev’s paper.

This is me from 2018 on the potential for quantum cryptanalysis. I still believe now what I wrote then.

Model Extraction Attack on Neural Networks

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2023/10/model-extraction-attack-on-neural-networks.html

Adi Shamir et al. have a new model extraction attack on neural networks:

Polynomial Time Cryptanalytic Extraction of Neural Network Models

Abstract: Billions of dollars and countless GPU hours are currently spent on training Deep Neural Networks (DNNs) for a variety of tasks. Thus, it is essential to determine the difficulty of extracting all the parameters of such neural networks when given access to their black-box implementations. Many versions of this problem have been studied over the last 30 years, and the best current attack on ReLU-based deep neural networks was presented at Crypto’20 by Carlini, Jagielski, and Mironov. It resembles a differential chosen plaintext attack on a cryptosystem, which has a secret key embedded in its black-box implementation and requires a polynomial number of queries but an exponential amount of time (as a function of the number of neurons).

In this paper, we improve this attack by developing several new techniques that enable us to extract with arbitrarily high precision all the real-valued parameters of a ReLU-based DNN using a polynomial number of queries and a polynomial amount of time. We demonstrate its practical efficiency by applying it to a full-sized neural network for classifying the CIFAR10 dataset, which has 3072 inputs, 8 hidden layers with 256 neurons each, and about 1.2 million neuronal parameters. An attack following the approach by Carlini et al. requires an exhaustive search over 2^256 possibilities. Our attack replaces this with our new techniques, which require only 30 minutes on a 256-core computer.
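The attack’s basic primitive is easy to illustrate: a ReLU network is piecewise linear, so its restriction to any line through input space is a piecewise-linear function of one variable, and the “kinks” where some neuron switches on or off can be located using black-box queries alone. Below is a minimal, hypothetical Python sketch of locating one such kink by binary search; the actual attack assembles many such critical points into recovered weights, which is far more involved:

    def slope(f, t, h=1e-4):
        # Forward finite-difference slope of the 1-D restriction f(t).
        return (f(t + h) - f(t)) / h

    def find_kink(f, lo, hi, tol=1e-6):
        # Binary search for the point where a piecewise-linear function
        # changes slope; assumes exactly one kink inside [lo, hi].
        s_lo = slope(f, lo)
        while hi - lo > tol:
            mid = (lo + hi) / 2
            if abs(slope(f, mid) - s_lo) > 1e-3:
                hi = mid               # slope has changed: kink is behind us
            else:
                lo = mid               # still on the starting linear piece
        return (lo + hi) / 2

    # Toy black box: one ReLU neuron restricted to a line, plus a linear term.
    g = lambda t: max(0.0, 2.0 * t - 1.0) + 0.3 * t
    print(find_kink(g, 0.0, 1.0))      # ~0.5, where the neuron activates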

Security Analysis of Threema

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2023/01/security-analysis-of-threema.html

A group of Swiss researchers have published an impressive security analysis of Threema.

We provide an extensive cryptographic analysis of Threema, a Swiss-based encrypted messaging application with more than 10 million users and 7000 corporate customers. We present seven different attacks against the protocol in three different threat models. As one example, we present a cross-protocol attack which breaks authentication in Threema and which exploits the lack of proper key separation between different sub-protocols. As another, we demonstrate a compression-based side-channel attack that recovers users’ long-term private keys through observation of the size of Threema encrypted back-ups. We discuss remediations for our attacks and draw three wider lessons for developers of secure protocols.
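The compression-based side channel rests on the same principle as the CRIME attack on TLS: when attacker-influenced data is compressed together with a secret, the compressed size leaks how much the two overlap. A toy Python illustration of the principle (the secret, field name, and backup format here are all invented; this is not Threema’s actual format):

    import zlib

    SECRET = b"private_key=deadbeefcafef00d"  # hypothetical secret material

    def backup_size(attacker_bytes: bytes) -> int:
        # Toy model: attacker-influenced data compressed alongside a secret,
        # with only the compressed length observable.
        return len(zlib.compress(attacker_bytes + SECRET))

    # A guess that matches a prefix of the secret compresses measurably
    # better, so an attacker can confirm guesses byte by byte.
    print(backup_size(b"private_key=deadbeef"))   # smaller
    print(backup_size(b"private_key=00000000"))   # larger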

From a news article:

Threema has more than 10 million users, which include the Swiss government, the Swiss army, German Chancellor Olaf Scholz, and other politicians in that country. Threema developers advertise it as a more secure alternative to Meta’s WhatsApp messenger. It’s among the top Android apps for a fee-based category in Switzerland, Germany, Austria, Canada, and Australia. The app uses a custom-designed encryption protocol in contravention of established cryptographic norms.

The company is performing the usual denials and deflections:

In a web post, Threema officials said the vulnerabilities applied to an old protocol that’s no longer in use. They also said the researchers were overselling their findings.

“While some of the findings presented in the paper may be interesting from a theoretical standpoint, none of them ever had any considerable real-world impact,” the post stated. “Most assume extensive and unrealistic prerequisites that would have far greater consequences than the respective finding itself.”

Left out of the statement is that the protocol the researchers analyzed is old because they disclosed the vulnerabilities to Threema, and Threema updated it.

Breaking RSA with a Quantum Computer

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2023/01/breaking-rsa-with-a-quantum-computer.html

A group of Chinese researchers have just published a paper claiming that they can—although they have not yet done so—break 2048-bit RSA. This is something to take seriously. It might not be correct, but it’s not obviously wrong.

We have long known from Shor’s algorithm that factoring with a quantum computer is easy. But it takes a big quantum computer, on the order of millions of qubits, to factor anything resembling the key sizes we use today. What the researchers have done is combine classical lattice reduction factoring techniques with a quantum approximate optimization algorithm. This means that they only need a quantum computer with 372 qubits, which is well within what’s possible today. (The IBM Osprey is a 433-qubit quantum computer, for example. Others are on their way as well.)

The Chinese group didn’t have that large a quantum computer to work with. They were able to factor 48-bit numbers using a 10-qubit quantum computer. And while there are always potential problems when scaling something like this up by a factor of 50, there are no obvious barriers.

Honestly, most of the paper is over my head—both the lattice-reduction math and the quantum physics. And there’s the nagging question of why the Chinese government didn’t classify this research. But…wow…maybe…and yikes! Or not.

Factoring integers with sublinear resources on a superconducting quantum processor

Abstract: Shor’s algorithm has seriously challenged information security based on public key cryptosystems. However, to break the widely used RSA-2048 scheme, one needs millions of physical qubits, which is far beyond current technical capabilities. Here, we report a universal quantum algorithm for integer factorization by combining the classical lattice reduction with a quantum approximate optimization algorithm (QAOA). The number of qubits required is O(log N / log log N), which is sublinear in the bit length of the integer N, making it the most qubit-saving factorization algorithm to date. We demonstrate the algorithm experimentally by factoring integers up to 48 bits with 10 superconducting qubits, the largest integer factored on a quantum device. We estimate that a quantum circuit with 372 physical qubits and a depth of thousands is necessary to challenge RSA-2048 using our algorithm. Our study shows great promise in expediting the application of current noisy quantum computers, and paves the way to factor large integers of realistic cryptographic significance.

In email, Roger Grimes told me: “Apparently what happened is another guy who had previously announced he was able to break traditional asymmetric encryption using classical computers…but reviewers found a flaw in his algorithm and that guy had to retract his paper. But this Chinese team realized that the step that killed the whole thing could be solved by small quantum computers. So they tested and it worked.”

EDITED TO ADD: One of the issues with the algorithm is that it relies on a recent factoring paper by Claus Schnorr. It’s a controversial paper, and despite the “this destroys the RSA cryptosystem” claim in the abstract, it does nothing of the sort. Schnorr’s algorithm works well with smaller moduli—around the same order as the ones the Chinese group has tested—but falls apart at larger sizes. At this point, nobody understands why. The Chinese paper claims that their quantum techniques get around this limitation (I think that’s what’s behind Grimes’s comment) but gives no details—and they haven’t tested it with larger moduli. So if it’s true that the Chinese paper depends on this Schnorr technique that doesn’t scale, the techniques in this Chinese paper won’t scale, either. (On the other hand, if it does scale, then I think it also breaks a bunch of lattice-based public-key cryptosystems.)

I am now much less worried that this technique will work. But this is something the IBM quantum computing people can test right now.

EDITED TO ADD (1/4): A reporter just asked me my gut feeling about this. I replied that I don’t think this will break RSA. Several times a year, the cryptography community receives “breakthroughs” from people outside the community. That’s why we created the RSA Factoring Challenge: to force people to provide proofs of their claims. In general, the smart bet is on the new techniques not working. But someday, that bet will be wrong. Is it today? Probably not. But it could be. We’re in the worst possible position right now: we don’t have the facts to know. Someone needs to implement the quantum algorithm and see.

EDITED TO ADD (1/5): Scott Aaronson’s take is a “no”:

In the new paper, the authors spend page after page saying-without-saying that it might soon become possible to break RSA-2048, using a NISQ (i.e., non-fault-tolerant) quantum computer. They do so via two time-tested stratagems:

  1. the detailed exploration of irrelevancies (mostly, optimization of the number of qubits, while ignoring the number of gates), and
  2. complete silence about the one crucial point.

Then, finally, they come clean about the one crucial point in a single sentence of the Conclusion section:

It should be pointed out that the quantum speedup of the algorithm is unclear due to the ambiguous convergence of QAOA.

“Unclear” is an understatement here. It seems to me that a miracle would be required for the approach here to yield any benefit at all, compared to just running the classical Schnorr’s algorithm on your laptop. And if the latter were able to break RSA, it would’ve already done so.

All told, this is one of the most actively misleading quantum computing papers I’ve seen in 25 years, and I’ve seen … many.

EDITED TO ADD (1/7): More commentary. Again: no need to panic.

EDITED TO ADD (1/12): Peter Shor has suspicions.

Charles V of Spain Secret Code Cracked

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2022/11/charles-v-of-spain-secret-code-cracked.html

Diplomatic code cracked after 500 years:

In painstaking work backed by computers, Pierrot found “distinct families” of about 120 symbols used by Charles V. “Whole words are encrypted with a single symbol” and the emperor replaced vowels coming after consonants with marks, she said, an inspiration probably coming from Arabic.

As a further obstacle, the emperor used meaningless symbols to mislead any adversary trying to decipher the message.

The breakthrough came in June when Pierrot managed to make out a phrase in the letter, and the team then cracked the code with the help of Camille Desenclos, a historian. “It was painstaking and long work but there was really a breakthrough that happened in one day, where all of a sudden we had the right hypothesis,” she said.

Breaking the Zeppelin Ransomware Encryption Scheme

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2022/11/breaking-the-zeppelin-ransomware-encryption-scheme.html

Brian Krebs writes about how the Zeppelin ransomware encryption scheme was broken:

The researchers said their break came when they understood that while Zeppelin used three different types of encryption keys to encrypt files, they could undo the whole scheme by factoring or computing just one of them: An ephemeral RSA-512 public key that is randomly generated on each machine it infects.

“If we can recover the RSA-512 Public Key from the registry, we can crack it and get the 256-bit AES Key that encrypts the files!” they wrote. “The challenge was that they delete the [public key] once the files are fully encrypted. Memory analysis gave us about a 5-minute window after files were encrypted to retrieve this public key.”

Unit 221B ultimately built a “Live CD” version of Linux that victims could run on infected systems to extract that RSA-512 key. From there, they would load the keys into a cluster of 800 CPUs donated by hosting giant Digital Ocean that would then start cracking them. The company also used that same donated infrastructure to help victims decrypt their data using the recovered keys.
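The structure being attacked here is ordinary hybrid encryption, which is why recovering that one ephemeral key unlocks everything. A rough Python model of the layering, using the pyca/cryptography library (the real malware’s file formats and cipher modes differ; this only shows why factoring the RSA-512 modulus yields the file key):

    import os

    from cryptography.hazmat.primitives.asymmetric import padding, rsa
    from cryptography.hazmat.primitives.ciphers.aead import AESGCM

    # Ephemeral per-victim RSA-512 key. A 512-bit modulus is factorable with
    # a large enough CPU cluster, which is the crack Unit 221B exploited.
    victim_rsa = rsa.generate_private_key(public_exponent=65537, key_size=512)

    # The symmetric key that actually encrypts the files...
    file_key = AESGCM.generate_key(bit_length=256)
    nonce = os.urandom(12)
    encrypted = AESGCM(file_key).encrypt(nonce, b"victim data", None)

    # ...is wrapped under the RSA-512 public key; that public key briefly
    # lives in the registry, which is where the defenders grabbed it.
    wrapped = victim_rsa.public_key().encrypt(file_key, padding.PKCS1v15())

    # Whoever factors the modulus can rebuild the private key and unwrap the
    # file key; here we stand in for that by using the private key directly.
    recovered = victim_rsa.decrypt(wrapped, padding.PKCS1v15())
    assert AESGCM(recovered).decrypt(nonce, encrypted, None) == b"victim data"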

A company offered recovery services based on this break, but it was reluctant to advertise them because it didn’t want Zeppelin’s creators to fix their encryption flaw.

Technical details.

NIST’s Post-Quantum Cryptography Standards

Post Syndicated from Schneier.com Webmaster original https://www.schneier.com/blog/archives/2022/08/nists-post-quantum-cryptography-standards.html

Quantum computing is a completely new paradigm for computers. A quantum computer uses quantum properties such as superposition, which allows a qubit (a quantum bit) to be neither 0 nor 1, but something much more complicated. In theory, such a computer can solve problems too complex for conventional computers.

Current quantum computers are still toy prototypes, and the engineering advances required to build a functionally useful quantum computer are somewhere between a few years away and impossible. Even so, we already know that such a computer could potentially factor large numbers and compute discrete logs, and break the RSA and Diffie-Hellman public-key algorithms in all of the useful key sizes.

Cryptographers hate being rushed into things, which is why NIST began a competition to create a post-quantum cryptographic standard in 2016. The idea is to standardize on both a public-key encryption algorithm and a digital-signature algorithm that are resistant to quantum computing, well before anyone builds a useful quantum computer.

NIST is an old hand at this competitive process, having previously done this with symmetric algorithms (AES in 2001) and hash functions (SHA-3 in 2015). I participated in both of those competitions, and have likened them to demolition derbies. The idea is that participants put their algorithms into the ring, and then we all spend a few years beating on each other’s submissions. Then, with input from the cryptographic community, NIST crowns a winner. It’s a good process, mostly because NIST is both trusted and trustworthy.

In 2017, NIST received eighty-two post-quantum algorithm submissions from all over the world. Sixty-nine were considered complete enough to be Round 1 candidates. Twenty-six advanced to Round 2 in 2019, and seven (plus another eight alternates) were announced as Round 3 finalists in 2020. NIST was poised to make final algorithm selections in 2022, with a plan to have a draft standard available for public comment in 2023.

Cryptanalysis during the competition was brutal. Twenty-five of the Round 1 algorithms were attacked badly enough to remove them from the competition. Another eight were similarly attacked in Round 2. But here’s the real surprise: there were newly published cryptanalysis results against at least four of the Round 3 finalists just months ago—moments before NIST was to make its final decision.

One of the most popular algorithms, Rainbow, was found to be completely broken. Not that it could theoretically be broken with a quantum computer, but that it can be broken today—with an off-the-shelf laptop in just over two days. Three other finalists, Kyber, Saber, and Dilithium, were weakened with new techniques that will probably work against some of the other algorithms as well. (Fun fact: Those three algorithms were broken by the Center of Encryption and Information Security, part of the Israel Defense Forces. This represents the first time a national intelligence organization has published a cryptanalysis result in the open literature. And they had a lot of trouble publishing, as the authors wanted to remain anonymous.)

That was a close call, but it demonstrated that the process is working properly. Remember, this is a demolition derby. The goal is to surface these cryptanalytic results before standardization, which is exactly what happened. At this writing, NIST has chosen a single algorithm for general encryption and three digital-signature algorithms. It has not chosen a public-key encryption algorithm, and there are still four finalists. Check NIST’s webpage on the project for the latest information.

Ian Cassels, British mathematician and World War II cryptanalyst, once said that “cryptography is a mixture of mathematics and muddle, and without the muddle the mathematics can be used against you.” This mixture is particularly difficult to achieve with public-key algorithms, which rely on the mathematics for their security in a way that symmetric algorithms do not. We got lucky with RSA and related algorithms: their mathematics hinge on the problem of factoring, which turned out to be robustly difficult. Post-quantum algorithms rely on other mathematical disciplines and problems—code-based cryptography, hash-based cryptography, lattice-based cryptography, multivariate cryptography, and so on—whose mathematics are both more complicated and less well-understood. We’re seeing these breaks because those core mathematical problems aren’t nearly as well-studied as factoring is.

The moral is the need for cryptographic agility. It’s not enough to implement a single standard; it’s vital that our systems be able to easily swap in new algorithms when required. We’ve learned the hard way how algorithms can get so entrenched in systems that it can take many years to update them: in the transition from DES to AES, and the transition from MD4 and MD5 to SHA, SHA-1, and then SHA-3.
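What does that agility look like concretely? One common pattern, sketched below in Python under illustrative names, is to tag every ciphertext with an algorithm identifier and keep implementations behind a registry, so that migrating to a new primitive means adding an entry and changing a default rather than rewriting the system:

    import os

    from cryptography.hazmat.primitives.ciphers.aead import AESGCM

    # Registry mapping algorithm identifiers to (encrypt, decrypt) pairs.
    # Swapping in a repaired or post-quantum primitive means adding an entry.
    CIPHERS = {}

    def _aesgcm_enc(key, pt):
        nonce = os.urandom(12)
        return nonce + AESGCM(key).encrypt(nonce, pt, None)

    def _aesgcm_dec(key, blob):
        return AESGCM(key).decrypt(blob[:12], blob[12:], None)

    CIPHERS["aes256gcm-v1"] = (_aesgcm_enc, _aesgcm_dec)

    DEFAULT_ALG = "aes256gcm-v1"   # migration is a one-line default change

    def encrypt(key, pt):
        # Store the algorithm identifier with the ciphertext so that old
        # data remains readable after the default moves on.
        return (DEFAULT_ALG, CIPHERS[DEFAULT_ALG][0](key, pt))

    def decrypt(key, tagged):
        alg_id, blob = tagged
        return CIPHERS[alg_id][1](key, blob)

The point is not this particular code but the discipline: the algorithm name travels with the data, and nothing outside the registry hard-codes it.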

We need to do better. In the coming years we’ll be facing a double uncertainty. The first is quantum computing. When and if quantum computing becomes a practical reality, we will learn a lot about its strengths and limitations. It took a couple of decades to fully understand von Neumann computer architecture; expect the same learning curve with quantum computing. Our current understanding of quantum computing architecture will change, and that could easily result in new cryptanalytic techniques.

The second uncertainty is in the algorithms themselves. As the new cryptanalytic results demonstrate, we’re still learning a lot about how to turn hard mathematical problems into public-key cryptosystems. We have too much math and an inability to add more muddle, and that results in algorithms that are vulnerable to advances in mathematics. More cryptanalytic results are coming, and more algorithms are going to be broken.

We can’t stop the development of quantum computing. Maybe the engineering challenges will turn out to be impossible, but it’s not the way to bet. In the face of all that uncertainty, agility is the only way to maintain security.

This essay originally appeared in IEEE Security & Privacy.

EDITED TO ADD: One of the four public-key encryption algorithms selected for further research, SIKE, was just broken.

SIKE Broken

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2022/08/sike-broken.html

SIKE is one of the new algorithms that NIST recently added to the post-quantum cryptography competition.

It was just broken, really badly.

We present an efficient key recovery attack on the Supersingular Isogeny Diffie-Hellman protocol (SIDH), based on a “glue-and-split” theorem due to Kani. Our attack exploits the existence of a small non-scalar endomorphism on the starting curve, and it also relies on the auxiliary torsion point information that Alice and Bob share during the protocol. Our Magma implementation breaks the instantiation SIKEp434, which aims at security level 1 of the Post-Quantum Cryptography standardization process currently run by NIST, in about one hour on a single core.

News article.

Cryptanalysis of ENCSecurity’s Encryption Implementation

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2022/06/cryptanalysis-of-encsecuritys-encryption-implementation.html

ENCSecurity markets a file encryption system, and it’s used by SanDisk, Sony, Lexar, and probably others. Despite using AES as its algorithm, the implementation is flawed in multiple ways—and breakable.

The moral is, as it always is, that implementing cryptography securely is hard. Don’t roll your own anything if you can help it.

Breaking RSA through Insufficiently Random Primes

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2022/03/breaking-rsa-through-insufficiently-random-primes.html

Basically, the SafeZone library doesn’t sufficiently randomize the two prime numbers it uses to generate RSA keys. They’re too close to each other, which makes them vulnerable to recovery.
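The underlying attack is Fermat’s factorization method, which is centuries old: if p and q are close together, then n = pq is a difference of two squares with a tiny search radius around the square root of n. A minimal Python sketch (with toy-sized primes; real RSA primes are around 1024 bits each, which changes nothing about the method):

    from math import isqrt

    def fermat_factor(n):
        # If n = p*q with p close to q, then n = a^2 - b^2 where
        # a = (p+q)/2 sits just above sqrt(n), so this loop ends quickly.
        a = isqrt(n)
        if a * a < n:
            a += 1
        while True:
            b2 = a * a - n
            b = isqrt(b2)
            if b * b == b2:
                return a - b, a + b
            a += 1

    p, q = 1000003, 1000033            # toy primes, deliberately close
    print(fermat_factor(p * q))        # (1000003, 1000033)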

There aren’t many weak keys out there, but there are some:

So far, Böck has identified only a handful of keys in the wild that are vulnerable to the factorization attack. Some of the keys are from printers from two manufacturers, Canon and Fujifilm (originally branded as Fuji Xerox). Printer users can use the keys to generate a Certificate Signing Request. The creation date for all the weak keys was 2020 or later. The weak Canon keys are tracked as CVE-2022-26351.

Böck also found four vulnerable PGP keys, typically used to encrypt email, on SKS PGP key servers. A user ID tied to the keys implied they were created for testing, so he doesn’t believe they’re in active use.

Samsung Encryption Flaw

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2022/03/samsung-encryption-flaw.html

Researchers have found a major encryption flaw in 100 million Samsung Galaxy phones.

From the abstract:

In this work, we expose the cryptographic design and implementation of Android’s Hardware-Backed Keystore in Samsung’s Galaxy S8, S9, S10, S20, and S21 flagship devices. We reverse-engineered and provide a detailed description of the cryptographic design and code structure, and we unveil severe design flaws. We present an IV reuse attack on AES-GCM that allows an attacker to extract hardware-protected key material, and a downgrade attack that makes even the latest Samsung devices vulnerable to the IV reuse attack. We demonstrate working key extraction attacks on the latest devices. We also show the implications of our attacks on two higher-level cryptographic protocols between the TrustZone and a remote server: we demonstrate a working FIDO2 WebAuthn login bypass and a compromise of Google’s Secure Key Import.

Here are the details:

As we discussed in Section 3, the wrapping key used to encrypt the key blobs (HDK) is derived using a salt value computed by the Keymaster TA. In v15 and v20-s9 blobs, the salt is a deterministic function that depends only on the application ID and application data (and constant strings), which the Normal World client fully controls. This means that for a given application, all key blobs will be encrypted using the same key. As the blobs are encrypted in AES-GCM mode-of-operation, the security of the resulting encryption scheme depends on its IV values never being reused.

Gadzooks. That’s a really embarrassing mistake. GCM needs a new nonce for every encryption. Samsung took a secure cipher mode and implemented it insecurely.
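The failure is easy to demonstrate. GCM is counter mode underneath, so two messages encrypted under the same key and nonce share a keystream, and XORing the two ciphertexts cancels it out; nonce reuse also leaks the authentication key, enabling forgeries. A short demonstration with the pyca/cryptography library:

    import os

    from cryptography.hazmat.primitives.ciphers.aead import AESGCM

    key = AESGCM.generate_key(bit_length=256)
    nonce = os.urandom(12)        # the bug: the same nonce, used twice

    pt1 = b"attack at dawn!!"
    pt2 = b"retreat at nine!"
    ct1 = AESGCM(key).encrypt(nonce, pt1, None)   # ciphertext || 16-byte tag
    ct2 = AESGCM(key).encrypt(nonce, pt2, None)

    # XOR of the ciphertexts equals XOR of the plaintexts: the shared
    # keystream cancels, so known plaintext in one message exposes the other.
    xor_cts = bytes(a ^ b for a, b in zip(ct1[:16], ct2[:16]))
    xor_pts = bytes(a ^ b for a, b in zip(pt1, pt2))
    assert xor_cts == xor_pts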

News article.

Decrypting Hive Ransomware Data

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2022/03/decrypting-hive-ransomware-data.html

Nice piece of research:

Abstract: Among the many types of malicious codes, ransomware poses a major threat. Ransomware encrypts data and demands a ransom in exchange for decryption. As data recovery is impossible if the encryption key is not obtained, some companies suffer considerable damage, such as the payment of huge amounts of money or the loss of important data. In this paper, we analyzed Hive ransomware, which appeared in June 2021. Hive ransomware has caused immense harm, leading the FBI to issue an alert about it. To minimize the damage caused by Hive ransomware and to help victims recover their files, we analyzed Hive ransomware and studied recovery methods. By analyzing the encryption process of Hive ransomware, we confirmed that vulnerabilities exist in its custom encryption algorithm. We partially recovered the master key used for generating the file encryption keys, enabling the decryption of data encrypted by Hive ransomware. We recovered 95% of the master key without the attacker’s RSA private key and decrypted the actual infected data. To the best of our knowledge, this is the first successful attempt at decrypting Hive ransomware. It is expected that our method can be used to reduce the damage caused by Hive ransomware.

Here’s the flaw:

The cryptographic vulnerability identified by the researchers concerns the mechanism by which the master keys are generated and stored: the ransomware encrypts only select portions of each file, rather than the entire contents, using two keystreams derived from the master key.

The encryption keystream, which is created from an XOR operation of the two keystreams, is then XORed with the data in alternate blocks to generate the encrypted file. But this technique also makes it possible to guess the keystreams and restore the master key, in turn enabling the decryption of encrypted files without the attacker’s private key.

The researchers said that they were able to weaponize the flaw to devise a method to reliably recover more than 95% of the keys employed during encryption.
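A toy model makes the flaw concrete: when keystream bytes cut from a single master key are reused, known plaintext in one place hands the attacker keystream that decrypts another. This sketch is heavily simplified from the paper (Hive’s real master key is megabytes long, with per-file offsets), but the algebra is the same:

    import os

    MASTER = os.urandom(64)            # toy stand-in for Hive's master key

    def keystream(offset, length):
        # Cut a keystream out of the master key starting at a given offset.
        return bytes(MASTER[(offset + i) % len(MASTER)] for i in range(length))

    def xor(a, b):
        return bytes(x ^ y for x, y in zip(a, b))

    def encrypt(data, off1, off2):
        # Hive-style: data XORed against the XOR of two master-key streams.
        ks = xor(keystream(off1, len(data)), keystream(off2, len(data)))
        return xor(data, ks)

    # Known plaintext in one file (say, a standard header) reveals keystream:
    known_pt = b"%PDF-1.7 known header bytes....."
    recovered_ks = xor(encrypt(known_pt, 5, 17), known_pt)

    # ...which then decrypts other data encrypted at the same offsets.
    ct = encrypt(b"secret business records.........", 5, 17)
    print(xor(ct, recovered_ks))       # b'secret business records.........'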

More Military Cryptanalytics, Part III

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2021/08/more-military-cryptanalytics-part-iii.html

Late last year, the NSA declassified and released a redacted version of Lambros D. Callimahos’s Military Cryptanalytics, Part III. We just got most of the index. It’s hard to believe that there are any real secrets left in this 44-year-old volume.

No, RSA Is Not Broken

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2021/03/no-rsa-is-not-broken.html

I have been seeing this paper by cryptographer Claus Peter Schnorr making the rounds: “Fast Factoring Integers by SVP Algorithms.” It describes a new factoring method, and its abstract ends with the provocative sentence: “This destroys the RSA cryptosystem.”

It does not. At best, it’s an improvement in factoring — and I’m not sure it’s even that. The paper is a preprint: it hasn’t been peer reviewed. Be careful taking its claims at face value.

Some discussion here.

I’ll append more analysis links to this post when I find them.

Military Cryptanalytics, Part III

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2021/01/military-cryptanalytics-part-iii.html

The NSA has just declassified and released a redacted version of Military Cryptanalytics, Part III, by Lambros D. Callimahos, October 1977.

Parts I and II, by Lambros D. Callimahos and William F. Friedman, were released decades ago — I believe repeatedly, in increasingly unredacted form — and published by the late Wayne Griswold Barker’s Aegean Park Press. I own them in hardcover.

Like Parts I and II, Part III is primarily concerned with pre-computer ciphers. At this point, the document only has historical interest. If there is any lesson for today, it’s that modern cryptanalysis is possible primarily because people make mistakes.

The monograph took a while to become public. The cover page says that the initial FOIA request was made in July 2012: eight and a half years ago.

And there are more books to come. Page 1 starts off:

This text constitutes the third of six basic texts on the science of cryptanalytics. The first two texts together have covered most of the necessary fundamentals of cryptanalytics; this and the remaining three texts will be devoted to more specialized and more advanced aspects of the science.

Presumably, volumes IV, V, and VI are still hidden inside the classified libraries of the NSA.

And from page ii:

Chapters IV-XI are revisions of seven of my monographs in the NSA Technical Literature Series, viz: Monograph No. 19, “The Cryptanalysis of Ciphertext and Plaintext Autokey Systems”; Monograph No. 20, “The Analysis of Systems Employing Long or Continuous Keys”; Monograph No. 21, “The Analysis of Cylindrical Cipher Devices and Strip Cipher Systems”; Monograph No. 22, “The Analysis of Systems Employing Geared Disk Cryptomechanisms”; Monograph No. 23, “Fundamentals of Key Analysis”; Monograph No. 15, “An Introduction to Teleprinter Key Analysis”; and Monograph No. 18, “Ars Conjectandi: The Fundamentals of Cryptodiagnosis.”

This points to a whole series of still-classified monographs whose titles we do not even know.

EDITED TO ADD: I have been informed by a reliable source that Parts 4 through 6 were never completed. There may be fragments and notes, but no finished works.