All posts by Bruce Schneier

Crown Sterling Claims to Factor RSA Keylengths First Factored Twenty Years Ago

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2019/09/crown_sterling_.html

Earlier this month I made fun of a company called Crown Sterling, for…for…for being a company that deserves being made fun of.

This morning, the company announced that they “decrypted two 256-bit asymmetric public keys in approximately 50 seconds from a standard laptop computer.” Really. They did. This keylength is so small it has never been considered secure. It was too small to be part of the RSA Factoring Challenge when it was introduced in 1991. In 1977, when Ron Rivest, Adi Shamir, and Len Adleman first described RSA, they included a challenge with a 426-bit key. (It was factored in 1994.)

The press release goes on: “Crown Sterling also announced the consistent decryption of 512-bit asymmetric public key in as little as five hours also using standard computing.” They didn’t demonstrate it, but if they’re right they’ve matched a factoring record set in 1999. Five hours is significantly less than the 5.2 months it took in 1999, but slower than would be expected if Crown Sterling just used the 1999 techniques with modern CPUs and networks.
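
Some context on what factoring at these sizes involves. Below is a minimal sketch, assuming Python with sympy installed, that generates a toy RSA-style modulus and factors it; the sizes are deliberately tiny so generic algorithms finish instantly. This is not Crown Sterling’s method or the 1999 NFS computation; genuinely factoring a 256-bit modulus takes dedicated tools such as CADO-NFS or YAFU, which still manage it in minutes on an ordinary laptop.

```python
# A toy illustration, assuming sympy is installed; sizes are deliberately
# tiny. A real 256-bit modulus has two ~128-bit prime factors, which is
# impractical for sympy's generic factorint but well within reach of
# dedicated NFS/ECM tools (CADO-NFS, YAFU, msieve) on commodity hardware.
from sympy import randprime, factorint

p = randprime(2**31, 2**32)  # ~32-bit prime
q = randprime(2**31, 2**32)  # ~32-bit prime
n = p * q                    # ~64-bit toy "RSA" modulus

print(f"n = {n}")
print(f"recovered factors: {factorint(n)}")  # {p: 1, q: 1}, near-instant
```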

Is anyone taking this company seriously anymore? I honestly wouldn’t be surprised if this was a hoax press release. It’s not currently on the company’s website. (And, if it is a hoax, I apologize to Crown Sterling. I’ll post a retraction as soon as I hear from you.)

EDITED TO ADD: First, the press release is real. And second, I forgot to include the quote from CEO Robert Grant: “Today’s decryptions demonstrate the vulnerabilities associated with the current encryption paradigm. We have clearly demonstrated the problem which also extends to larger keys.”

People, this isn’t hard. Find an RSA Factoring Challenge number that hasn’t been factored yet and factor it. Once you do, the entire world will take you seriously. Until you do, no one will. And, bonus, you won’t have to reveal your super-secret world-destabilizing cryptanalytic techniques.

EDITED TO ADD (9/21): Others are laughing at this, too.

I’m Looking to Hire a Strategist to Help Figure Out Public-Interest Tech

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2019/09/im_looking_to_h.html

I am in search of a strategic thought partner: a person who can work closely with me over the next 9 to 12 months in assessing what’s needed to advance the practice, integration, and adoption of public-interest technology.

All of the details are in the RFP. The selected strategist will work closely with me on a number of clear deliverables. This is a contract position that could possibly become a salaried position in a subsequent phase, and under a different agreement.

I’m working with the team at Yancey Consulting, who will follow up with all proposers and manage the process. Please email Lisa Yancey at [email protected].

Cracking Forgotten Passwords

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2019/09/cracking_forgot.html

Expandpass is a string expansion program. It’s “useful for cracking passwords you kinda-remember.” You tell the program what you remember about the password and it tries related passwords.
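
The idea is easy to sketch. What follows is not expandpass’s actual input grammar, just a hypothetical Python illustration of string expansion: enumerate every combination of the fragments you remember.

```python
# Conceptual sketch of string expansion -- not expandpass's real syntax.
# You remember the pieces of the password but not the exact combination.
from itertools import product

stems = ["hunter", "Hunter"]     # the word, capitalization uncertain
years = ["", "2", "02", "2002"]  # maybe a year was appended
tails = ["", "!", "!!"]          # maybe some trailing punctuation

for parts in product(stems, years, tails):
    print("".join(parts))        # candidates to try against the wallet/file
```

The real tool’s grammar is richer than this, but the core enumeration idea is the same.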

I learned about it in this article about Phil Dougherty, who helps people recover lost cryptocurrency passwords (mostly Ethereum) for a cut of the recovered value.

Another Side Channel in Intel Chips

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2019/09/another_side_ch.html

Not that serious, but interesting:

In late 2011, Intel introduced a performance enhancement to its line of server processors that allowed network cards and other peripherals to connect directly to a CPU’s last-level cache, rather than following the standard (and significantly longer) path through the server’s main memory. By avoiding system memory, Intel’s DDIO, short for Data-Direct I/O, increased input/output bandwidth and reduced latency and power consumption.

Now, researchers are warning that, in certain scenarios, attackers can abuse DDIO to obtain keystrokes and possibly other types of sensitive data that flow through the memory of vulnerable servers. The most serious form of attack can take place in data centers and cloud environments that have both DDIO and remote direct memory access enabled to allow servers to exchange data. A server leased by a malicious hacker could abuse the vulnerability to attack other customers. To prove their point, the researchers devised an attack that allows a server to steal keystrokes typed into the protected SSH (secure shell) session established between another server and an application server.
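
How keystrokes fall out of a cache side channel deserves a word. An interactive SSH session sends a network packet per keypress, so an attacker who can observe when packets hit the server’s cache learns inter-keystroke timings, and those timings correlate with which keys were typed. Here’s a conceptual Python sketch with hypothetical numbers and a made-up threshold, nothing like the researchers’ actual code:

```python
# Hypothetical sketch: inferring typing patterns from packet arrival times.
# Interactive SSH sends one packet per keypress, so arrival times leak
# inter-keystroke intervals. Timings and the threshold below are invented.
arrivals = [0.000, 0.092, 0.301, 0.377, 0.589]  # seconds, hypothetical

deltas = [b - a for a, b in zip(arrivals, arrivals[1:])]
for d in deltas:
    # A real attack trains a statistical model (e.g., an HMM) on typing
    # data to map each interval to likely key pairs; this is a caricature.
    guess = "likely adjacent/same-hand keys" if d < 0.12 else "likely far-apart keys"
    print(f"{d * 1000:5.0f} ms -> {guess}")
```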

Upcoming Speaking Engagements

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2019/09/upcoming_speaki_8.html

This is a current list of where and when I am scheduled to speak:

The list is maintained on this page.

When Biology Becomes Software

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2019/09/when_biology_be.html

All of life is based on the coordinated action of genetic parts (genes and their controlling sequences) found in the genomes (the complete DNA sequence) of organisms.

Genes and genomes are based on code, just like the digital language of computers. But instead of zeros and ones, four DNA letters — A, C, T, G — encode all of life. (Life is messy, and there are actually all sorts of edge cases, but ignore that for now.) If you have the sequence that encodes an organism, in theory, you could recreate it. If you can write new working code, you can alter an existing organism or create a novel one.
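
A toy example makes the analogy concrete. The sketch below, using a deliberately tiny and incomplete codon table, reads DNA three letters at a time and translates it into amino acids, much as an interpreter reads opcodes; real translation involves transcription to RNA, reading frames, and plenty of messy biology this ignores.

```python
# Toy "DNA interpreter": a tiny, incomplete codon table for illustration.
# Real biology: DNA is transcribed to mRNA, ribosomes read codons, etc.
CODONS = {"ATG": "Met", "GCT": "Ala", "TGG": "Trp", "TAA": "STOP"}

def translate(dna: str) -> str:
    amino_acids = []
    for i in range(0, len(dna) - 2, 3):   # read three letters at a time
        aa = CODONS.get(dna[i:i + 3], "?")
        if aa == "STOP":                  # stop codon ends the "program"
            break
        amino_acids.append(aa)
    return "-".join(amino_acids)

print(translate("ATGGCTTGGTAA"))  # Met-Ala-Trp
```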

If this sounds to you a lot like software coding, you’re right. As synthetic biology looks more like computer technology, the risks of the latter become the risks of the former. Code is code, but because we’re dealing with molecules — and sometimes actual forms of life — the risks can be much greater.

Imagine a biological engineer trying to increase the expression of a gene that maintains normal gene function in blood cells. Even though it’s a relatively simple operation by today’s standards, it’ll almost certainly take multiple tries to get it right. Were this computer code, the only damage those failed tries would do is to crash the computer they’re running on. With a biological system, the code could instead increase the likelihood of multiple types of leukemias and wipe out cells important to the patient’s immune system.

We have known the mechanics of DNA for some 60-plus years. The field of modern biotechnology began in 1972, when Paul Berg joined one virus gene to another and produced the first “recombinant” virus. Synthetic biology arose in the early 2000s, when biologists adopted the mindset of engineers; instead of moving single genes around, they designed complex genetic circuits.

In 2010 Craig Venter and his colleagues recreated the genome of a simple bacterium. More recently, researchers at the Medical Research Council Laboratory of Molecular Biology in Britain created a new, more streamlined version of E. coli. In both cases the researchers created what could arguably be called new forms of life.

This is the new bioengineering, and it will only get more powerful. Today you can write DNA code in the same way a computer programmer writes computer code. Then you can use a DNA synthesizer or order DNA from a commercial vendor, and then use precision editing tools such as CRISPR to “run” it in an already existing organism, from a virus to a wheat plant to a person.

In the future, it may be possible to build an entire complex organism such as a dog or cat, or recreate an extinct mammoth (currently underway). Today, biotech companies are developing new gene therapies, and international consortia are addressing the feasibility and ethics of making changes to human genomes that could be passed down to succeeding generations.

Within the biological science community, urgent conversations are occurring about “cyberbiosecurity,” an admittedly contested term for the space where biological and information systems meet and vulnerabilities in one can affect the other. These can include the security of DNA databanks, the fidelity of transmission of those data, and information hazards associated with specific DNA sequences that could encode novel pathogens for which no cures exist.

These risks have occupied not only learned bodies — the National Academies of Sciences, Engineering, and Medicine published at least a half dozen reports on biosecurity risks and how to address them proactively — but have made it to mainstream media: genome editing was a major plot element in Netflix’s Season 3 of “Designated Survivor.”

Our worries are more prosaic. As synthetic biology “programming” reaches the complexity of traditional computer programming, the risks of computer systems will transfer to biological systems. The difference is that biological systems have the potential to cause much greater, and far more lasting, damage than computer systems.

Programmers write software through trial and error. Because computer systems are so complex and there is no real theory of software, programmers repeatedly test the code they write until it works properly. This makes sense, because the cost of getting it wrong is low and trying again is easy. There are even jokes about this: a programmer would diagnose a car crash by putting another car in the same situation and seeing if it happens again.

Even finished code still has problems. Again due to the complexity of modern software systems, “works properly” doesn’t mean that it’s perfectly correct. Modern software is full of bugs — thousands of software flaws — that occasionally affect performance or security. That’s why any piece of software you use is regularly updated; the developers are still fixing bugs, even after the software is released.

Bioengineering will be largely the same: writing biological code will have these same reliability properties. Unfortunately, the software solution of making lots of mistakes and fixing them as you go doesn’t work in biology.

In nature, a similar type of trial and error is handled by “the survival of the fittest” and occurs slowly over many generations. But human-generated code from scratch doesn’t have that kind of correction mechanism. Inadvertent or intentional release of these newly coded “programs” may result in pathogens of expanded host range (just think swine flu) or organisms that wreck delicate ecological balances.

Unlike computer software, there’s no way so far to “patch” biological systems once released to the wild, although researchers are trying to develop one. Nor are there ways to “patch” the humans (or animals or crops) susceptible to such agents. Stringent biocontainment helps, but no containment system provides zero risk.

Opportunities for mischief and malfeasance often occur when expertise is siloed, when fields intersect only at the margins, and when the gathered knowledge of small, expert groups doesn’t make its way into the larger body of practitioners who have important contributions to make.

Good starts have been made by biologists, security agencies, and governance experts. But these efforts have tended to be siloed: confined to either the biological or the digital sphere of influence, classified and solely within the military, or exchanged only among a very small set of investigators.

What we need is more opportunities for integration between the two disciplines. We need to share information and experiences, classified and unclassified. We have tools among our digital and biological communities to identify and mitigate biological risks, and tools to write and deploy secure computer systems.

Those opportunities will not occur without effort or financial support. Let’s find those resources, public, private, philanthropic, or any combination. And then let’s use those resources to set up some novel opportunities for digital geeks and bionerds — as well as ethicists and policymakers — to share experiences and concerns, and to come up with creative, constructive solutions to these problems that are more than just patches.

These are overarching problems; let’s not let siloed thinking or funding get in the way of breaking down barriers between communities. And let’s not let technology of any kind get in the way of the public good.

This essay previously appeared on CNN.com.

Fabricated Voice Used in Financial Fraud

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2019/09/fabricated_voic.html

This seems to be an identity theft first:

Criminals used artificial intelligence-based software to impersonate a chief executive’s voice and demand a fraudulent transfer of €220,000 ($243,000) in March in what cybercrime experts described as an unusual case of artificial intelligence being used in hacking.

Another news article.

More on Law Enforcement Backdoor Demands

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2019/09/more_on_law_enf.html

The Carnegie Endowment for International Peace and Princeton University’s Center for Information Technology Policy convened an Encryption Working Group to attempt progress on the “going dark” debate. They have released their report: “Moving the Encryption Policy Conversation Forward.”

The main contribution seems to be that attempts to backdoor devices like smartphones shouldn’t also backdoor communications systems:

Conclusion: There will be no single approach for requests for lawful access that can be applied to every technology or means of communication. More work is necessary, such as that initiated in this paper, to separate the debate into its component parts, examine risks and benefits in greater granularity, and seek better data to inform the debate. Based on our attempt to do this for one particular area, the working group believes that some forms of access to encrypted information, such as access to data at rest on mobile phones, should be further discussed. If we cannot have a constructive dialogue in that easiest of cases, then there is likely none to be had with respect to any of the other areas. Other forms of access to encrypted information, including encrypted data-in-motion, may not offer an achievable balance of risk vs. benefit, and as such are not worth pursuing and should not be the subject of policy changes, at least for now. We believe that to be productive, any approach must separate the issue into its component parts.

I don’t believe that backdoor access to encrypted data at rest offers “an achievable balance of risk vs. benefit” either, but I agree that the two aspects should be treated independently.

EDITED TO ADD (9/12): This report does an important job moving the debate forward. It advises that policymakers break the issues into component parts. Instead of talking about restricting all encryption, it separates encrypted data at rest (storage) from encrypted data in motion (communication). It advises that policymakers pick the problems they have some chance of solving, and not demand systems that put everyone in danger (for example: no key escrow, and no use of software updates to break into devices).

Data in motion poses challenges that are not present for data at rest. For example, modern cryptographic protocols for data in motion use a separate “session key” for each message, unrelated to the private/public key pairs used to initiate communication, to preserve the message’s secrecy independent of other messages (consistent with a concept known as “forward secrecy”). While there are potential techniques for recording, escrowing, or otherwise allowing access to these session keys, by their nature, each would break forward secrecy and related concepts and would create a massive target for criminal and foreign intelligence adversaries. Any technical steps to simplify the collection or tracking of session keys, such as linking keys to other keys or storing keys after they are used, would represent a fundamental weakening of all the communications.
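
For readers who want the session-key point made concrete: here is a minimal sketch using the Python cryptography package, an illustration of the general idea rather than any specific protocol. Each session runs a fresh ephemeral key exchange and derives a symmetric key from it; once the ephemeral keys are discarded, even a later compromise of long-term keys can’t reconstruct past session keys, which is exactly the property that escrowing or linking session keys would destroy.

```python
# Minimal forward-secrecy sketch (requires the 'cryptography' package).
# Illustrates the concept only; real protocols (e.g., TLS 1.3) are richer.
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric.x25519 import X25519PrivateKey
from cryptography.hazmat.primitives.kdf.hkdf import HKDF

def new_session_key() -> bytes:
    alice = X25519PrivateKey.generate()   # ephemeral: used once, discarded
    bob = X25519PrivateKey.generate()
    shared = alice.exchange(bob.public_key())
    # Derive the symmetric session key; the ephemeral private keys can now
    # be forgotten, leaving nothing to subpoena or steal later.
    return HKDF(algorithm=hashes.SHA256(), length=32,
                salt=None, info=b"session").derive(shared)

# Two sessions yield two unrelated keys. Recording, escrowing, or linking
# them -- as some lawful-access proposals would require -- breaks this.
print(new_session_key().hex())
print(new_session_key().hex())
```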

These are all big steps forward given who signed on to the report. Not just the usual suspects, but also Jim Baker, former general counsel of the FBI, and Chris Inglis, former deputy director of the NSA.

On Cybersecurity Insurance

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2019/09/on_cybersecurit.html

Good paper on cybersecurity insurance: both the history and the promise for the future. From the conclusion:

Policy makers have long held high hopes for cyber insurance as a tool for improving security. Unfortunately, the available evidence so far should give policymakers pause. Cyber insurance appears to be a weak form of governance at present. Insurers writing cyber insurance focus more on organisational procedures than technical controls, rarely include basic security procedures in contracts, and offer discounts that only offer a marginal incentive to invest in security. However, the cost of external response services is covered, which suggests insurers believe ex-post responses to be more effective than ex-ante mitigation. (Alternatively, they can more easily translate the costs associated with ex-post responses into manageable claims.)

The private governance role of cyber insurance is limited by market dynamics. Competitive pressures drive a race-to-the-bottom in risk assessment standards and prevent insurers including security procedures in contracts. Policy interventions, such as minimum risk assessment standards, could solve this collective action problem. Policy-holders and brokers could also drive this change by looking to insurers who conduct rigorous assessments. Doing otherwise ensures adverse selection and moral hazard will increase costs for firms with responsible security postures. Moving toward standardised risk assessment via proposal forms or external scans supports the actuarial base in the long-term. But there is a danger policyholders will succumb to Goodhart’s law by internalising these metrics and optimising the metric rather than minimising risk. This is particularly likely given these assessments are constructed by private actors with their own incentives. Search-light effects may drive the scores towards being based on what can be measured, not what is important.

EDITED TO ADD (9/11): BoingBoing post.

The Doghouse: Crown Sterling

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2019/09/the_doghouse_cr_1.html

A decade ago, the Doghouse was a regular feature in both my email newsletter Crypto-Gram and my blog. In it, I would call out particularly egregious — and amusing — examples of cryptographic “snake oil.”

I dropped it both because it stopped being fun and because almost everyone converged on standard cryptographic libraries, which meant standard non-snake-oil cryptography. But every so often, a new company comes along that is so ridiculous, so nonsensical, so bizarre, that there is nothing to do but call it out.

Crown Sterling is complete and utter snake oil. The company sells “TIME AI,” “the world’s first dynamic ‘non-factor’ based quantum AI encryption software,” “utilizing multi-dimensional encryption technology, including time, music’s infinite variability, artificial intelligence, and most notably mathematical constancies to generate entangled key pairs.” Those sentence fragments tick three of my snake-oil warning signs — from 1999! — right there: pseudo-math gobbledygook (warning sign #1), new mathematics (warning sign #2), and extreme cluelessness (warning sign #4).

More: “In March of 2019, Grant identified the first Infinite Prime Number prediction pattern, where the discovery was published on Cornell University’s www.arXiv.org titled: ‘Accurate and Infinite Prime Number Prediction from Novel Quasi-Prime Analytical Methodology.’ The paper was co-authored by Physicist and Number Theorist Talal Ghannam PhD. The discovery challenges today’s current encryption framework by enabling the accurate prediction of prime numbers.” Note the attempt to leverage Cornell’s reputation, even though the preprint server is not peer-reviewed and allows anyone to upload anything. (That should be another warning sign: undeserved appeals to authority.) PhD student Mark Carney took the time to refute it. Most of it is wrong, and what’s right isn’t new.

I first encountered the company earlier this year. In January, Tom Yemington from the company emailed me, asking to talk. “The founder and CEO, Robert Grant is a successful healthcare CEO and amateur mathematician that has discovered a method for cracking asymmetric encryption methods that are based on the difficulty of finding the prime factors of a large quasi-prime numbers. Thankfully the newly discovered math also provides us with much a stronger approach to encryption based on entangled-pairs of keys.” Sounds like complete snake-oil, right? I responded as I usually do when companies contact me, which is to tell them that I’m too busy.

In April, a colleague at IBM suggested I talk with the company. I poked around at the website, and sent back: “That screams ‘snake oil.’ Bet you a gazillion dollars they have absolutely nothing of value — and that none of their tech people have any cryptography expertise.” But I thought this might be an amusing conversation to have. I wrote back to Yemington. I never heard back — LinkedIn suggests he left in April — and forgot about the company completely until it surfaced at Black Hat this year.

Robert Grant, president of Crown Sterling, gave a sponsored talk: “The 2019 Discovery of Quasi-Prime Numbers: What Does This Mean For Encryption?” I didn’t see it, but it was widely criticized and heckled. Black Hat was so embarrassed that it removed the presentation from the conference website. (Parts of it remain on the Internet. Here’s a short video from the company, if you want to laugh along with everyone else at terms like “infinite wave conjugations” and “quantum AI encryption.” Or you can read the company’s press release about what happened at Black Hat, or Grant’s Twitter feed.)

Grant has no cryptographic credentials. His bio — on the website of something called the “Resonance Science Foundation” — is all over the place: “He holds several patents in the fields of photonics, electromagnetism, genetic combinatorics, DNA and phenotypic expression, and cybernetic implant technologies. Mr. Grant published and confirmed the existence of quasi-prime numbers (a new classification of prime numbers) and their infinite pattern inherent to icositetragonal geometry.”

Grant’s bio on the Crown Sterling website contains this sentence, absolutely beautiful in its nonsensical use of mathematical terms: “He has multiple publications in unified mathematics and physics related to his discoveries of quasi-prime numbers (a new classification for prime numbers), the world’s first predictive algorithm determining infinite prime numbers, and a unification wave-based theory connecting and correlating fundamental mathematical constants such as Pi, Euler, Alpha, Gamma and Phi.” (Quasi-primes are real, and they’re not new. They’re numbers with only large prime factors, like RSA moduli.)

Near as I can tell, Grant’s coauthor is the mathematician of the company: “Talal Ghannam — a physicist who has self-published a book called The Mystery of Numbers: Revealed through their Digital Root as well as a comic book called The Chronicles of Maroof the Knight: The Byzantine.” Nothing about cryptography.

There seems to be another technical person. Ars Technica writes: “Alan Green (who, according to the Resonance Foundation website, is a research team member and adjunct faculty for the Resonance Academy) is a consultant to the Crown Sterling team, according to a company spokesperson. Until earlier this month, Green — a musician who was ‘musical director for Davy Jones of The Monkees’ — was listed on the Crown Sterling website as Director of Cryptography. Green has written books and a musical about hidden codes in the sonnets of William Shakespeare.”

None of these people have demonstrated any cryptographic credentials. No papers, no research, no nothing. (And, no, self-publishing doesn’t count.)

After the Black Hat talk, Grant — and maybe some of those others — sat down with Ars Technica and spun more snake oil. They claimed that the patterns they found in prime numbers allow them to break RSA. They’re not publishing their results “because Crown Sterling’s team felt it would be irresponsible to disclose discoveries that would break encryption.” (Snake-oil warning sign #7: unsubstantiated claims.) They also claim to have “some very, very strong advisors to the company” who are “experts in the field of cryptography, truly experts.” The only one they name is Larry Ponemon, who is a privacy researcher and not a cryptographer at all.

Enough of this. All of us can create ciphers that we cannot break ourselves, which means that amateur cryptographers regularly produce amateur cryptography. These guys are amateurs. Their math is amateurish. Their claims are nonsensical. Run away. Run far, far away.

But be careful how loudly you laugh when you do. Not only is the company ridiculous, it’s litigious as well. It has sued ten unnamed “John Doe” defendants for booing the Black Hat talk. (It also sued Black Hat, which may have more merit. The company paid $115K to have its talk presented amongst actual peer-reviewed talks. For Black Hat to remove its nonsense may very well be a breach of contract.)

Maybe Crown Sterling can file a meritless lawsuit against me instead for this post. I’m sure it would think it’d result in all sorts of positive press coverage. (Although any press is good press, so maybe it’s right.) But if I can prevent others from getting taken in by this stuff, it would be a good thing.

Massive iPhone Hack Targets Uyghurs

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2019/09/massive_iphone_.html

China is being blamed for a massive surveillance operation that targeted Uyghur Muslims. This story broke in waves, the first wave being about the iPhone.

Earlier this year, Google’s Project Zero found a series of websites that had been using zero-day vulnerabilities to indiscriminately install malware on any iPhone that visited them. (The vulnerabilities were patched in iOS 12.1.4, released on February 7.)

Earlier this year Google’s Threat Analysis Group (TAG) discovered a small collection of hacked websites. The hacked sites were being used in indiscriminate watering hole attacks against their visitors, using iPhone 0-day.

There was no target discrimination; simply visiting the hacked site was enough for the exploit server to attack your device, and if it was successful, install a monitoring implant. We estimate that these sites receive thousands of visitors per week.

TAG was able to collect five separate, complete and unique iPhone exploit chains, covering almost every version from iOS 10 through to the latest version of iOS 12. This indicated a group making a sustained effort to hack the users of iPhones in certain communities over a period of at least two years.

Four more news stories.

This upends pretty much everything we know about iPhone hacking. We believed that it was hard. We believed that effective zero-day exploits cost $2M or $3M, and were used sparingly by governments only against high-value targets. We believed that if an exploit was used too frequently, it would be quickly discovered and patched.

None of that is true here. This operation used fourteen zero-day exploits. It used them indiscriminately. And it remained undetected for two years. (I waited before posting this because I wanted to see if someone would rebut this story, or explain it somehow.)

Google’s announcement left out details, like the URLs of the sites delivering the malware. That omission meant that we had no idea who was behind the attack, although the speculation was that it was a nation-state.

Subsequent reporting added that malware targeting Android phones and the Windows operating system was also delivered by those websites. And then that the websites were targeted at Uyghurs. Which leads us all to blame China.

So now this is a story of a large, expensive, indiscriminate, Chinese-run surveillance operation against an ethnic minority in their country. And the politics will overshadow the tech. But the tech is still really impressive.

EDITED TO ADD: New data on the value of smartphone exploits:

According to the company, starting today, a zero-click (no user interaction) exploit chain for Android can get hackers and security researchers up to $2.5 million in rewards. A similar exploit chain impacting iOS is worth only $2 million.

EDITED TO ADD (9/6): Apple disputes some of the claims Google made about the extent of the vulnerabilities and the attack.

EDITED TO ADD (9/7): More on Apple’s pushbacks.