Tag Archives: academicpapers

TPM-Fail Attacks Against Cryptographic Coprocessors

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2019/11/tpm-fail_attack.html

Really interesting research: TPM-FAIL: TPM meets Timing and Lattice Attacks, by Daniel Moghimi, Berk Sunar, Thomas Eisenbarth, and Nadia Heninger.

Abstract: Trusted Platform Module (TPM) serves as a hardware-based root of trust that protects cryptographic keys from privileged system and physical adversaries. In this work, we perform a black-box timing analysis of TPM 2.0 devices deployed on commodity computers. Our analysis reveals that some of these devices feature secret-dependent execution times during signature generation based on elliptic curves. In particular, we discovered timing leakage on an Intel firmware-based TPM as well as a hardware TPM. We show how this information allows an attacker to apply lattice techniques to recover 256-bit private keys for ECDSA and ECSchnorr signatures. On Intel fTPM, our key recovery succeeds after about 1,300 observations and in less than two minutes. Similarly, we extract the private ECDSA key from a hardware TPM manufactured by STMicroelectronics, which is certified at Common Criteria (CC) EAL 4+, after fewer than 40,000 observations. We further highlight the impact of these vulnerabilities by demonstrating a remote attack against a StrongSwan IPsec VPN that uses a TPM to generate the digital signatures for authentication. In this attack, the remote client recovers the server’s private authentication key by timing only 45,000 authentication handshakes via a network connection.

The vulnerabilities we have uncovered emphasize the difficulty of correctly implementing known constant-time techniques, and show the importance of evolutionary testing and transparent evaluation of cryptographic implementations. Even certified devices that claim resistance against attacks require additional scrutiny by the community and industry, as we learn more about these attacks.

These are real attacks, and take between 4 and 20 minutes to extract the key. Intel has a firmware update.
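
The first stage of an attack like this is nothing more exotic than a measurement harness: request many signatures over the same message, record how long each one takes, and keep the fastest ones, which on the vulnerable devices are biased toward nonces with leading zero bits. Here is a minimal Python sketch of that harness (not the authors’ tooling); sign_with_tpm is a hypothetical placeholder for whatever interface reaches the TPM, and the lattice step that actually recovers the key is described in the paper.

```python
import time

def sign_with_tpm(message: bytes) -> bytes:
    """Placeholder for a real TPM ECDSA signing call (e.g. via tpm2-tools
    or a PKCS#11 provider backed by the TPM)."""
    raise NotImplementedError

def collect_timed_signatures(message: bytes, n: int = 40_000):
    """Time n signing operations over the same message."""
    samples = []
    for _ in range(n):
        t0 = time.perf_counter_ns()
        sig = sign_with_tpm(message)
        samples.append((time.perf_counter_ns() - t0, sig))
    return samples

def fastest_fraction(samples, frac: float = 0.01):
    """Keep the fastest slice of signatures; on a leaky TPM these are the
    ones biased toward short nonces, which is what the lattice
    (hidden-number-problem) stage of the attack consumes."""
    samples = sorted(samples, key=lambda s: s[0])
    return [sig for _, sig in samples[: max(1, int(len(samples) * frac))]]
```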

Attack website. News articles. Boing Boing post. Slashdot thread.

Friday Squid Blogging: Triassic Kraken

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2019/11/friday_squid_bl_701.html

Research paper: “Triassic Kraken: The Berlin Ichthyosaur Death Assemblage Interpreted as a Giant Cephalopod Midden”:

Abstract: The Luning Formation at Berlin Ichthyosaur State Park, Nevada, hosts a puzzling assemblage of at least 9 huge (≤14 m) juxtaposed ichthyosaurs (Shonisaurus popularis). Shonisaurs were cephalopod eating predators comparable to sperm whales (Physeter). Hypotheses presented to explain the apparent mass mortality at the site have included: tidal flat stranding, sudden burial by slope failure, and phytotoxin poisoning. Citing the wackestone matrix, J. A. Holger argued convincingly for a deeper water setting, but her phytotoxicity hypothesis cannot explain how so many came to rest at virtually the same spot. Skeletal articulation indicates that animals were deposited on the sea floor shortly after death. Currents or other factors placed them in a north south orientation. Adjacent skeletons display different taphonomic histories and degrees of disarticulation, ruling out catastrophic mass death, but allowing a scenario in which dead ichthyosaurs were sequentially transported to a sea floor midden. We hypothesize that the shonisaurs were killed and carried to the site by an enormous Triassic cephalopod, a “kraken,” with estimated length of approximately 30 m, twice that of the modern Colossal Squid Mesonychoteuthis. In this scenario, shonisaurs were ambushed by a Triassic kraken, drowned, and dumped on a midden like that of a modern octopus. Where vertebrae in the assemblage are disarticulated, disks are arranged in curious linear patterns with almost geometric regularity. Close fitting due to spinal ligament contraction is disproved by the juxtaposition of different-sized vertebrae from different parts of the vertebral column. The proposed Triassic kraken, which could have been the most intelligent invertebrate ever, arranged the vertebral discs in biserial patterns, with individual pieces nesting in a fitted fashion as if they were part of a puzzle. The arranged vertebrae resemble the pattern of sucker discs on a cephalopod tentacle, with each amphicoelous vertebra strongly resembling a coleoid sucker. Thus the tessellated vertebral disc pavement may represent the earliest known self portrait. The submarine contest between cephalopods and seagoing tetrapods has a long history. A Triassic kraken would have posed a deadly risk for shonisaurs as they dove in pursuit of their smaller cephalopod prey.

As usual, you can also use this squid post to talk about the security stories in the news that I haven’t covered.

Read my blog posting guidelines here.

Mapping Security and Privacy Research across the Decades

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2019/10/mapping_securit.html

This is really interesting: “A Data-Driven Reflection on 36 Years of Security and Privacy Research,” by Aniqua Baset and Tamara Denning:

Abstract: Meta-research—research about research—allows us, as a community, to examine trends in our research and make informed decisions regarding the course of our future research activities. Additionally, overviews of past research are particularly useful for researchers or conferences new to the field. In this work we use topic modeling to identify topics within the field of security and privacy research using the publications of the IEEE Symposium on Security & Privacy (1980-2015), the ACM Conference on Computer and Communications Security (1993-2015), the USENIX Security Symposium (1993-2015), and the Network and Distributed System Security Symposium (1997-2015). We analyze and present data via the perspective of topic trends and authorship. We believe our work serves to contextualize the academic field of computer security and privacy research via one of the first data-driven analyses. An interactive visualization of the topics and corresponding publications is available at https://secprivmeta.net.
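
For readers who want to experiment with the same idea, here is a minimal sketch of topic modeling over a corpus of abstracts using scikit-learn’s LDA implementation. The input file, the number of topics, and the preprocessing are placeholders; the authors describe their actual pipeline and parameters in the paper.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

# One abstract per line; "abstracts.txt" is a hypothetical input file.
abstracts = open("abstracts.txt", encoding="utf-8").read().splitlines()

vec = CountVectorizer(stop_words="english", max_df=0.5, min_df=5)
X = vec.fit_transform(abstracts)

lda = LatentDirichletAllocation(n_components=30, random_state=0)  # 30 topics is an arbitrary choice
doc_topics = lda.fit_transform(X)

# Print the eight highest-weight words in each topic.
terms = vec.get_feature_names_out()
for k, topic in enumerate(lda.components_):
    top = [terms[i] for i in topic.argsort()[-8:][::-1]]
    print(f"topic {k:2d}: {' '.join(top)}")
```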

I like seeing how our field has morphed over the years.

Using Machine Learning to Detect IP Hijacking

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2019/10/using_machine_l_1.html

This is interesting research:

In a BGP hijack, a malicious actor convinces nearby networks that the best path to reach a specific IP address is through their network. That’s unfortunately not very hard to do, since BGP itself doesn’t have any security procedures for validating that a message is actually coming from the place it says it’s coming from.

[…]

To better pinpoint serial attacks, the group first pulled data from several years’ worth of network operator mailing lists, as well as historical BGP data taken every five minutes from the global routing table. From that, they observed particular qualities of malicious actors and then trained a machine-learning model to automatically identify such behaviors.

The system flagged networks that had several key characteristics, particularly with respect to the nature of the specific blocks of IP addresses they use:

  • Volatile changes in activity: Hijackers’ address blocks seem to disappear much faster than those of legitimate networks. The average duration of a flagged network’s prefix was under 50 days, compared to almost two years for legitimate networks.
  • Multiple address blocks: Serial hijackers tend to advertise many more blocks of IP addresses, also known as “network prefixes.”
  • IP addresses in multiple countries: Most networks don’t have foreign IP addresses. In contrast, the networks that serial hijackers advertised were much more likely to be registered in different countries and continents.
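
The characteristics above translate naturally into a per-network feature table, which is the shape the machine-learning step takes. Here is a hedged sketch of that idea in Python; the column names, the input file, and the choice of a random-forest classifier are illustrative stand-ins rather than the researchers’ actual features or model.

```python
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report

df = pd.read_csv("as_features.csv")   # hypothetical per-network (AS) feature table
features = ["avg_prefix_lifetime_days", "num_prefixes", "num_countries"]
X, y = df[features], df["serial_hijacker"]        # 1 = labeled from operator mailing lists

X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)
clf = RandomForestClassifier(n_estimators=300, class_weight="balanced", random_state=0)
clf.fit(X_tr, y_tr)

print(classification_report(y_te, clf.predict(X_te)))
# Feature importances hint at which behaviors drive the "serial hijacker" flag.
print(dict(zip(features, clf.feature_importances_.round(3))))
```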

Note that this is much more likely to detect criminal attacks than nation-state activities. But it’s still good work.

Academic paper.

Factoring 2048-bit Numbers Using 20 Million Qubits

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2019/10/factoring_2048-.html

This theoretical paper shows how to factor 2048-bit RSA moduli with a 20-million-qubit quantum computer in eight hours. It’s interesting work, but I don’t want to overstate the risk.

We know from Shor’s Algorithm that both factoring and discrete logs are easy to solve on a large, working quantum computer. Both of those are currently beyond our technological abilities. We barely have quantum computers with 50 to 100 qubits. Extending this requires advances not only in the number of qubits we can work with, but in making the system stable enough to read any answers. You’ll hear this called “error rate” or “coherence” — this paper talks about “noise.”
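
It’s worth remembering how small the quantum computer’s role actually is. Shor’s algorithm uses it only to find the period of a^x mod N; everything else is cheap classical arithmetic. The toy sketch below brute-forces that period, which only works for tiny moduli, and that is exactly the point: the classical wrapper is easy, while the period finding is what needs the 20 million qubits for a 2048-bit modulus.

```python
from math import gcd
from random import randrange

def find_order(a, n):
    """Brute-force the multiplicative order of a mod n (the quantum step)."""
    x, r = a % n, 1
    while x != 1:
        x = (x * a) % n
        r += 1
    return r

def shor_classical_part(n):
    while True:
        a = randrange(2, n - 1)
        g = gcd(a, n)
        if g > 1:               # lucky guess already shares a factor
            return g, n // g
        r = find_order(a, n)    # on a real quantum computer: phase estimation / QFT
        if r % 2:
            continue            # need an even order
        y = pow(a, r // 2, n)
        if y == n - 1:
            continue            # a^(r/2) = -1 mod n gives no information
        p = gcd(y - 1, n)
        if 1 < p < n:
            return p, n // p

print(shor_classical_part(3233))   # 3233 = 53 * 61
```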

Advances are hard. At this point, we don’t know if they’re “send a man to the moon” hard or “faster-than-light travel” hard. If I were guessing, I would say they’re the former, but still harder than we can accomplish with our current understanding of physics and technology.

I write about all this generally, and in detail, here. (Short summary: Our work on quantum-resistant algorithms is outpacing our work on quantum computers, so we’ll be fine in the short run. But future theoretical work on quantum computing could easily change what “quantum resistant” means, so it’s possible that public-key cryptography will simply not be possible in the long run. That’s not terrible, though; we have a lot of good scalable secret-key systems that do much the same things.)

More Cryptanalysis of Solitaire

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2019/10/more_cryptanaly.html

In 1999, I invented the Solitaire encryption algorithm, designed to manually encrypt data using a deck of cards. It was written into the plot of Neal Stephenson’s novel Cryptonomicon, and I even wrote an afterword to the book describing the cipher.

I don’t talk about it much, mostly because I made a dumb mistake that resulted in the algorithm not being reversible. Still, for the short message lengths you’re likely to use a manual cipher for, it’s secure and will likely remain so.

Here’s some new cryptanalysis:

Abstract: The Solitaire cipher was designed by Bruce Schneier as a plot point in the novel Cryptonomicon by Neal Stephenson. The cipher is intended to fit the archetype of a modern stream cipher whilst being implementable by hand using a standard deck of cards with two jokers. We find a model for repetitions in the keystream in the stream cipher Solitaire that accounts for the large majority of the repetition bias. Other phenomena merit further investigation. We have proposed modifications to the cipher that would reduce the repetition bias, but at the cost of increasing the complexity of the cipher (probably beyond the goal of allowing manual implementation). We have argued that the state update function is unlikely to lead to cycles significantly shorter than those of a random bijection.
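
The cipher is small enough to simulate if you want to see the bias the authors are modeling. Below is a compact Python sketch of keystream generation (cards valued 1-52 in bridge order, both jokers counted as 53), plus a quick count of how often consecutive outputs repeat. Treat it as an illustration rather than a reference implementation; key setup from a passphrase is omitted, and the full specification on my website remains authoritative.

```python
import random

A, B = 53, 54          # the two jokers; both count as 53 when used as numbers

def value(card):
    return 53 if card >= 53 else card

def move_joker(deck, joker, steps):
    i = deck.index(joker)
    deck.pop(i)
    j = i + steps
    if j > len(deck):              # wrapped past the bottom of the deck
        j -= len(deck)
    deck.insert(j, joker)

def solitaire_step(deck):
    move_joker(deck, A, 1)
    move_joker(deck, B, 2)
    # Triple cut: swap everything above the first joker with everything below the second.
    top, bot = sorted((deck.index(A), deck.index(B)))
    deck[:] = deck[bot + 1:] + deck[top:bot + 1] + deck[:top]
    # Count cut on the bottom card's value; the bottom card stays at the bottom.
    v = value(deck[-1])
    deck[:] = deck[v:-1] + deck[:v] + [deck[-1]]
    # Output: count down by the top card's value; jokers produce no output.
    out = deck[value(deck[0])]
    return None if out >= 53 else (out if out <= 26 else out - 26)

def keystream(deck, n):
    ks = []
    while len(ks) < n:
        z = solitaire_step(deck)
        if z is not None:
            ks.append(z)
    return ks

deck = list(range(1, 55))
random.shuffle(deck)                       # stand-in for keying the deck
ks = keystream(deck, 100_000)
repeats = sum(ks[i] == ks[i + 1] for i in range(len(ks) - 1))
print(f"consecutive repeats: {repeats / (len(ks) - 1):.4f} (uniform would be {1/26:.4f})")
```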

On Cybersecurity Insurance

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2019/09/on_cybersecurit.html

Good paper on cybersecurity insurance: both the history and the promise for the future. From the conclusion:

Policy makers have long held high hopes for cyber insurance as a tool for improving security. Unfortunately, the available evidence so far should give policymakers pause. Cyber insurance appears to be a weak form of governance at present. Insurers writing cyber insurance focus more on organisational procedures than technical controls, rarely include basic security procedures in contracts, and offer discounts that only offer a marginal incentive to invest in security. However, the cost of external response services is covered, which suggests insurers believe ex-post responses to be more effective than ex-ante mitigation. (Alternatively, they can more easily translate the costs associated with ex-post responses into manageable claims.)

The private governance role of cyber insurance is limited by market dynamics. Competitive pressures drive a race-to-the-bottom in risk assessment standards and prevent insurers including security procedures in contracts. Policy interventions, such as minimum risk assessment standards, could solve this collective action problem. Policy-holders and brokers could also drive this change by looking to insurers who conduct rigorous assessments. Doing otherwise ensures adverse selection and moral hazard will increase costs for firms with responsible security postures. Moving toward standardised risk assessment via proposal forms or external scans supports the actuarial base in the long-term. But there is a danger policyholders will succumb to Goodhart’s law by internalising these metrics and optimising the metric rather than minimising risk. This is particularly likely given these assessments are constructed by private actors with their own incentives. Search-light effects may drive the scores towards being based on what can be measured, not what is important.

EDITED TO ADD (9/11): BoingBoing post.

Attacking the Intel Secure Enclave

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2019/08/attacking_the_i.html

Interesting paper by Michael Schwarz, Samuel Weiser, and Daniel Gruss. The upshot is that both Intel and AMD have assumed that trusted enclaves will run only trustworthy code. Of course, that’s not true. And there are no security mechanisms that can deal with malicious enclaves, because the designers couldn’t imagine that they would be necessary. The results are predictable.

The paper: “Practical Enclave Malware with Intel SGX.”

Abstract: Modern CPU architectures offer strong isolation guarantees towards user applications in the form of enclaves. For instance, Intel’s threat model for SGX assumes fully trusted enclaves, yet there is an ongoing debate on whether this threat model is realistic. In particular, it is unclear to what extent enclave malware could harm a system. In this work, we practically demonstrate the first enclave malware which fully and stealthily impersonates its host application. Together with poorly-deployed application isolation on personal computers, such malware can not only steal or encrypt documents for extortion, but also act on the user’s behalf, e.g., sending phishing emails or mounting denial-of-service attacks. Our SGX-ROP attack uses new TSX-based memory-disclosure primitive and a write-anything-anywhere primitive to construct a code-reuse attack from within an enclave which is then inadvertently executed by the host application. With SGX-ROP, we bypass ASLR, stack canaries, and address sanitizer. We demonstrate that instead of protecting users from harm, SGX currently poses a security threat, facilitating so-called super-malware with ready-to-hit exploits. With our results, we seek to demystify the enclave malware threat and lay solid ground for future research on and defense against enclave malware.

AI Emotion-Detection Arms Race

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2019/08/ai_emotion-dete.html

Voice systems are increasingly using AI techniques to determine emotion. A new paper describes an AI-based countermeasure to mask emotion in spoken words.

Their method for masking emotion involves collecting speech, analyzing it, and extracting emotional features from the raw signal. Next, an AI program trains on this signal and replaces the emotional indicators in speech, flattening them. Finally, a voice synthesizer regenerates the normalized speech using the AI’s outputs, which gets sent to the cloud. The researchers say that this method reduced emotional identification by 96 percent in an experiment, although speech recognition accuracy decreased, with a word error rate of 35 percent.
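
As a rough illustration of what “flattening” the emotional indicators means at the signal level, here is a crude, non-learned analogue in Python: analyze the recording with the WORLD vocoder, replace the pitch contour (one of the main prosodic carriers of emotion) with its median, and resynthesize. The file names are placeholders, and the paper’s system uses a trained model that normalizes more than pitch; this sketch is only meant to make the pipeline tangible.

```python
import numpy as np
import soundfile as sf
import pyworld as pw   # WORLD vocoder bindings

x, fs = sf.read("input.wav")        # hypothetical input recording
if x.ndim > 1:
    x = x.mean(axis=1)              # WORLD expects mono
x = np.ascontiguousarray(x, dtype=np.float64)

f0, sp, ap = pw.wav2world(x, fs)    # pitch contour, spectral envelope, aperiodicity
voiced = f0 > 0
flat_f0 = f0.copy()
flat_f0[voiced] = np.median(f0[voiced])   # collapse pitch movement in voiced frames

y = pw.synthesize(flat_f0, sp, ap, fs)    # regenerate the "normalized" speech
sf.write("flattened.wav", y, fs)
```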

Academic paper.

How Privacy Laws Hurt Defendants

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2019/08/how_privacy_law.html

Rebecca Wexler has an interesting op-ed about an inadvertent harm that privacy laws can cause: while law enforcement can often access third-party data to aid in prosecution, the accused don’t have the same level of access to aid in their defense:

The proposed privacy laws would make this situation worse. Lawmakers may not have set out to make the criminal process even more unfair, but the unjust result is not surprising. When lawmakers propose privacy bills to protect sensitive information, law enforcement agencies lobby for exceptions so they can continue to access the information. Few lobby for the accused to have similar rights. Just as the privacy interests of poor, minority and heavily policed communities are often ignored in the lawmaking process, so too are the interests of criminal defendants, many from those same communities.

In criminal cases, both the prosecution and the accused have a right to subpoena evidence so that juries can hear both sides of the case. The new privacy bills need to ensure that law enforcement and defense investigators operate under the same rules when they subpoena digital data. If lawmakers believe otherwise, they should have to explain and justify that view.

For more detail, see her paper.

Another Attack Against Driverless Cars

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2019/07/another_attack_.html

In this piece of research, attackers successfully attack a driverless car system — Renault Captur’s “Level 0” autopilot (Level 0 systems advise human drivers but do not directly operate cars) — by following them with drones that project images of fake road signs in 100ms bursts. The time is too short for human perception, but long enough to fool the autopilot’s sensors.

Boing Boing post.

Research on Human Honesty

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2019/07/research_on_hum.html

New research from Science: “Civic honesty around the globe”:

Abstract: Civic honesty is essential to social capital and economic development, but is often in conflict with material self-interest. We examine the trade-off between honesty and self-interest using field experiments in 355 cities spanning 40 countries around the globe. We turned in over 17,000 lost wallets with varying amounts of money at public and private institutions, and measured whether recipients contacted the owner to return the wallets. In virtually all countries citizens were more likely to return wallets that contained more money. Both non-experts and professional economists were unable to predict this result. Additional data suggest our main findings can be explained by a combination of altruistic concerns and an aversion to viewing oneself as a thief, which increase with the material benefits of dishonesty.

I am surprised, too.

Hacking Hardware Security Modules

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2019/06/hacking_hardwar.html

Security researchers Gabriel Campana and Jean-Baptiste Bédrune are giving a hardware security module (HSM) talk at BlackHat in August:

This highly technical presentation targets an HSM manufactured by a vendor whose solutions are usually found in major banks and large cloud service providers. It will demonstrate several attack paths, some of them allowing unauthenticated attackers to take full control of the HSM. The presented attacks allow retrieving all HSM secrets remotely, including cryptographic keys and administrator credentials. Finally, we exploit a cryptographic bug in the firmware signature verification to upload a modified firmware to the HSM. This firmware includes a persistent backdoor that survives a firmware update.

They have an academic paper in French, and a presentation of the work. Here’s a summary in English.

There were plenty of technical challenges to solve along the way, in what was clearly a thorough and professional piece of vulnerability research:

  1. They started by using legitimate SDK access to their test HSM to upload a firmware module that would give them a shell inside the HSM. Note that this SDK access was used to discover the attacks, but is not necessary to exploit them.
  2. They then used the shell to run a fuzzer on the internal implementation of PKCS#11 commands to find reliable, exploitable buffer overflows.
  3. They checked that they could exploit these buffer overflows from outside the HSM, i.e., by just calling the PKCS#11 driver from the host machine.
  4. They then wrote a payload that would override access control and, via another issue in the HSM, allow them to upload arbitrary (unsigned) firmware. It’s important to note that this backdoor is persistent: a subsequent update will not fix it.
  5. They then wrote a module that would dump all the HSM secrets, and uploaded it to the HSM.
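
To give a flavor of what step 3 looks like from the host side, here is a toy sketch that pushes oversized attribute values at a PKCS#11 call and watches how the token reacts. Everything in it (the module path, the PIN, the choice of C_FindObjects as a target) is illustrative rather than the researchers’ tooling; the commands they actually exploited are detailed in their paper.

```python
import PyKCS11

lib = PyKCS11.PyKCS11Lib()
lib.load("/usr/lib/vendor/libvendor_pkcs11.so")   # hypothetical vendor module path
slot = lib.getSlotList()[0]
session = lib.openSession(slot, PyKCS11.CKF_SERIAL_SESSION)
session.login("1234")                              # hypothetical user PIN

# Throw increasingly large CKA_LABEL values at C_FindObjects; a transport
# failure or device error on an otherwise well-formed call is the kind of
# signal worth chasing with a debugger.
for size in (64, 1024, 16 * 1024, 256 * 1024):
    template = [(PyKCS11.CKA_LABEL, "A" * size)]
    try:
        session.findObjects(template)
        print(f"label size {size}: token answered normally")
    except PyKCS11.PyKCS11Error as e:
        print(f"label size {size}: {e}")

session.logout()
session.closeSession()
```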

The Cost of Cybercrime

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2019/06/the_cost_of_cyb_1.html

Really interesting paper calculating the worldwide cost of cybercrime:

Abstract: In 2012 we presented the first systematic study of the costs of cybercrime. In this paper, we report what has changed in the seven years since. The period has seen major platform evolution, with the mobile phone replacing the PC and laptop as the consumer terminal of choice, with Android replacing Windows, and with many services moving to the cloud. The use of social networks has become extremely widespread. The executive summary is that about half of all property crime, by volume and by value, is now online. We hypothesised in 2012 that this might be so; it is now established by multiple victimisation studies. Many cybercrime patterns appear to be fairly stable, but there are some interesting changes. Payment fraud, for example, has more than doubled in value but has fallen slightly as a proportion of payment value; the payment system has simply become bigger, and slightly more efficient. Several new cybercrimes are significant enough to mention, including business email compromise and crimes involving cryptocurrencies. The move to the cloud means that system misconfiguration may now be responsible for as many breaches as phishing. Some companies have suffered large losses as a side-effect of denial-of-service worms released by state actors, such as NotPetya; we have to take a view on whether they count as cybercrime. The infrastructure supporting cybercrime, such as botnets, continues to evolve, and specific crimes such as premium-rate phone scams have evolved some interesting variants. The overall picture is the same as in 2012: traditional offences that are now technically ‘computer crimes’ such as tax and welfare fraud cost the typical citizen in the low hundreds of Euros/dollars a year; payment frauds and similar offences, where the modus operandi has been completely changed by computers, cost in the tens; while the new computer crimes cost in the tens of cents. Defending against the platforms used to support the latter two types of crime cost citizens in the tens of dollars. Our conclusions remain broadly the same as in 2012: it would be economically rational to spend less in anticipation of cybercrime (on antivirus, firewalls, etc.) and more on response. We are particularly bad at prosecuting criminals who operate infrastructure that other wrongdoers exploit. Given the growing realisation among policymakers that crime hasn’t been falling over the past decade, merely moving online, we might reasonably hope for better funded and coordinated law-enforcement action.

Richard Clayton gave a presentation on this yesterday at WEIS. His final slide contained a summary:

  • Payment fraud is up, but credit card sales are up even more — so we’re winning.
  • Cryptocurrencies are enabling new scams, but the big money is still being lost in more traditional investment fraud.
  • Telecom fraud is down, basically because Skype is free.
  • Anti-virus fraud has almost disappeared, but tech support scams are growing very rapidly.
  • The big money is still in tax fraud, welfare fraud, VAT fraud, and so on.
  • We spend more money on cyber defense than we do on the actual losses.
  • Criminals largely act with impunity. They don’t believe they will get caught, and mostly that’s correct.

Bottom line: the technology has changed a lot since 2012, but the economic considerations remain unchanged.

Fraudulent Academic Papers

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2019/05/fraudulent_acad.html

The term “fake news” has lost much of its meaning, but it describes a real and dangerous Internet trend. Because it’s hard for many people to differentiate a real news site from a fraudulent one, they can be hoodwinked by fictitious news stories pretending to be real. The result is that otherwise reasonable people believe lies.

The trends fostering fake news are more general, though, and we need to start thinking about how it could affect different areas of our lives. In particular, I worry about how it will affect academia. In addition to fake news, I worry about fake research.

An example of this seems to have happened recently in the cryptography field. SIMON is a block cipher designed by the National Security Agency (NSA) and made public in 2013. It’s a general design optimized for hardware implementation, with a variety of block sizes and key lengths. Academic cryptanalysts have been trying to break the cipher since then, with some pretty good results, although the NSA’s specified parameters are still immune to attack. Last week, a paper appeared on the International Association for Cryptologic Research (IACR) ePrint archive purporting to demonstrate a much more effective break of SIMON, one that would affect actual implementations. The paper was sufficiently weird, the authors sufficiently unknown and the details of the attack sufficiently absent, that the editors took it down a few days later. No harm done in the end.

In recent years, there has been a push to speed up the process of disseminating research results. Instead of the laborious process of academic publication, researchers have turned to faster online publishing processes, preprint servers, and simply posting research results. The IACR ePrint archive is one of those alternatives. This has all sorts of benefits, but one of the casualties is the process of peer review. As flawed as that process is, it does help ensure the accuracy of results. (Of course, bad papers can still make it through the process. We’re still dealing with the aftermath of a flawed, and now retracted, Lancet paper linking vaccines with autism.)

Like the news business, academic publishing is subject to abuse. We can only speculate about the motivations of the three people who are listed as authors on the SIMON paper, but you can easily imagine better-executed and more nefarious scenarios. In a world of competitive research, one group might publish a fake result to throw other researchers off the trail. It might be a company trying to gain an advantage over a potential competitor, or even a country trying to gain an advantage over another country.

Reverting to a slower and more accurate system isn’t the answer; the world is just moving too fast for that. We need to recognize that fictitious research results can now easily be injected into our academic publication system, and tune our skepticism meters accordingly.

This essay previously appeared on Lawfare.com.

Fingerprinting iPhones

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2019/05/fingerprinting_7.html

This clever attack allows someone to uniquely identify a phone when you visit a website, based on data from the accelerometer, gyroscope, and magnetometer sensors.

We have developed a new type of fingerprinting attack, the calibration fingerprinting attack. Our attack uses data gathered from the accelerometer, gyroscope and magnetometer sensors found in smartphones to construct a globally unique fingerprint. Overall, our attack has the following advantages:

  • The attack can be launched by any website you visit or any app you use on a vulnerable device without requiring any explicit confirmation or consent from you.
  • The attack takes less than one second to generate a fingerprint.
  • The attack can generate a globally unique fingerprint for iOS devices.
  • The calibration fingerprint never changes, even after a factory reset.
  • The attack provides an effective means to track you as you browse across the web and move between apps on your phone.

Following our disclosure, Apple has patched this vulnerability in iOS 12.2.

Research paper.

The Concept of "Return on Data"

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2019/05/the_concept_of_.html

This law review article by Noam Kolt, titled “Return on Data,” proposes an interesting new way of thinking of privacy law.

Abstract: Consumers routinely supply personal data to technology companies in exchange for services. Yet, the relationship between the utility (U) consumers gain and the data (D) they supply — “return on data” (ROD) — remains largely unexplored. Expressed as a ratio, ROD = U / D. While lawmakers strongly advocate protecting consumer privacy, they tend to overlook ROD. Are the benefits of the services enjoyed by consumers, such as social networking and predictive search, commensurate with the value of the data extracted from them? How can consumers compare competing data-for-services deals? Currently, the legal frameworks regulating these transactions, including privacy law, aim primarily to protect personal data. They treat data protection as a standalone issue, distinct from the benefits which consumers receive. This article suggests that privacy concerns should not be viewed in isolation, but as part of ROD. Just as companies can quantify return on investment (ROI) to optimize investment decisions, consumers should be able to assess ROD in order to better spend and invest personal data. Making data-for-services transactions more transparent will enable consumers to evaluate the merits of these deals, negotiate their terms and make more informed decisions. Pivoting from the privacy paradigm to ROD will both incentivize data-driven service providers to offer consumers higher ROD, as well as create opportunities for new market entrants.