Tag Archives: academicpapers

Hacking a Gene Sequencer by Encoding Malware in a DNA Strand

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2017/08/hacking_a_gene_.html

One of the common ways to hack a computer is to mess with its input data. That is, if you can feed the computer data that it interprets — or misinterprets — in a particular way, you can trick the computer into doing things that it wasn’t intended to do. This is basically what a buffer overflow attack is: the data input overflows a buffer and ends up being executed by the computer process.

Well, some researchers did this with a computer that processes DNA, and they encoded their malware in the DNA strands themselves:

To make the malware, the team translated a simple computer command into a short stretch of 176 DNA letters, denoted as A, G, C, and T. After ordering copies of the DNA from a vendor for $89, they fed the strands to a sequencing machine, which read off the gene letters, storing them as binary digits, 0s and 1s.

Erlich says the attack took advantage of a spill-over effect, when data that exceeds a storage buffer can be interpreted as a computer command. In this case, the command contacted a server controlled by Kohno’s team, from which they took control of a computer in their lab they were using to analyze the DNA file.
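
The excerpt doesn't say exactly how the researchers mapped bits onto bases, but the general idea of packing bytes into DNA letters is simple. Here is a minimal sketch in Python, assuming a common two-bits-per-base convention (A=00, C=01, G=10, T=11); this mapping is an illustrative choice, not necessarily the team's actual encoding:

```python
# Hypothetical sketch: packing bytes into DNA bases and reading them back.
# The 2-bit mapping below is a common convention, not the UW team's scheme.

BASE_TO_BITS = {"A": 0b00, "C": 0b01, "G": 0b10, "T": 0b11}
BITS_TO_BASE = {v: k for k, v in BASE_TO_BITS.items()}

def bytes_to_dna(payload: bytes) -> str:
    """Encode each byte as four bases (two bits per base, most significant first)."""
    bases = []
    for byte in payload:
        for shift in (6, 4, 2, 0):
            bases.append(BITS_TO_BASE[(byte >> shift) & 0b11])
    return "".join(bases)

def dna_to_bytes(strand: str) -> bytes:
    """Reverse the encoding: every four bases become one byte."""
    out = bytearray()
    for i in range(0, len(strand), 4):
        byte = 0
        for base in strand[i:i + 4]:
            byte = (byte << 2) | BASE_TO_BITS[base]
        out.append(byte)
    return bytes(out)

strand = bytes_to_dna(b"exploit")        # 7 bytes -> 28 bases
assert dna_to_bytes(strand) == b"exploit"
print(strand)
```

Under a scheme like this, the 176-letter strand described above carries 44 bytes, which is only enough for a very small exploit payload.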

News articles. Research paper.

Friday Squid Blogging: Squid Eyeballs

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2017/08/friday_squid_bl_588.html

Details on how a squid’s eye corrects for underwater distortion:

Spherical lenses, like the squids’, usually can’t focus the incoming light to one point as it passes through the curved surface, which causes an unclear image. The only way to correct this is by bending each ray of light differently as it falls on each location of the lens’s surface. S-crystallin, the main protein in squid lenses, evolved the ability to do this by behaving as patchy colloids — small molecules that have spots of molecular glue that they use to stick together in clusters.

Research paper.

As usual, you can also use this squid post to talk about the security stories in the news that I haven’t covered.

Read my blog posting guidelines here.

Confusing Self-Driving Cars by Altering Road Signs

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2017/08/confusing_self-.html

Researchers found that they could confuse the road sign detection algorithms of self-driving cars by adding stickers to the signs on the road. They could, for example, cause a car to think that a stop sign is a 45 mph speed limit sign. The changes are subtle, though — look at the photo from the article.

Research paper:

“Robust Physical-World Attacks on Machine Learning Models,” by Ivan Evtimov, Kevin Eykholt, Earlence Fernandes, Tadayoshi Kohno, Bo Li, Atul Prakash, Amir Rahmati, and Dawn Song:

Abstract: Deep neural network-based classifiers are known to be vulnerable to adversarial examples that can fool them into misclassifying their input through the addition of small-magnitude perturbations. However, recent studies have demonstrated that such adversarial examples are not very effective in the physical world: they either completely fail to cause misclassification or only work in restricted cases where a relatively complex image is perturbed and printed on paper. In this paper we propose a new attack algorithm, Robust Physical Perturbations (RP2), that generates perturbations by taking images under different conditions into account. Our algorithm can create spatially-constrained perturbations that mimic vandalism or art to reduce the likelihood of detection by a casual observer. We show that adversarial examples generated by RP2 achieve high success rates under various conditions for real road sign recognition by using an evaluation methodology that captures physical world conditions. We physically realized and evaluated two attacks, one that causes a Stop sign to be misclassified as a Speed Limit sign in 100% of the testing conditions, and one that causes a Right Turn sign to be misclassified as either a Stop or Added Lane sign in 100% of the testing conditions.
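
RP2 optimizes sticker-like perturbations so that they survive changes in distance and viewing angle; the underlying mechanism, though, is ordinary gradient-based adversarial perturbation. Here is a minimal sketch of that basic idea against a toy linear classifier with made-up weights; it is not the paper's algorithm:

```python
# Minimal sketch of the gradient-sign idea behind adversarial examples,
# NOT the RP2 algorithm from the paper (RP2 additionally optimizes over many
# viewpoints and restricts the perturbation to sticker-like regions).
# The "classifier" here is a toy logistic regression with made-up weights.
import numpy as np

w = np.linspace(-1.0, 1.0, 64)        # toy weights for a flattened 8x8 image
b = -2.0                              # bias chosen so the clean image scores low

def target_class_score(x):
    """Probability the toy model assigns to the attacker's target class."""
    return 1.0 / (1.0 + np.exp(-(w @ x + b)))

x = np.full(64, 0.5)                  # clean image: every pixel mid-gray
eps = 0.1                             # per-pixel perturbation budget

# For a linear model the gradient of the logit w.r.t. the input is just w,
# so stepping each pixel by eps in the gradient's sign maximally raises the
# target-class score under an L-infinity budget.
x_adv = np.clip(x + eps * np.sign(w), 0.0, 1.0)

print("clean score:      ", round(target_class_score(x), 3))     # ~0.12
print("adversarial score:", round(target_class_score(x_adv), 3))  # ~0.78
```

The point is only that a perturbation bounded to a small per-pixel budget can swing a classifier's output; the paper's contribution is making such perturbations survive real-world imaging conditions.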

Detecting Stingrays

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2017/08/detecting_sting.html

Researchers are developing technologies that can detect IMSI-catchers: those fake cell phone towers that can be used to surveil people in the area.

This is good work, but it’s unclear to me whether these devices can detect all the newer IMSI-catchers that are being sold to governments worldwide.

News article.

Measuring Vulnerability Rediscovery

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2017/07/measuring_vulne.html

New paper: “Taking Stock: Estimating Vulnerability Rediscovery,” by Trey Herr, Bruce Schneier, and Christopher Morris:

Abstract: How often do multiple, independent, parties discover the same vulnerability? There are ample models of vulnerability discovery, but little academic work on this issue of rediscovery. The immature state of this research and subsequent debate is a problem for the policy community, where the government’s decision to disclose a given vulnerability hinges in part on that vulnerability’s likelihood of being discovered and used maliciously by another party. Research into the behavior of malicious software markets and the efficacy of bug bounty programs would similarly benefit from an accurate baseline estimate for how often vulnerabilities are discovered by multiple independent parties.

This paper presents a new dataset of more than 4,300 vulnerabilities, and estimates vulnerability rediscovery across different vendors and software types. It concludes that rediscovery happens more than twice as often as the 1-9% range previously reported. For our dataset, 15% to 20% of vulnerabilities are discovered independently at least twice within a year. For just Android, 13.9% of vulnerabilities are rediscovered within 60 days, rising to 20% within 90 days, and above 21% within 120 days. For the Chrome browser we found 12.57% rediscovery within 60 days; and the aggregate rate for our entire dataset generally rises over the eight-year span, topping out at 19.6% in 2016. We believe that the actual rate is even higher for certain types of software.

When combined with an estimate of the total count of vulnerabilities in use by the NSA, these rates suggest that rediscovery of vulnerabilities kept secret by the U.S. government may be the source of up to one-third of all zero-day vulnerabilities detected in use each year. These results indicate that the information security community needs to map the impact of rediscovery on the efficacy of bug bounty programs and policymakers should more rigorously evaluate the costs of non-disclosure of software vulnerabilities.
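
To make the windowed figures above concrete, here is an illustrative sketch, with made-up records, of how a 60-day rediscovery rate can be counted. The paper's dataset and methodology are considerably more involved:

```python
# Illustrative sketch (with made-up data) of counting a windowed rediscovery
# rate like the 60-day figures quoted above. Only the basic counting is shown.
from datetime import date

# (vulnerability id, reporting party, report date) -- hypothetical records
reports = [
    ("CVE-A", "team1", date(2016, 1, 10)),
    ("CVE-A", "team2", date(2016, 2, 20)),   # rediscovered 41 days later
    ("CVE-B", "team1", date(2016, 3, 1)),
    ("CVE-C", "team3", date(2016, 5, 5)),
    ("CVE-C", "team4", date(2016, 9, 1)),    # rediscovered, but after 60 days
]

def rediscovery_rate(reports, window_days=60):
    """Fraction of vulnerabilities independently reported again within the window."""
    by_vuln = {}
    for vuln_id, party, day in reports:
        by_vuln.setdefault(vuln_id, []).append((day, party))
    rediscovered = 0
    for entries in by_vuln.values():
        entries.sort()
        first_day, first_party = entries[0]
        if any(party != first_party and (day - first_day).days <= window_days
               for day, party in entries[1:]):
            rediscovered += 1
    return rediscovered / len(by_vuln)

print(rediscovery_rate(reports))   # 1 of 3 vulnerabilities -> ~0.33
```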

We wrote a blog post on the paper, and another when we issued a revised version.

Comments on the original paper by Dave Aitel. News articles.

Friday Squid Blogging: Giant Squids Have Small Brains

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2017/07/friday_squid_bl_586.html

New research:

In this study, the optic lobe of a giant squid (Architeuthis dux, male, mantle length 89 cm), which was caught by local fishermen off the northeastern coast of Taiwan, was scanned using high-resolution magnetic resonance imaging in order to examine its internal structure. It was evident that the volume ratio of the optic lobe to the eye in the giant squid is much smaller than that in the oval squid (Sepioteuthis lessoniana) and the cuttlefish (Sepia pharaonis). Furthermore, the cell density in the cortex of the optic lobe is significantly higher in the giant squid than in oval squids and cuttlefish, with the relative thickness of the cortex being much larger in Architeuthis optic lobe than in cuttlefish. This indicates that the relative size of the medulla of the optic lobe in the giant squid is disproportionally smaller compared with these two cephalopod species.

From the New York Times:

A recent, lucky opportunity to study part of a giant squid brain up close in Taiwan suggests that, compared with cephalopods that live in shallow waters, giant squids have a small optic lobe relative to their eye size.

Furthermore, the region in their optic lobes that integrates visual information with motor tasks is reduced, implying that giant squids don’t rely on visually guided behavior like camouflage and body patterning to communicate with one another, as other cephalopods do.

As usual, you can also use this squid post to talk about the security stories in the news that I haven’t covered.

Read my blog posting guidelines here.

More on the NSA’s Use of Traffic Shaping

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2017/07/more_on_the_nsa_2.html

“Traffic shaping” — the practice of tricking data to flow through a particular route on the Internet so it can be more easily surveilled — is an NSA technique that has gotten much less attention than it deserves. It’s a powerful technique that allows an eavesdropper to get access to communications channels it would otherwise not be able to monitor.

There’s a new paper on this technique:

This report describes a novel and more disturbing set of risks. As a technical matter, the NSA does not have to wait for domestic communications to naturally turn up abroad. In fact, the agency has technical methods that can be used to deliberately reroute Internet communications. The NSA uses the term “traffic shaping” to describe any technical means that deliberately reroutes Internet traffic to a location that is better suited, operationally, to surveillance. Since it is hard to intercept Yemen’s international communications from inside Yemen itself, the agency might try to “shape” the traffic so that it passes through communications cables located on friendlier territory. Think of it as diverting part of a river to a location from which it is easier (or more legal) to catch fish.

The NSA has clandestine means of diverting portions of the river of Internet traffic that travels on global communications cables.

Could the NSA use traffic shaping to redirect domestic Internet traffic — emails and chat messages sent between Americans, say — to foreign soil, where its surveillance can be conducted beyond the purview of Congress and the courts? It is impossible to categorically answer this question, due to the classified nature of many national-security surveillance programs, regulations and even of the legal decisions made by the surveillance courts. Nevertheless, this report explores a legal, technical, and operational landscape that suggests that traffic shaping could be exploited to sidestep legal restrictions imposed by Congress and the surveillance courts.

News article. NSA document detailing the technique with Yemen.

This work builds on previous research that I blogged about here.

The fundamental vulnerability is that routing information isn’t authenticated.

A Man-in-the-Middle Attack against a Password Reset System

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2017/07/a_man-in-the-mi.html

This is nice work: “The Password Reset MitM Attack,” by Nethanel Gelernter, Senia Kalma, Bar Magnezi, and Hen Porcilan:

Abstract: We present the password reset MitM (PRMitM) attack and show how it can be used to take over user accounts. The PRMitM attack exploits the similarity of the registration and password reset processes to launch a man in the middle (MitM) attack at the application level. The attacker initiates a password reset process with a website and forwards every challenge to the victim who either wishes to register in the attacking site or to access a particular resource on it.

The attack has several variants, including exploitation of a password reset process that relies on the victim’s mobile phone, using either SMS or phone call. We evaluated the PRMitM attacks on Google and Facebook users in several experiments, and found that their password reset process is vulnerable to the PRMitM attack. Other websites and some popular mobile applications are vulnerable as well.

Although solutions seem trivial in some cases, our experiments show that the straightforward solutions are not as effective as expected. We designed two secure password reset processes and evaluated them on users of Google and Facebook. Our results indicate a significant improvement in security. Since millions of accounts are currently vulnerable to the PRMitM attack, we also present a list of recommendations for implementing and auditing the password reset process.
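
The core of the attack is a relay between the victim and the real site's password reset flow. A highly simplified sketch of that flow, using stand-in functions rather than any real site's API:

```python
# Highly simplified simulation of the PRMitM relay described above: the
# attacker's "registration" page forwards password-reset challenges from the
# real site to the victim and sends the victim's answers back. Every function
# here is a stand-in, not any real service's interface.

def real_site_start_reset(victim_email):
    """Stand-in: the real site begins a reset and returns its challenge."""
    return {"type": "sms_code", "prompt": "Enter the code we texted you"}

def real_site_submit_answer(victim_email, answer):
    """Stand-in: the real site checks the answer and grants a reset token."""
    return {"reset_token": "token-for-" + victim_email}

def victim_answers(prompt):
    """Stand-in: the victim, thinking this is part of signing up for the
    attacker's site, types in the code they just received."""
    return "123456"

def prmitm_attack(victim_email):
    # 1. Attacker starts a password reset on the victim's real account.
    challenge = real_site_start_reset(victim_email)
    # 2. The same challenge is shown to the victim on the attacker's site,
    #    disguised as a normal registration step.
    answer = victim_answers(challenge["prompt"])
    # 3. The attacker relays the answer and receives control of the account.
    return real_site_submit_answer(victim_email, answer)

print(prmitm_attack("victim@example.com"))
```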

Password resets have long been a weak security link.

BoingBoing Post.

Friday Squid Blogging: Sex Is Traumatic for the Female Dumpling Squid

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2017/06/friday_squid_bl_580.html

The more they mate, the sooner they die. Academic paper (paywall). News article.

As usual, you can also use this squid post to talk about the security stories in the news that I haven’t covered.

Read my blog posting guidelines here.

Safety and Security and the Internet of Things

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2017/06/safety_and_secu.html

Ross Anderson blogged about his new paper on security and safety concerns about the Internet of Things. (See also this short video.)

It’s very much along the lines of what I’ve been writing.

Surveillance Intermediaries

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2017/06/surveillance_in_2.html

Interesting law-journal article: “Surveillance Intermediaries,” by Alan Z. Rozenshtein.

Abstract: Apple’s 2016 fight against a court order commanding it to help the FBI unlock the iPhone of one of the San Bernardino terrorists exemplifies how central the question of regulating government surveillance has become in American politics and law. But scholarly attempts to answer this question have suffered from a serious omission: scholars have ignored how government surveillance is checked by “surveillance intermediaries,” the companies like Apple, Google, and Facebook that dominate digital communications and data storage, and on whose cooperation government surveillance relies. This Article fills this gap in the scholarly literature, providing the first comprehensive analysis of how surveillance intermediaries constrain the surveillance executive. In so doing, it enhances our conceptual understanding of, and thus our ability to improve, the institutional design of government surveillance.

Surveillance intermediaries have the financial and ideological incentives to resist government requests for user data. Their techniques of resistance are: proceduralism and litigiousness that reject voluntary cooperation in favor of minimal compliance and aggressive litigation; technological unilateralism that designs products and services to make surveillance harder; and policy mobilization that rallies legislative and public opinion to limit surveillance. Surveillance intermediaries also enhance the “surveillance separation of powers”; they make the surveillance executive more subject to inter-branch constraints from Congress and the courts, and to intra-branch constraints from foreign-relations and economics agencies as well as the surveillance executive’s own surveillance-limiting components.

The normative implications of this descriptive account are important and cross-cutting. Surveillance intermediaries can both improve and worsen the “surveillance frontier”: the set of tradeoffs – between public safety, privacy, and economic growth – from which we choose surveillance policy. And while intermediaries enhance surveillance self-government when they mobilize public opinion and strengthen the surveillance separation of powers, they undermine it when their unilateral technological changes prevent the government from exercising its lawful surveillance authorities.

Spear Phishing Attacks

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2017/06/spear_phishing_.html

Really interesting research: “Unpacking Spear Phishing Susceptibility,” by Zinaida Benenson, Freya Gassmann, and Robert Landwirth.

Abstract: We report the results of a field experiment where we sent over 1,200 university students an email or a Facebook message with a link to (non-existent) party pictures from a non-existent person, and later asked them about the reasons for their link clicking behavior. We registered a significant difference in clicking rates: 20% of email versus 42.5% of Facebook recipients clicked. The most frequently reported reason for clicking was curiosity (34%), followed by the explanation that the message fit the recipient’s expectations (27%). Moreover, 16% thought that they might know the sender. These results show that people’s decisional heuristics are relatively easy to misuse in a targeted attack, making defense especially challenging.

Black Hat presentation on the research.

Post-Quantum RSA

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2017/05/post-quantum_rs.html

Interesting research on a version of RSA that is secure against a quantum computer:

Post-quantum RSA

Daniel J. Bernstein, Nadia Heninger, Paul Lou, and Luke Valenta

Abstract: This paper proposes RSA parameters for which (1) key generation, encryption, decryption, signing, and verification are feasible on today’s computers while (2) all known attacks are infeasible, even assuming highly scalable quantum computers. As part of the performance analysis, this paper introduces a new algorithm to generate a batch of primes. As part of the attack analysis, this paper introduces a new quantum factorization algorithm that is often much faster than Shor’s algorithm and much faster than pre-quantum factorization algorithms. Initial pqRSA implementation results are provided.
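
As I understand the proposal, it is still textbook RSA, just with a modulus built from a very large batch of primes so that key generation and use stay feasible while known attacks, quantum ones included, do not. A toy sketch with absurdly small parameters; the real construction uses 4096-bit primes, a terabyte-scale modulus, and the paper's batch prime-generation algorithm, none of which is reproduced here:

```python
# Toy sketch of the multi-prime idea behind pqRSA, with absurdly small numbers.
import math
import sympy

NUM_PRIMES = 8
PRIME_BITS = 16                       # pqRSA uses 4096-bit primes; this is a toy

primes = set()
while len(primes) < NUM_PRIMES:       # collect distinct primes
    primes.add(sympy.randprime(2**(PRIME_BITS - 1), 2**PRIME_BITS))
primes = sorted(primes)

n = math.prod(primes)                 # multi-prime modulus
lam = math.lcm(*(p - 1 for p in primes))
e = 65537
d = pow(e, -1, lam)                   # private exponent

m = 42                                # message, must be < n
c = pow(m, e, n)                      # "encrypt"
assert pow(c, d, n) == m              # "decrypt"
print("modulus bits:", n.bit_length())
```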

Hacking Fingerprint Readers with Master Prints

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2017/05/hacking_fingerp.html

There’s interesting research on using a set of “master” digital fingerprints to fool biometric readers. The work is theoretical at the moment, but they might be able to open about two-thirds of iPhones with these master prints.

Definitely something to keep watching.

Research paper (behind a paywall).

Using Wi-Fi to Get 3D Images of Surrounding Location

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2017/05/using_wi-fi_to_1.html

Interesting research:

The radio signals emitted by a commercial Wi-Fi router can act as a kind of radar, providing images of the transmitter’s environment, according to new experiments. Two researchers in Germany borrowed techniques from the field of holography to demonstrate Wi-Fi imaging. They found that the technique could potentially allow users to peer through walls and could provide images 10 times per second.

News article.

Using Ultrasonic Beacons to Track Users

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2017/05/using_ultrasoni.html

I’ve previously written about ad networks using ultrasonic communications to jump from one device to another. The idea is for devices like televisions to play ultrasonic codes in advertisements and for nearby smartphones to detect them. This way the two devices can be linked.

Creepy, yes. And also increasingly common, as this research demonstrates:

Privacy Threats through Ultrasonic Side Channels on Mobile Devices

by Daniel Arp, Erwin Quiring, Christian Wressnegger and Konrad Rieck

Abstract: Device tracking is a serious threat to the privacy of users, as it enables spying on their habits and activities. A recent practice embeds ultrasonic beacons in audio and tracks them using the microphone of mobile devices. This side channel allows an adversary to identify a user’s current location, spy on her TV viewing habits or link together her different mobile devices. In this paper, we explore the capabilities, the current prevalence and technical limitations of this new tracking technique based on three commercial tracking solutions. To this end, we develop detection approaches for ultrasonic beacons and Android applications capable of processing these. Our findings confirm our privacy concerns: We spot ultrasonic beacons in various web media content and detect signals in 4 of 35 stores in two European cities that are used for location tracking. While we do not find ultrasonic beacons in TV streams from 7 countries, we spot 234 Android applications that are constantly listening for ultrasonic beacons in the background without the user’s knowledge.
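
The detection side is conceptually simple: look for energy in the near-ultrasonic band that ordinary audio content should not contain. A minimal sketch over a synthetic signal; the band, threshold, and test signal are illustrative assumptions, not the paper's parameters:

```python
# Minimal sketch of beacon detection: measure how much spectral energy of a
# microphone capture falls in the near-ultrasonic band (roughly 18-20 kHz).
# The synthetic signal below stands in for real audio.
import numpy as np

SAMPLE_RATE = 44_100
DURATION = 0.5
t = np.arange(int(SAMPLE_RATE * DURATION)) / SAMPLE_RATE

# Simulated capture: ordinary audible content plus a weak 18.5 kHz beacon.
audio = 0.5 * np.sin(2 * np.pi * 440 * t) + 0.05 * np.sin(2 * np.pi * 18_500 * t)

def ultrasonic_energy_ratio(samples, rate, band=(18_000, 20_000)):
    """Fraction of spectral energy that falls inside the near-ultrasonic band."""
    spectrum = np.abs(np.fft.rfft(samples)) ** 2
    freqs = np.fft.rfftfreq(len(samples), d=1.0 / rate)
    in_band = (freqs >= band[0]) & (freqs <= band[1])
    return spectrum[in_band].sum() / spectrum.sum()

ratio = ultrasonic_energy_ratio(audio, SAMPLE_RATE)
print(f"near-ultrasonic energy ratio: {ratio:.4f}")
if ratio > 0.001:                     # illustrative threshold
    print("possible ultrasonic beacon present")
```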

News article. BoingBoing post.

Friday Squid Blogging: Squid Communications

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2017/05/friday_squid_bl_576.html

In the oval squid Sepioteuthis lessoniana, males use body patterns to communicate with both females and other males:

To gain insight into the visual communication associated with each behavior in terms of the body patterning’s key components, the co-expression frequencies of two or more components at any moment in time were calculated in order to assess uniqueness when distinguishing one behavior from another. This approach identified the minimum set of key components that, when expressed together, represents an unequivocal visual communication signal. While the interpretation of the signal and the associated response of the receiver during visual communication are difficult to determine, the concept of the component assembly is similar to a typical language within which individual words often have multiple meanings, but when they appear together with other words, the message becomes unequivocal. The present study thus demonstrates that dynamic body patterning, by expressing unique sets of key components acutely, is an efficient way of communicating behavioral information between oval squids.

News article.

As usual, you can also use this squid post to talk about the security stories in the news that I haven’t covered.

Read my blog posting guidelines here.

Jumping Airgaps with a Laser and a Scanner

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2017/04/jumping_airgaps.html

Researchers have configured two computers to talk to each other using a laser and a scanner.

Scanners work by detecting reflected light on their glass pane. The light creates a charge that the scanner translates into binary, which gets converted into an image. But scanners are sensitive to any changes of light in a room — even when paper is on the glass pane or when the light source is infrared — which changes the charges that get converted to binary. This means signals can be sent through the scanner by flashing light at its glass pane using either a visible light source or an infrared laser that is invisible to human eyes.

There are a couple of caveats to the attack — the malware to decode the signals has to already be installed on a system on the network, and the lid on the scanner has to be at least partially open to receive the light. It’s not unusual for workers to leave scanner lids open after using them, however, and an attacker could also pay a cleaning crew or other worker to leave the lid open at night.

The setup is that there’s malware on the computer connected to the scanner, and that computer isn’t on the Internet. This technique allows an attacker to communicate with that computer. For extra coolness, the laser can be mounted on a drone.
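
On the receiving side, the covert channel amounts to on-off keying of light. A rough sketch of the decoding step, with an invented sampling interface, bit period, and framing:

```python
# Rough sketch of the receiving side of such a channel: the compromised
# machine samples the scanner's light readings once per bit period and
# thresholds them into 1s and 0s. The readings, bit period, and framing
# here are all invented for illustration.

def decode_light_levels(levels, threshold=0.5):
    """Turn a sequence of per-bit brightness readings into bytes (MSB first)."""
    bits = [1 if level > threshold else 0 for level in levels]
    data = bytearray()
    for i in range(0, len(bits) - 7, 8):
        byte = 0
        for bit in bits[i:i + 8]:
            byte = (byte << 1) | bit
        data.append(byte)
    return bytes(data)

# Simulated readings for one byte: laser on = ~0.9, laser off = ~0.1.
readings = [0.9, 0.1, 0.1, 0.1, 0.9, 0.9, 0.1, 0.1]    # 1000 1100 -> 0x8C
print(decode_light_levels(readings).hex())              # "8c"
```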

Here’s the paper. And two videos.

Stealing Browsing History Using Your Phone’s Ambient Light Sensor

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2017/04/stealing_browsi.html

There has been a flurry of research into using the various sensors on your phone to steal data in surprising ways. Here’s another: using the phone’s ambient light sensor to detect what’s on the screen. It’s a proof of concept, but the paper’s general conclusions are correct:

There is a lesson here that designing specifications and systems from a privacy engineering perspective is a complex process: decisions about exposing sensitive APIs to the web without any protections should not be taken lightly. One danger is that specification authors and browser vendors will base decisions on overly general principles and research results which don’t apply to a particular new feature (similarly to how protections on gyroscope readings might not be sufficient for light sensor data).

Reading Analytics and Privacy

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2017/04/reading_analyti.html

Interesting paper: “The rise of reading analytics and the emerging calculus of reading privacy in the digital world,” by Clifford Lynch:

Abstract: This paper studies emerging technologies for tracking reading behaviors (“reading analytics”) and their implications for reader privacy, attempting to place them in a historical context. It discusses what data is being collected, to whom it is available, and how it might be used by various interested parties (including authors). I explore means of tracking what’s being read, who is doing the reading, and how readers discover what they read. The paper includes two case studies: mass-market e-books (both directly acquired by readers and mediated by libraries) and scholarly journals (usually mediated by academic libraries); in the latter case I also provide examples of the implications of various authentication, authorization and access management practices on reader privacy. While legal issues are touched upon, the focus is generally pragmatic, emphasizing technology and marketplace practices. The article illustrates the way reader privacy concerns are shifting from government to commercial surveillance, and the interactions between government and the private sector in this area. The paper emphasizes U.S.-based developments.