Tag Archives: academicpapers

Identifying People by Their Browsing Histories

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2020/08/identifying_peo_9.html

Interesting paper: “Replication: Why We Still Can’t Browse in Peace: On the Uniqueness and Reidentifiability of Web Browsing Histories“:

We examine the threat to individuals’ privacy based on the feasibility of reidentifying users through distinctive profiles of their browsing history visible to websites and third parties. This work replicates and extends the 2012 paper Why Johnny Can’t Browse in Peace: On the Uniqueness of Web Browsing History Patterns [48]. The original work demonstrated that browsing profiles are highly distinctive and stable. We reproduce those results and extend the original work to detail the privacy risk posed by the aggregation of browsing histories. Our dataset consists of two weeks of browsing data from ~52,000 Firefox users. Our work replicates the original paper’s core findings by identifying 48,919 distinct browsing profiles, of which 99% are unique. High uniqueness holds even when histories are truncated to just 100 top sites. We then find that for users who visited 50 or more distinct domains in the two-week data collection period, ~50% can be reidentified using the top 10k sites. Reidentifiability rose to over 80% for users that browsed 150 or more distinct domains. Finally, we observe numerous third parties pervasive enough to gather web histories sufficient to leverage browsing history as an identifier.
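To make “profile uniqueness” concrete, here is a minimal sketch (not the paper’s code) that treats each user’s history as the set of distinct domains visited and counts how many such profiles occur exactly once. The users and domains are hypothetical:

```python
# Minimal sketch: measuring how many browsing profiles are unique when
# a profile is the set of distinct domains a user visited.
from collections import Counter

histories = {
    "user_a": ["news.example", "mail.example", "forum.example"],
    "user_b": ["news.example", "mail.example"],
    "user_c": ["news.example", "mail.example", "forum.example"],  # same as user_a
    "user_d": ["shop.example", "mail.example"],
}

# A profile is the order-independent set of distinct domains visited.
profiles = {user: frozenset(domains) for user, domains in histories.items()}

profile_counts = Counter(profiles.values())
unique = [user for user, p in profiles.items() if profile_counts[p] == 1]

print(f"{len(profile_counts)} distinct profiles, "
      f"{len(unique)} of {len(profiles)} users uniquely identifiable")
# user_a and user_c share a profile; user_b and user_d are unique.
```

At the paper’s scale, with tens of thousands of users and thousands of candidate domains, nearly every such set is one of a kind, which is the core of the reidentification risk.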

One of the authors of the original study comments on the replication.

Using Disinformation to Cause a Blackout

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2020/08/using_disinform.html

Interesting paper: “How weaponizing disinformation can bring down a city’s power grid“:

Abstract: Social media has made it possible to manipulate the masses via disinformation and fake news at an unprecedented scale. This is particularly alarming from a security perspective, as humans have proven to be one of the weakest links when protecting critical infrastructure in general, and the power grid in particular. Here, we consider an attack in which an adversary attempts to manipulate the behavior of energy consumers by sending fake discount notifications encouraging them to shift their consumption into the peak-demand period. Using Greater London as a case study, we show that such disinformation can indeed lead to unwitting consumers synchronizing their energy-usage patterns, and result in blackouts on a city-scale if the grid is heavily loaded. We then conduct surveys to assess the propensity of people to follow-through on such notifications and forward them to their friends. This allows us to model how the disinformation may propagate through social networks, potentially amplifying the attack impact. These findings demonstrate that in an era when disinformation can be weaponized, system vulnerabilities arise not only from the hardware and software of critical infrastructure, but also from the behavior of the consumers.

I’m not sure the attack is practical, but it’s an interesting idea.
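For a sense of the arithmetic behind the attack, here is a toy sketch; all numbers (grid capacity, shiftable load per household, compliance rates) are hypothetical illustrations, not the paper’s Greater London model:

```python
# Toy sketch: if a fake discount shifts a fraction of households'
# flexible load into the existing peak hour, aggregate demand can
# exceed grid capacity. All figures below are assumptions.
baseline_peak_mw = 5000.0      # assumed demand at the peak hour
capacity_mw = 5500.0           # assumed grid capacity
households = 3_000_000
flexible_kw_per_home = 1.5     # assumed shiftable load (e.g., appliances)

for follow_rate in (0.05, 0.10, 0.15):
    shifted_mw = households * follow_rate * flexible_kw_per_home / 1000.0
    total = baseline_peak_mw + shifted_mw
    status = "OVERLOAD" if total > capacity_mw else "ok"
    print(f"{follow_rate:>4.0%} comply -> peak {total:,.0f} MW ({status})")
```

The point the sketch makes is that even a modest compliance rate, amplified by forwarding through social networks, can matter when the grid is already heavily loaded.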

UAE Hack and Leak Operations

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2020/08/uae_hack_and_le.html

Interesting paper on recent hack-and-leak operations attributed to the UAE:

Abstract: Four hack-and-leak operations in U.S. politics between 2016 and 2019, publicly attributed to the United Arab Emirates (UAE), Qatar, and Saudi Arabia, should be seen as the “simulation of scandal” – deliberate attempts to direct moral judgement against their target. Although “hacking” tools enable easy access to secret information, they are a double-edged sword, as their discovery means the scandal becomes about the hack itself, not about the hacked information. There are wider consequences for cyber competition in situations of constraint where both sides are strategic partners, as in the case of the United States and its allies in the Persian Gulf.

Adversarial Machine Learning and the CFAA

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2020/07/adversarial_mac_1.html

I just co-authored a paper on the legal risks of doing machine learning research, given the current state of the Computer Fraud and Abuse Act:

Abstract: Adversarial Machine Learning is booming with ML researchers increasingly targeting commercial ML systems such as those used in Facebook, Tesla, Microsoft, IBM, Google to demonstrate vulnerabilities. In this paper, we ask, “What are the potential legal risks to adversarial ML researchers when they attack ML systems?” Studying or testing the security of any operational system potentially runs afoul of the Computer Fraud and Abuse Act (CFAA), the primary United States federal statute that creates liability for hacking. We claim that Adversarial ML research is likely no different. Our analysis shows that because there is a split in how the CFAA is interpreted, aspects of adversarial ML attacks, such as model inversion, membership inference, model stealing, reprogramming the ML system and poisoning attacks, may be sanctioned in some jurisdictions and not penalized in others. We conclude with an analysis predicting how the US Supreme Court may resolve some present inconsistencies in the CFAA’s application in Van Buren v. United States, an appeal expected to be decided in 2021. We argue that the court is likely to adopt a narrow construction of the CFAA, and that this will actually lead to better adversarial ML security outcomes in the long term.
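For readers unfamiliar with the attack classes the abstract lists, here is an illustrative toy of one of them, membership inference, reduced to its simplest form: a confidence threshold. The records and confidence values are hypothetical:

```python
# Toy membership-inference test: models tend to be more confident on
# their training data, so unusually high confidence on a record hints
# that the record was in the training set. Numbers are hypothetical.
def membership_guess(confidence: float, threshold: float = 0.95) -> bool:
    """Guess 'was in the training set' when the target model's
    top-class confidence on this input exceeds the threshold."""
    return confidence > threshold

# Hypothetical top-class confidences returned by some target model:
queries = {"record_1": 0.99, "record_2": 0.61, "record_3": 0.97}
for record, conf in queries.items():
    verdict = "likely member" if membership_guess(conf) else "likely non-member"
    print(f"{record}: confidence {conf:.2f} -> {verdict}")
```

Note that even this trivial probing involves repeatedly querying someone else’s operational system, which is exactly the activity whose CFAA status the paper analyzes.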

Medium post on the paper. News article, which uses our graphic without attribution.

Fawkes: Digital Image Cloaking

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2020/07/fawkes_digital_.html

Fawkes is a system for manipulating digital images so that they aren’t recognized by facial recognition systems.

At a high level, Fawkes takes your personal images, and makes tiny, pixel-level changes to them that are invisible to the human eye, in a process we call image cloaking. You can then use these “cloaked” photos as you normally would, sharing them on social media, sending them to friends, printing them or displaying them on digital devices, the same way you would any other photo. The difference, however, is that if and when someone tries to use these photos to build a facial recognition model, “cloaked” images will teach the model a highly distorted version of what makes you look like you. The cloak effect is not easily detectable, and will not cause errors in model training. However, when someone tries to identify you using an unaltered image of you (e.g., a photo taken in public), they will fail.
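For illustration, here is a generic sketch of what a bounded pixel-level perturbation looks like. This is not the Fawkes algorithm, which optimizes the perturbation against facial feature extractors; the random delta below only shows what “tiny, pixel-level changes” means in practice:

```python
# Generic bounded-perturbation sketch (not Fawkes itself): a "cloak"
# is a small per-pixel change, clipped so the image stays valid and
# the change stays imperceptible.
import numpy as np

rng = np.random.default_rng(0)
image = rng.integers(0, 256, size=(64, 64, 3)).astype(np.float32)  # stand-in photo

epsilon = 3.0  # max per-pixel change, small enough to be invisible
delta = rng.uniform(-epsilon, epsilon, size=image.shape).astype(np.float32)

cloaked = np.clip(image + delta, 0, 255)

max_change = np.abs(cloaked - image).max()
print(f"max per-pixel change: {max_change:.1f} (bounded by epsilon={epsilon})")
```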

Research paper.

Hacking a Power Supply

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2020/07/hacking_a_power.html

This hack targets the firmware on modern power supplies. (Yes, power supplies are also computers.)

Normally, when a phone is connected to a power brick with support for fast charging, the phone and the power adapter communicate with each other to determine the proper amount of electricity that can be sent to the phone without damaging the device — the more juice the power adapter can send, the faster it can charge the phone.

However, by hacking the fast charging firmware built into a power adapter, Xuanwu Labs demonstrated that bad actors could potentially manipulate the power brick into sending more electricity than a phone can handle, thereby overheating the phone, melting internal components, or as Xuanwu Labs discovered, setting the device on fire.
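A hedged sketch of the underlying trust problem: the phone negotiates a charging profile but then has to cope with an adapter whose firmware lies. The voltages and the shape of the check below are assumptions for illustration, not the actual fast-charging protocol:

```python
# Sketch of the trust problem: compromised adapter firmware can
# deliver more power than was negotiated, so a defensive phone should
# validate delivery rather than trust the adapter. Values hypothetical.
SAFE_LIMIT_VOLTS = 9.0   # assumed maximum the phone's circuitry tolerates

def phone_accepts(delivered_volts: float) -> bool:
    """Defensive check on the phone side: reject out-of-spec delivery
    instead of trusting whatever the adapter firmware sends."""
    return delivered_volts <= SAFE_LIMIT_VOLTS

for delivered in (5.0, 9.0, 20.0):   # 20 V models a malicious adapter
    ok = phone_accepts(delivered)
    print(f"adapter delivers {delivered:>4.1f} V -> "
          f"{'accept' if ok else 'cut charging (over limit)'}")
```

The attack works precisely because many devices perform no such independent validation and rely on the adapter honoring the negotiation.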

Research paper, in Chinese.

Securing the International IoT Supply Chain

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2020/07/securing_the_in_1.html

Together with Nate Kim (former student) and Trey Herr (Atlantic Council Cyber Statecraft Initiative), I have written a paper on IoT supply chain security. The basic problem we try to solve is: how do you enforce IoT security regulations when most of the stuff is made in other countries? And our solution is: enforce the regulations on the domestic company that’s selling the stuff to consumers. There’s a lot of detail between here and there, though, and it’s all in the paper.

We also wrote a Lawfare post:

…we propose to leverage these supply chains as part of the solution. Selling to U.S. consumers generally requires that IoT manufacturers sell through a U.S. subsidiary or, more commonly, a domestic distributor like Best Buy or Amazon. The Federal Trade Commission can apply regulatory pressure to this distributor to sell only products that meet the requirements of a security framework developed by U.S. cybersecurity agencies. That would put pressure on manufacturers to make sure their products are compliant with the standards set out in this security framework, including pressuring their component vendors and original device manufacturers to make sure they supply parts that meet the recognized security framework.

News article.

The Unintended Harms of Cybersecurity

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2020/06/the_unintended_.html

Interesting research: “Identifying Unintended Harms of Cybersecurity Countermeasures“:

Abstract: Well-meaning cybersecurity risk owners will deploy countermeasures (technologies or procedures) to manage risks to their services or systems. In some cases, those countermeasures will produce unintended consequences, which must then be addressed. Unintended consequences can potentially induce harm, adversely affecting user behaviour, user inclusion, or the infrastructure itself (including other services or countermeasures). Here we propose a framework for preemptively identifying unintended harms of risk countermeasures in cybersecurity. The framework identifies a series of unintended harms which go beyond technology alone, to consider the cyberphysical and sociotechnical space: displacement, insecure norms, additional costs, misuse, misclassification, amplification, and disruption. We demonstrate our framework through application to the complex, multi-stakeholder challenges associated with the prevention of cyberbullying as an applied example. Our framework aims to illuminate harmful consequences, not to paralyze decision-making, but so that potential unintended harms can be more thoroughly considered in risk management strategies. The framework can support identification and preemptive planning to identify vulnerable populations and preemptively insulate them from harm. There are opportunities to use the framework in coordinating risk management strategy across stakeholders in complex cyberphysical environments.
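The seven harm categories lend themselves to a simple checklist a risk owner can walk through per countermeasure. A minimal sketch; the prompts are paraphrases for illustration, not the paper’s wording:

```python
# Minimal sketch: the framework's seven harm categories as a review
# checklist. Prompt wording is our paraphrase, not the paper's.
HARM_CATEGORIES = {
    "displacement":      "Does the risk move elsewhere rather than shrink?",
    "insecure norms":    "Does it teach users habits that are unsafe elsewhere?",
    "additional costs":  "Who bears new financial or time costs?",
    "misuse":            "Can the countermeasure itself be abused?",
    "misclassification": "Who gets wrongly flagged or excluded?",
    "amplification":     "Can it make the original harm worse?",
    "disruption":        "What services or practices does it break?",
}

def review(countermeasure: str) -> None:
    print(f"Reviewing: {countermeasure}")
    for harm, prompt in HARM_CATEGORIES.items():
        print(f"  [{harm}] {prompt}")

review("keyword filter for cyberbullying messages")
```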

Security is always a trade-off. I appreciate work that examines the details of that trade-off.

Analyzing IoT Security Best Practices

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2020/06/analyzing_iot_s.html

New research: “Best Practices for IoT Security: What Does That Even Mean?” by Christopher Bellman and Paul C. van Oorschot:

Abstract: Best practices for Internet of Things (IoT) security have recently attracted considerable attention worldwide from industry and governments, while academic research has highlighted the failure of many IoT product manufacturers to follow accepted practices. We explore not the failure to follow best practices, but rather a surprising lack of understanding, and void in the literature, on what (generically) “best practice” means, independent of meaningfully identifying specific individual practices. Confusion is evident from guidelines that conflate desired outcomes with security practices to achieve those outcomes. How do best practices, good practices, and standard practices differ? Or guidelines, recommendations, and requirements? Can something be a best practice if it is not actionable? We consider categories of best practices, and how they apply over the lifecycle of IoT devices. For concreteness in our discussion, we analyze and categorize a set of 1014 IoT security best practices, recommendations, and guidelines from industrial, government, and academic sources. As one example result, we find that about 70% of these practices or guidelines relate to early IoT device lifecycle stages, highlighting the critical position of manufacturers in addressing the security issues in question. We hope that our work provides a basis for the community to build on in order to better understand best practices, identify and reach consensus on specific practices, and then find ways to motivate relevant stakeholders to follow them.

Back in 2017, I catalogued nineteen security and privacy guideline documents for the Internet of Things. Our problem right now isn’t that we don’t know how to secure these devices, it’s that there is no economic or regulatory incentive to do so.

Cryptocurrency Pump and Dump Scams

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2020/06/cryptocurrency_.html

Really interesting research: “An examination of the cryptocurrency pump and dump ecosystem“:

Abstract: The surge of interest in cryptocurrencies has been accompanied by a proliferation of fraud. This paper examines pump and dump schemes. The recent explosion of nearly 2,000 cryptocurrencies in an unregulated environment has expanded the scope for abuse. We quantify the scope of cryptocurrency pump and dump schemes on Discord and Telegram, two popular group-messaging platforms. We joined all relevant Telegram and Discord groups/channels and identified thousands of different pumps. Our findings provide the first measure of the scope of such pumps and empirically document important properties of this ecosystem.
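A naive sketch of what pump detection can look like (this is not the paper’s measurement methodology): flag a coin when its latest price and trading volume both spike far above their recent averages. The series below are hypothetical:

```python
# Naive pump detector: compare the latest tick with the average of the
# preceding ticks on both price and volume. Thresholds and data are
# hypothetical illustrations.
def looks_like_pump(prices, volumes, price_x=1.5, volume_x=5.0):
    base_p = sum(prices[:-1]) / len(prices[:-1])
    base_v = sum(volumes[:-1]) / len(volumes[:-1])
    return prices[-1] > price_x * base_p and volumes[-1] > volume_x * base_v

quiet_coin = ([1.00, 1.01, 0.99, 1.02], [100, 110, 90, 105])
pumped_coin = ([1.00, 1.01, 0.99, 1.80], [100, 110, 90, 2500])

for name, (p, v) in {"quiet": quiet_coin, "pumped": pumped_coin}.items():
    print(f"{name}: pump suspected = {looks_like_pump(p, v)}")
```

The researchers’ harder problem was linking such spikes to the Telegram and Discord messages that announced the pumps, which is what lets them attribute the activity to coordinated groups.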

Eavesdropping on Sound Using Variations in Light Bulbs

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2020/06/eavesdropping_o_9.html

New research is able to recover sound waves in a room by observing minute changes in the room’s light bulbs. This technique works from a distance, even from a building across the street through a window.

Details:

In an experiment using three different telescopes with different lens diameters, from a distance of 25 meters (a little over 82 feet), the researchers were able to capture sound being played in a remote room, including The Beatles’ Let It Be, which was distinguishable enough for Shazam to recognize it, and a speech from President Trump that Google’s speech recognition API could successfully transcribe. With more powerful telescopes and a more sensitive analog-to-digital converter, the researchers believe the eavesdropping distances could be even greater.

It’s not expensive: less than $1,000 worth of equipment is required. And unlike other techniques like bouncing a laser off the window and measuring the vibrations, it’s completely passive.
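The principle can be sketched with synthetic data (this is not the researchers’ pipeline): sound vibrates the bulb, slightly modulating its light output, so a photodiode reading is a large DC brightness level plus a tiny audio term. Stripping the DC level and rescaling recovers the waveform:

```python
# Sketch of the principle with synthetic data: brightness = DC level +
# tiny audio term + sensor noise. Removing the mean and normalizing
# recovers a signal that correlates strongly with the source audio.
import numpy as np

fs = 8000                      # sample rate of the photodiode reading
t = np.arange(fs) / fs         # one second of samples
audio = 0.8 * np.sin(2 * np.pi * 440 * t)   # hypothetical 440 Hz tone

brightness = 1000.0 + 0.05 * audio + np.random.default_rng(1).normal(0, 0.01, fs)

recovered = brightness - brightness.mean()   # strip the DC light level
recovered /= np.abs(recovered).max()         # normalize to [-1, 1]

# Correlation with the original tone shows the audio survives:
corr = np.corrcoef(recovered, audio)[0, 1]
print(f"correlation with source audio: {corr:.3f}")
```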

News articles.

Availability Attacks against Neural Networks

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2020/06/availability_at.html

New research on using specially crafted inputs to slow down machine-learning neural network systems:

Sponge Examples: Energy-Latency Attacks on Neural Networks shows how to find adversarial examples that cause a DNN to burn more energy, take more time, or both. They affect a wide range of DNN applications, from image recognition to natural language processing (NLP). Adversaries might use these examples for all sorts of mischief — from draining mobile phone batteries, through degrading the machine-vision systems on which self-driving cars rely, to jamming cognitive radar.

So far, our most spectacular results are against NLP systems. By feeding them confusing inputs we can slow them down over 100 times. There are already examples in the real world where people pause or stumble when asked hard questions, but we now have a dependable method for generating such examples automatically and at scale. We can also neutralize the performance improvements of accelerators for computer vision tasks, and make them operate at their worst-case performance.
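The search loop behind such examples can be sketched in miniature. The “model” below is a stand-in whose cost grows with input length, whereas the real attack measures latency and energy on actual DNNs:

```python
# Sketch of the sponge-example search loop: mutate inputs and keep
# the ones that raise measured inference cost. The model is a toy.
import random
import time

def fake_model(text: str) -> float:
    """Stand-in for a deployed model: per-token work grows with input size."""
    start = time.perf_counter()
    for _ in range(20_000 * len(text.split())):
        pass  # simulated per-token computation
    return time.perf_counter() - start

def mutate(text: str, vocab=("foo", "bar", "unlikely", "zyx")) -> str:
    """Insert a random word at a random position."""
    words = text.split()
    words.insert(random.randrange(len(words) + 1), random.choice(vocab))
    return " ".join(words)

candidate = "a short query"
cost = fake_model(candidate)
for _ in range(50):  # keep any mutation that raises measured latency
    trial = mutate(candidate)
    trial_cost = fake_model(trial)
    if trial_cost > cost:
        candidate, cost = trial, trial_cost

print(f"{len(candidate.split())}-word input, cost {cost * 1e3:.2f} ms")
```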

The paper.

Security Analysis of the Democracy Live Online Voting System

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2020/06/security_analys_7.html

New research: “Security Analysis of the Democracy Live Online Voting System“:

Abstract: Democracy Live’s OmniBallot platform is a web-based system for blank ballot delivery, ballot marking, and (optionally) online voting. Three states — Delaware, West Virginia, and New Jersey — recently announced that they will allow certain voters to cast votes online using OmniBallot, but, despite the well established risks of Internet voting, the system has never been the subject of a public, independent security review.

We reverse engineered the client-side portion of OmniBallot, as used in Delaware, in order to detail the system’s operation and analyze its security. We find that OmniBallot uses a simplistic approach to Internet voting that is vulnerable to vote manipulation by malware on the voter’s device and by insiders or other attackers who can compromise Democracy Live, Amazon, Google, or Cloudflare. In addition, Democracy Live, which appears to have no privacy policy, receives sensitive personally identifiable information — including the voter’s identity, ballot selections, and browser fingerprint — that could be used to target political ads or disinformation campaigns. Even when OmniBallot is used to mark ballots that will be printed and returned in the mail, the software sends the voter’s identity and ballot choices to Democracy Live, an unnecessary security risk that jeopardizes the secret ballot. We recommend changes to make the platform safer for ballot delivery and marking. However, we conclude that using OmniBallot for electronic ballot return represents a severe risk to election security and could allow attackers to alter election results without detection.

News story.

EDITED TO ADD: This post has been translated into Portuguese.

New Research: "Privacy Threats in Intimate Relationships"

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2020/06/new_research_pr.html

I just published a new paper with Karen Levy of Cornell: “Privacy Threats in Intimate Relationships.”

Abstract: This article provides an overview of intimate threats: a class of privacy threats that can arise within our families, romantic partnerships, close friendships, and caregiving relationships. Many common assumptions about privacy are upended in the context of these relationships, and many otherwise effective protective measures fail when applied to intimate threats. Those closest to us know the answers to our secret questions, have access to our devices, and can exercise coercive power over us. We survey a range of intimate relationships and describe their common features. Based on these features, we explore implications for both technical privacy design and policy, and offer design recommendations for ameliorating intimate privacy risks.

This is an important issue that has gotten much too little attention in the cybersecurity community.

Password Changing After a Breach

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2020/06/password_changi.html

This study shows that most people don’t change their passwords after a breach, and that those who do often change them to weaker passwords.

Abstract: To protect against misuse of passwords compromised in a breach, consumers should promptly change affected passwords and any similar passwords on other accounts. Ideally, affected companies should strongly encourage this behavior and have mechanisms in place to mitigate harm. In order to make recommendations to companies about how to help their users perform these and other security-enhancing actions after breaches, we must first have some understanding of the current effectiveness of companies’ post-breach practices. To study the effectiveness of password-related breach notifications and practices enforced after a breach, we examine — based on real-world password data from 249 participants — whether and how constructively participants changed their passwords after a breach announcement.

Of the 249 participants, 63 had accounts on breached domains; only 33% of the 63 changed their passwords and only 13% (of 63) did so within three months of the announcement. New passwords were on average 1.3× stronger than old passwords (when comparing log10-transformed strength), though most were weaker or of equal strength. Concerningly, new passwords were overall more similar to participants’ other passwords, and participants rarely changed passwords on other sites even when these were the same or similar to their password on the breached domain. Our results highlight the need for more rigorous password-changing requirements following a breach and more effective breach notifications that deliver comprehensive advice.
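The two measurements the abstract describes, log10-transformed strength and old/new similarity, can be sketched with naive stand-ins. The study used a proper strength meter; the estimator below is deliberately crude, and the password pair is hypothetical:

```python
# Naive stand-ins for the study's two measurements: a log10
# guess-count estimate and a string-similarity score.
import difflib
import math

def naive_log10_guesses(pw: str) -> float:
    """Crude strength proxy: log10(charset_size ** length)."""
    charset = 0
    charset += 26 if any(c.islower() for c in pw) else 0
    charset += 26 if any(c.isupper() for c in pw) else 0
    charset += 10 if any(c.isdigit() for c in pw) else 0
    charset += 32 if any(not c.isalnum() for c in pw) else 0
    return len(pw) * math.log10(max(charset, 1))

old_pw, new_pw = "Summer2019!", "Summer2020!"     # hypothetical pair

strength_gain = naive_log10_guesses(new_pw) - naive_log10_guesses(old_pw)
similarity = difflib.SequenceMatcher(None, old_pw, new_pw).ratio()

print(f"log10 strength change: {strength_gain:+.2f}")
print(f"old/new similarity: {similarity:.2f}  (1.0 = identical)")
```

This toy pair shows the pattern the study found: a “changed” password that gains no strength and stays highly similar to the old one.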

News article.

EDITED TO ADD (6/2): Another news article. Slashdot thread.

Denmark, Sweden, Germany, the Netherlands and France SIGINT Alliance

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2020/05/denmark_sweden_.html

This paper describes a SIGINT and code-breaking alliance between Denmark, Sweden, Germany, the Netherlands and France called Maximator:

Abstract: This article is the first to report on the secret European five-partner sigint alliance Maximator that started in the late 1970s. It discloses the name Maximator and provides documentary evidence. The five members of this European alliance are Denmark, Sweden, Germany, the Netherlands, and France. The cooperation involves both signals analysis and crypto analysis. The Maximator alliance has remained secret for almost fifty years, in contrast to its Anglo-Saxon Five-Eyes counterpart. The existence of this European sigint alliance gives a novel perspective on western sigint collaborations in the late twentieth century. The article explains and illustrates, with considerable attention to the cryptographic details, how the five Maximator participants strengthened their effectiveness via the information about rigged cryptographic devices that its German partner provided, via the joint U.S.-German ownership and control of the Swiss producer of cryptographic devices, Crypto AG.

Fooling NLP Systems Through Word Swapping

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2020/04/fooling_nlp_sys.html

MIT researchers have built a system that fools natural-language processing systems by swapping words with synonyms:

The software, developed by a team at MIT, looks for the words in a sentence that are most important to an NLP classifier and replaces them with a synonym that a human would find natural. For example, changing the sentence “The characters, cast in impossibly contrived situations, are totally estranged from reality” to “The characters, cast in impossibly engineered circumstances, are fully estranged from reality” makes no real difference to how we read it. But the tweaks made an AI interpret the sentences completely differently.

The results of this adversarial machine learning attack are impressive:

For example, Google’s powerful BERT neural net was worse by a factor of five to seven at identifying whether reviews on Yelp were positive or negative.

The paper:

Abstract: Machine learning algorithms are often vulnerable to adversarial examples that have imperceptible alterations from the original counterparts but can fool the state-of-the-art models. It is helpful to evaluate or even improve the robustness of these models by exposing the maliciously crafted adversarial examples. In this paper, we present TextFooler, a simple but strong baseline to generate natural adversarial text. By applying it to two fundamental natural language tasks, text classification and textual entailment, we successfully attacked three target models, including the powerful pre-trained BERT, and the widely used convolutional and recurrent neural networks. We demonstrate the advantages of this framework in three ways: (1) effective — it outperforms state-of-the-art attacks in terms of success rate and perturbation rate, (2) utility-preserving — it preserves semantic content and grammaticality, and remains correctly classified by humans, and (3) efficient — it generates adversarial text with computational complexity linear in the text length.
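The greedy loop the article describes, rank words by importance and then swap in synonyms, can be sketched in miniature. The classifier and synonym table below are toy stand-ins, not TextFooler’s models:

```python
# Toy version of the greedy word-swapping attack: rank words by how
# much the score moves when each is deleted, then substitute synonyms
# for the most important ones to shift the classifier's score.
def toy_sentiment(text: str) -> float:
    """Stand-in classifier: sums sentiment weights of cue words."""
    cues = {"contrived": -1.0, "estranged": -1.0, "totally": -0.5}
    return sum(cues.get(w.strip(",.").lower(), 0.0) for w in text.split())

SYNONYMS = {"contrived": "engineered", "totally": "fully"}  # hypothetical table

def attack(sentence: str) -> str:
    words = sentence.split()
    base = toy_sentiment(sentence)
    # Importance of a word = how much the score moves when it is deleted.
    ranked = sorted(
        range(len(words)),
        key=lambda i: abs(toy_sentiment(" ".join(words[:i] + words[i + 1:])) - base),
        reverse=True,
    )
    for i in ranked:  # greedily swap the most important words first
        w = words[i].strip(",.").lower()
        if w in SYNONYMS:
            words[i] = SYNONYMS[w]
    return " ".join(words)

s = "The characters, cast in impossibly contrived situations, are totally estranged"
print(toy_sentiment(s), "->", toy_sentiment(attack(s)))  # -2.5 -> -1.0
```

The real attack replaces the cue-word dictionary with queries to the target model and the synonym table with embedding-based nearest neighbors, but the greedy structure is the same.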

Friday Squid Blogging: On Squid Communication

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2020/04/friday_squid_bl_723.html

They can communicate using bioluminescent flashes:

New research published this week in Proceedings of the National Academy of Sciences presents evidence for a previously unknown semantic-like ability in Humboldt squid. What’s more, these squid can enhance the visibility of their skin patterns by using their bodies as a kind of backlight, which may allow them to convey messages of surprising complexity, according to the new paper. Together, this could explain how Humboldt squid — and possibly other closely related squid — are able to facilitate group behaviors in light-restricted environments, such as evading predators, finding places to forage, signaling that it’s time to feed, and deciding who gets priority at the dinner table, among other things.

As usual, you can also use this squid post to talk about the security stories in the news that I haven’t covered.

Read my blog posting guidelines here.

Hacking Voice Assistants with Ultrasonic Waves

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2020/03/hacking_voice_a_1.html

I previously wrote about hacking voice assistants with lasers. Turns out you can do much the same thing with ultrasonic waves:

Voice assistants — the demo targeted Siri, Google Assistant, and Bixby — are designed to respond when they detect the owner’s voice after noticing a trigger phrase such as ‘Ok, Google’.

Ultimately, commands are just sound waves, which other researchers have already shown can be emulated using ultrasonic waves which humans can’t hear, providing an attacker has a line of sight on the device and the distance is short.

What SurfingAttack adds to this is the ability to send the ultrasonic commands through a solid glass or wood table on which the smartphone was sitting using a circular piezoelectric disc connected to its underside.

Although the distance was only 43cm (17 inches), hiding the disc under a surface represents a more plausible, easier-to-conceal attack method than previous techniques.
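The signal involved can be sketched with synthetic data (this is not the SurfingAttack implementation): the voice command is amplitude-modulated onto an ultrasonic carrier, and nonlinearity in the microphone demodulates the envelope back into the audible band:

```python
# Synthetic sketch of the attack signal: an audible command riding on
# an inaudible ultrasonic carrier via amplitude modulation.
import numpy as np

fs = 192_000                                   # sample rate covering ultrasound
t = np.arange(int(0.01 * fs)) / fs             # 10 ms of signal
command = 0.5 * np.sin(2 * np.pi * 300 * t)    # stand-in for a voice command
carrier_hz = 25_000                            # above human hearing

# Amplitude modulation: humans can't hear the result directly.
modulated = (1.0 + command) * np.sin(2 * np.pi * carrier_hz * t)

# A square-law nonlinearity (crude microphone model) produces a baseband
# term proportional to the command, which low-pass filtering would isolate.
demodulated = modulated ** 2
print(f"peak modulated amplitude: {modulated.max():.2f}")
```

SurfingAttack’s contribution is the delivery mechanism, coupling this kind of signal through the table via a piezoelectric disc, rather than the modulation itself.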

Research paper. Demonstration video.