Tag Archives: academicpapers

Privacy and Security of Data at Universities

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2018/11/privacy_and_sec.html

Interesting paper: “Open Data, Grey Data, and Stewardship: Universities at the Privacy Frontier,” by Christine Borgman:

Abstract: As universities recognize the inherent value in the data they collect and hold, they encounter unforeseen challenges in stewarding those data in ways that balance accountability, transparency, and protection of privacy, academic freedom, and intellectual property. Two parallel developments in academic data collection are converging: (1) open access requirements, whereby researchers must provide access to their data as a condition of obtaining grant funding or publishing results in journals; and (2) the vast accumulation of “grey data” about individuals in their daily activities of research, teaching, learning, services, and administration. The boundaries between research and grey data are blurring, making it more difficult to assess the risks and responsibilities associated with any data collection. Many sets of data, both research and grey, fall outside privacy regulations such as HIPAA, FERPA, and PII. Universities are exploiting these data for research, learning analytics, faculty evaluation, strategic decisions, and other sensitive matters. Commercial entities are besieging universities with requests for access to data or for partnerships to mine them. The privacy frontier facing research universities spans open access practices, uses and misuses of data, public records requests, cyber risk, and curating data for privacy protection. This Article explores the competing values inherent in data stewardship and makes recommendations for practice by drawing on the pioneering work of the University of California in privacy and information security, data governance, and cyber risk.

Security of Solid-State-Drive Encryption

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2018/11/security_of_sol.html

Interesting research: “Self-encrypting deception: weaknesses in the encryption of solid state drives (SSDs)“:

Abstract: We have analyzed the hardware full-disk encryption of several SSDs by reverse engineering their firmware. In theory, the security guarantees offered by hardware encryption are similar to or better than software implementations. In reality, we found that many hardware implementations have critical security weaknesses, for many models allowing for complete recovery of the data without knowledge of any secret. BitLocker, the encryption software built into Microsoft Windows, will rely exclusively on hardware full-disk encryption if the SSD advertises support for it. Thus, for these drives, data protected by BitLocker is also compromised. This challenges the view that hardware encryption is preferable over software encryption. We conclude that one should not rely solely on hardware encryption offered by SSDs.
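The core flaw the researchers found on several drives is structural: the data encryption key (DEK) exists on the device in usable form and is merely gated by a password comparison, rather than being derivable only from the password. The contrast can be sketched as follows (an illustrative model, not any vendor's actual firmware logic; the XOR "wrapping" stands in for a real key-wrap algorithm):

```python
import hashlib
import os

# Broken pattern seen in the paper: the DEK is stored in the clear and a
# password check merely gates access to it, so patching or debugging the
# firmware yields the DEK without any password at all.
def broken_unlock(stored_hash, stored_dek, password):
    if hashlib.sha256(password.encode()).digest() == stored_hash:
        return stored_dek
    return None

# Sound pattern: the DEK only exists wrapped under a key derived from the
# password, so there is no check to bypass.
def derive_kek(password, salt):
    return hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 200_000)

def wrap_dek(dek, password, salt):
    kek = derive_kek(password, salt)
    return bytes(a ^ b for a, b in zip(dek, kek))  # stand-in for real key wrapping

def unwrap_dek(wrapped, password, salt):
    kek = derive_kek(password, salt)
    return bytes(a ^ b for a, b in zip(wrapped, kek))

salt = os.urandom(16)
dek = os.urandom(32)
wrapped = wrap_dek(dek, "correct horse", salt)
assert unwrap_dek(wrapped, "correct horse", salt) == dek
assert unwrap_dek(wrapped, "wrong guess", salt) != dek
```

In the second pattern a wrong password yields garbage rather than a refusal, which is exactly why there is nothing for an attacker to bypass.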

EDITED TO ADD: The NSA is known to attack firmware of SSDs.

EDITED TO ADD (11/13): CERT advisory. And older research.

Detecting Deep Fakes

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2018/10/detecting_deep_.html

This story nicely illustrates the arms race between technologies to create fake videos and technologies to detect fake videos:

These fakes, while convincing if you watch a few seconds on a phone screen, aren’t perfect (yet). They contain tells, like creepily ever-open eyes, from flaws in their creation process. In looking into DeepFake’s guts, Lyu realized that the images that the program learned from didn’t include many with closed eyes (after all, you wouldn’t keep a selfie where you were blinking, would you?). “This becomes a bias,” he says. The neural network doesn’t get blinking. Programs also might miss other “physiological signals intrinsic to human beings,” says Lyu’s paper on the phenomenon, such as breathing at a normal rate, or having a pulse. (Autonomic signs of constant existential distress are not listed.) While this research focused specifically on videos created with this particular software, it is a truth universally acknowledged that even a large set of snapshots might not adequately capture the physical human experience, and so any software trained on those images may be found lacking.

Lyu’s blinking revelation revealed a lot of fakes. But a few weeks after his team put a draft of their paper online, they got anonymous emails with links to deeply faked YouTube videos whose stars opened and closed their eyes more normally. The fake content creators had evolved.
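One common way to operationalize a blink cue (a sketch of the general idea, not necessarily Lyu's exact method) is the eye aspect ratio (EAR) over six eye landmarks: it drops sharply when the eye closes, so a long video with no EAR dips plausibly never blinks.

```python
from math import dist

def eye_aspect_ratio(p1, p2, p3, p4, p5, p6):
    # p1/p4 are the eye corners; p2, p3 are upper-lid points and p6, p5
    # the lower-lid points beneath them. Closed lids shrink the numerator.
    return (dist(p2, p6) + dist(p3, p5)) / (2.0 * dist(p1, p4))

# Hypothetical landmark coordinates for one frame each.
open_eye   = [(0, 0), (1, 2), (2, 2), (3, 0), (2, -2), (1, -2)]
closed_eye = [(0, 0), (1, 0.2), (2, 0.2), (3, 0), (2, -0.2), (1, -0.2)]

assert eye_aspect_ratio(*open_eye) > 0.25    # eye open
assert eye_aspect_ratio(*closed_eye) < 0.25  # a blink frame
```

A detector would threshold the EAR per frame and flag videos whose EAR never crosses the blink threshold at a human-plausible rate.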

I don’t know who will win this arms race, if there ever will be a winner. But the problem with fake videos goes deeper: they affect people even if they are later told that they are fake, and there always will be people that will believe they are real, despite any evidence to the contrary.

China’s Hacking of the Border Gateway Protocol

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2018/10/chinas_hacking_.html

This is a long — and somewhat technical — paper by Chris C. Demchak and Yuval Shavitt about China’s repeated hacking of the Internet Border Gateway Protocol (BGP): “China’s Maxim - Leave No Access Point Unexploited: The Hidden Story of China Telecom’s BGP Hijacking.”

BGP hacking is how large intelligence agencies manipulate Internet routing to make certain traffic easier to intercept. The NSA calls it “network shaping” or “traffic shaping.” Here’s a document from the Snowden archives outlining how the technique works with Yemen.
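The detection idea behind work like this can be sketched very simply: flag announced routes in which a watched network shows up as an unexpected *transit* hop between two endpoints that normally reach each other without it. This toy detector is illustrative only, not the paper's methodology; the AS numbers are real but the "normal" paths are made up.

```python
# AS 4134 is China Telecom's backbone AS, used here purely as an example
# of an AS one might watch for unexpected transit.
SUSPECT_AS = {4134}

def unexpected_transit(as_path, suspect=SUSPECT_AS):
    # Transit means appearing in the middle of the AS path, not as the
    # origin (last element) or the collector's first hop (first element).
    middle = as_path[1:-1]
    return [asn for asn in middle if asn in suspect]

# A plausible direct route (Hurricane Electric -> Level 3 -> destination).
assert unexpected_transit([6939, 3356, 2906]) == []
# The same traffic suddenly detouring through the watched AS is a red flag.
assert unexpected_transit([6939, 4134, 2906]) == [4134]
```

Real detection has to account for legitimate peering changes, prepending, and route leaks, which is why the paper relies on sustained patterns rather than single anomalous announcements.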

EDITED TO ADD (10/27): BoingBoing post.

How DNA Databases Violate Everyone’s Privacy

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2018/10/how_dna_databas.html

If you’re an American of European descent, there’s a 60% chance you can be uniquely identified by public information in DNA databases. This is not information that you have made public; this is information your relatives have made public.

Research paper:

“Identity inference of genomic data using long-range familial searches.”

Abstract: Consumer genomics databases have reached the scale of millions of individuals. Recently, law enforcement authorities have exploited some of these databases to identify suspects via distant familial relatives. Using genomic data of 1.28 million individuals tested with consumer genomics, we investigated the power of this technique. We project that about 60% of the searches for individuals of European-descent will result in a third cousin or closer match, which can allow their identification using demographic identifiers. Moreover, the technique could implicate nearly any US-individual of European-descent in the near future. We demonstrate that the technique can also identify research participants of a public sequencing project. Based on these results, we propose a potential mitigation strategy and policy implications to human subject research.
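The intuition behind the 60% figure can be shown with a back-of-envelope calculation (this is a simplification, not the paper's population-genetics model): you have many relatives at third-cousin-or-closer degree, and only one of them needs to be in a database for you to be findable.

```python
# If you have `num_relatives` relatives of third-cousin-or-closer degree
# and a fraction `coverage` of your demographic group has tested, the
# chance at least one relative is in the database is roughly:
def p_relative_in_db(num_relatives, coverage):
    return 1.0 - (1.0 - coverage) ** num_relatives

# People typically have on the order of a couple hundred third cousins,
# so even ~1% database coverage makes a match likely.
p = p_relative_in_db(num_relatives=190, coverage=0.01)
assert p > 0.8
```

This is why the risk is collective: the probability depends on your relatives' choices, not your own, and it rises quickly as databases grow.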

A good news article.

Detecting Credit Card Skimmers

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2018/10/detecting_credi.html

Interesting research paper: “Fear the Reaper: Characterization and Fast Detection of Card Skimmers“:

Abstract: Payment card fraud results in billions of dollars in losses annually. Adversaries increasingly acquire card data using skimmers, which are attached to legitimate payment devices including point of sale terminals, gas pumps, and ATMs. Detecting such devices can be difficult, and while many experts offer advice in doing so, there exists no large-scale characterization of skimmer technology to support such defenses. In this paper, we perform the first such study based on skimmers recovered by the NYPD’s Financial Crimes Task Force over a 16 month period. After systematizing these devices, we develop the Skim Reaper, a detector which takes advantage of the physical properties and constraints necessary for many skimmers to steal card data. Our analysis shows the Skim Reaper effectively detects 100% of devices supplied by the NYPD. In so doing, we provide the first robust and portable mechanism for detecting card skimmers.

Boing Boing post.

Facebook Is Using Your Two-Factor Authentication Phone Number to Target Advertising

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2018/10/facebook_is_usi.html

From Kashmir Hill:

Facebook is not content to use the contact information you willingly put into your Facebook profile for advertising. It is also using contact information you handed over for security purposes and contact information you didn’t hand over at all, but that was collected from other people’s contact books, a hidden layer of details Facebook has about you that I’ve come to call “shadow contact information.” I managed to place an ad in front of Alan Mislove by targeting his shadow profile. This means that the junk email address that you hand over for discounts or for shady online shopping is likely associated with your account and being used to target you with ads.

Here’s the research paper. Hill again:

They found that when a user gives Facebook a phone number for two-factor authentication or in order to receive alerts about new log-ins to a user’s account, that phone number became targetable by an advertiser within a couple of weeks. So users who want their accounts to be more secure are forced to make a privacy trade-off and allow advertisers to more easily find them on the social network.

Evidence for the Security of PKCS #1 Digital Signatures

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2018/09/evidence_for_th.html

This is interesting research: “On the Security of the PKCS#1 v1.5 Signature Scheme“:

Abstract: The RSA PKCS#1 v1.5 signature algorithm is the most widely used digital signature scheme in practice. Its two main strengths are its extreme simplicity, which makes it very easy to implement, and that verification of signatures is significantly faster than for DSA or ECDSA. Despite the huge practical importance of RSA PKCS#1 v1.5 signatures, providing formal evidence for their security based on plausible cryptographic hardness assumptions has turned out to be very difficult. Therefore the most recent version of PKCS#1 (RFC 8017) even recommends a replacement, the more complex and less efficient scheme RSA-PSS, as it is provably secure and therefore considered more robust. The main obstacle is that RSA PKCS#1 v1.5 signatures use a deterministic padding scheme, which makes standard proof techniques not applicable.

We introduce a new technique that enables the first security proof for RSA-PKCS#1 v1.5 signatures. We prove full existential unforgeability against adaptive chosen-message attacks (EUF-CMA) under the standard RSA assumption. Furthermore, we give a tight proof under the Phi-Hiding assumption. These proofs are in the random oracle model and the parameters deviate slightly from the standard use, because we require a larger output length of the hash function. However, we also show how RSA-PKCS#1 v1.5 signatures can be instantiated in practice such that our security proofs apply.

In order to draw a more complete picture of the precise security of RSA PKCS#1 v1.5 signatures, we also give security proofs in the standard model, but with respect to weaker attacker models (key-only attacks) and based on known complexity assumptions. The main conclusion of our work is that from a provable security perspective RSA PKCS#1 v1.5 can be safely used, if the output length of the hash function is chosen appropriately.

I don’t think the protocol is “provably secure” in the sense that it cannot have any vulnerabilities. What this paper demonstrates is that there are no vulnerabilities under the model of the proof. More importantly, it shows that PKCS #1 v1.5 is as secure as successors like RSA-PSS and RSA Full-Domain Hash.
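The deterministic padding the abstract refers to is the EMSA-PKCS1-v1_5 encoding from RFC 8017 §9.2: a fixed header, a run of 0xFF bytes, and a DER-encoded DigestInfo prefix followed by the hash. A minimal sketch for SHA-256 (encoding only, no RSA operation):

```python
import hashlib

# DER-encoded DigestInfo prefix for SHA-256, per RFC 8017.
SHA256_DIGESTINFO = bytes.fromhex("3031300d060960864801650304020105000420")

def emsa_pkcs1_v15(message, em_len):
    # em_len is the modulus length in bytes (e.g., 256 for RSA-2048).
    t = SHA256_DIGESTINFO + hashlib.sha256(message).digest()
    ps = b"\xff" * (em_len - len(t) - 3)
    if len(ps) < 8:
        raise ValueError("modulus too small for this hash")
    return b"\x00\x01" + ps + b"\x00" + t

em = emsa_pkcs1_v15(b"hello", 256)
assert len(em) == 256
assert em[:2] == b"\x00\x01"
assert em[-32:] == hashlib.sha256(b"hello").digest()
```

Because the same message always yields the same encoded block, there is no randomness for a reduction to work with, which is the obstacle the paper's new proof technique gets around.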

New Findings About Prime Number Distribution Almost Certainly Irrelevant to Cryptography

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2018/09/new_findings_ab.html

Lots of people are e-mailing me about this new result on the distribution of prime numbers. While interesting, it has nothing to do with cryptography. Cryptographers aren’t interested in how to find prime numbers, or even in the distribution of prime numbers. Public-key cryptography algorithms like RSA get their security from the difficulty of factoring large composite numbers that are the product of two prime numbers. That’s completely different.
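The distinction is easy to see with toy numbers (never use primes this small in practice): generating an RSA key needs two primes, but breaking the key needs the attacker to factor their product, and knowing things about how primes are distributed doesn't help with that.

```python
# Toy RSA with tiny primes, purely to illustrate where the security lives.
p, q, e = 61, 53, 17
n = p * q                      # public modulus
phi = (p - 1) * (q - 1)
d = pow(e, -1, phi)            # private exponent

msg = 42
cipher = pow(msg, e, n)
assert pow(cipher, d, n) == msg  # normal decryption works

# An attacker who can factor n recovers the private key immediately:
fp = next(i for i in range(2, n) if n % i == 0)
fq = n // fp
d_attacker = pow(e, -1, (fp - 1) * (fq - 1))
assert pow(cipher, d_attacker, n) == msg
```

With real 2048-bit moduli the brute-force factoring loop above is hopeless, and that gap, not anything about prime distribution, is the hard problem.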

John Mueller and Mark Stewart on the Risks of Terrorism

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2018/08/john_mueller_an.html

Another excellent paper by the Mueller/Stewart team: “Terrorism and Bathtubs: Comparing and Assessing the Risks“:

Abstract: The likelihood that anyone outside a war zone will be killed by an Islamist extremist terrorist is extremely small. In the United States, for example, some six people have perished each year since 9/11 at the hands of such terrorists — vastly smaller than the number of people who die in bathtub drownings. Some argue, however, that the incidence of terrorist destruction is low because counterterrorism measures are so effective. They also contend that terrorism may well become more frequent and destructive in the future as terrorists plot and plan and learn from experience, and that terrorism, unlike bathtubs, provides no benefit and exacts costs far beyond those in the event itself by damagingly sowing fear and anxiety and by requiring policy makers to adopt countermeasures that are costly and excessive. This paper finds these arguments to be wanting. In the process, it concludes that terrorism is rare outside war zones because, to a substantial degree, terrorists don’t exist there. In general, as with rare diseases that kill few, it makes more policy sense to expend limited funds on hazards that inflict far more damage. It also discusses the issue of risk communication for this hazard.

New Ways to Track Internet Browsing

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2018/08/new_ways_to_tra.html

Interesting research on web tracking: “Who Left Open the Cookie Jar? A Comprehensive Evaluation of Third-Party Cookie Policies“:

Abstract: Nowadays, cookies are the most prominent mechanism to identify and authenticate users on the Internet. Although protected by the Same Origin Policy, popular browsers include cookies in all requests, even when these are cross-site. Unfortunately, these third-party cookies enable both cross-site attacks and third-party tracking. As a response to these nefarious consequences, various countermeasures have been developed in the form of browser extensions or even protection mechanisms that are built directly into the browser.

In this paper, we evaluate the effectiveness of these defense mechanisms by leveraging a framework that automatically evaluates the enforcement of the policies imposed to third-party requests. By applying our framework, which generates a comprehensive set of test cases covering various web mechanisms, we identify several flaws in the policy implementations of the 7 browsers and 46 browser extensions that were evaluated. We find that even built-in protection mechanisms can be circumvented by multiple novel techniques we discover. Based on these results, we argue that our proposed framework is a much-needed tool to detect bypasses and evaluate solutions to the exposed leaks. Finally, we analyze the origin of the identified bypass techniques, and find that these are due to a variety of implementation, configuration and design flaws.
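The policies the paper stress-tests boil down to a question a browser answers on every request: given a cookie's SameSite attribute, should it be attached when the request is cross-site? A heavily simplified model of that decision (real browsers handle many more cases, and modern ones also require `Secure` with `SameSite=None`):

```python
def should_send_cookie(same_site, cross_site, top_level_nav):
    same_site = same_site.lower()
    if not cross_site:
        return True            # same-site requests always carry the cookie
    if same_site == "strict":
        return False           # never sent cross-site
    if same_site == "lax":
        return top_level_nav   # sent when the user navigates, not for subresources
    return True                # "none": legacy permissive behavior

# A third-party tracking pixel (an <img> embedded on another site): Lax blocks it.
assert should_send_cookie("lax", cross_site=True, top_level_nav=False) is False
# Clicking a link to the site still carries Lax cookies.
assert should_send_cookie("lax", cross_site=True, top_level_nav=True) is True
# SameSite=None reproduces the behavior the trackers in the paper rely on.
assert should_send_cookie("none", cross_site=True, top_level_nav=False) is True
```

The bypasses the researchers found are, in effect, request types that implementations misclassify in this decision, so the cookie is attached when the policy says it shouldn't be.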

The researchers discovered many new tracking techniques that work despite all existing anonymous browsing tools. These have not yet been seen in the wild, but that will change soon.

Three news articles. BoingBoing post.

Measuring the Rationality of Security Decisions

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2018/08/measuring_the_r.html

Interesting research: “Dancing Pigs or Externalities? Measuring the Rationality of Security Decisions“:

Abstract: Accurately modeling human decision-making in security is critical to thinking about when, why, and how to recommend that users adopt certain secure behaviors. In this work, we conduct behavioral economics experiments to model the rationality of end-user security decision-making in a realistic online experimental system simulating a bank account. We ask participants to make a financially impactful security choice, in the face of transparent risks of account compromise and benefits offered by an optional security behavior (two-factor authentication). We measure the cost and utility of adopting the security behavior via measurements of time spent executing the behavior and estimates of the participant’s wage. We find that more than 50% of our participants made rational (e.g., utility-optimal) decisions, and we find that participants are more likely to behave rationally in the face of higher risk. Additionally, we find that users’ decisions can be modeled well as a function of past behavior (anchoring effects), knowledge of costs, and to a lesser extent, users’ awareness of risks and context (R2=0.61). We also find evidence of endowment effects, as seen in other areas of economic and psychological decision-science literature, in our digital-security setting. Finally, using our data, we show theoretically that a “one-size-fits-all” emphasis on security can lead to market losses, but that adoption by a subset of users with higher risks or lower costs can lead to market gains.
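The rationality criterion the abstract describes can be written down directly (a simplification of the paper's framing, with made-up numbers): adopting 2FA is utility-optimal when the expected loss it prevents exceeds its time cost priced at the user's wage.

```python
def rational_to_adopt(p_compromise, balance, risk_reduction,
                      seconds_per_login, logins, hourly_wage):
    # Expected loss avoided by enabling the security behavior.
    expected_loss_avoided = p_compromise * risk_reduction * balance
    # Cost of the extra authentication time, valued at the user's wage.
    time_cost = (seconds_per_login * logins / 3600) * hourly_wage
    return expected_loss_avoided > time_cost

# High risk of compromise on a sizable balance: 2FA pays for itself.
assert rational_to_adopt(0.10, 1000, 0.9, 20, 50, 15) is True
# Near-zero risk: the time cost dominates and skipping 2FA is "rational."
assert rational_to_adopt(0.001, 1000, 0.9, 20, 50, 15) is False
```

This also makes the paper's closing point concrete: pushing 2FA on everyone imposes the time cost on low-risk users for whom the inequality fails, while targeting high-risk or low-cost users yields a net gain.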

Major Bluetooth Vulnerability

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2018/07/major_bluetooth.html

Bluetooth has a serious security vulnerability:

In some implementations, the elliptic curve parameters are not all validated by the cryptographic algorithm implementation, which may allow a remote attacker within wireless range to inject an invalid public key to determine the session key with high probability. Such an attacker can then passively intercept and decrypt all device messages, and/or forge and inject malicious messages.
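The fix the advisory implies is simple to state: before using a received public key, verify the point actually lies on the agreed curve. A sketch for NIST P-256, checking the curve equation y² = x³ + ax + b (mod p) (standard published parameters; a real implementation must also reject the point at infinity and check the point's order):

```python
# NIST P-256 domain parameters.
P = 2**256 - 2**224 + 2**192 + 2**96 - 1
A = P - 3
B = 0x5AC635D8AA3A93E7B3EBBD55769886BC651D06B0CC53B0F63BCE3C3E27D2604B
GX = 0x6B17D1F2E12C4247F8BCE6E563A440F277037D812DEB33A0F4A13945D898C296
GY = 0x4FE342E2FE1A7F9B8EE7EB4A7C0F9E162BCE33576B315ECECBB6406837BF51F5

def on_curve(x, y):
    # A valid public key must satisfy y^2 = x^3 + ax + b (mod p).
    return (y * y - (x * x * x + A * x + B)) % P == 0

assert on_curve(GX, GY)          # the standard base point is valid
assert not on_curve(GX, GY + 1)  # a tampered point must be rejected
```

Implementations that skip this check accept attacker-chosen invalid points, which is precisely what lets the attacker steer the key exchange onto a weak curve and recover the session key.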

Paper. Website. Three news articles.

This is serious. Update your software now, and try not to think about all of the Bluetooth applications that can’t be updated.

Recovering Keyboard Inputs through Thermal Imaging

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2018/07/recovering_keyb.html

Researchers at the University of California, Irvine, are able to recover user passwords by way of thermal imaging. The tech is pretty straightforward, but it’s interesting to think about the types of scenarios in which it might be pulled off.

Abstract: As a warm-blooded mammalian species, we humans routinely leave thermal residues on various objects with which we come in contact. This includes common input devices, such as keyboards, that are used for entering (among other things) secret information, such as passwords and PINs. Although thermal residue dissipates over time, there is always a certain time window during which thermal energy readings can be harvested from input devices to recover recently entered, and potentially sensitive, information.

To date, there has been no systematic investigation of thermal profiles of keyboards, and thus no efforts have been made to secure them. This serves as our main motivation for constructing a means for password harvesting from keyboard thermal emanations. Specifically, we introduce Thermanator, a new post factum insider attack based on heat transfer caused by a user typing a password on a typical external keyboard. We conduct and describe a user study that collected thermal residues from 30 users entering 10 unique passwords (both weak and strong) on 4 popular commodity keyboards. Results show that entire sets of key-presses can be recovered by non-expert users as late as 30 seconds after initial password entry, while partial sets can be recovered as late as 1 minute after entry. Furthermore, we find that Hunt-and-Peck typists are particularly vulnerable. We also discuss some Thermanator mitigation strategies.

The main take-away of this work is three-fold: (1) using external keyboards to enter (already much-maligned) passwords is even less secure than previously recognized, (2) post factum (planned or impromptu) thermal imaging attacks are realistic, and finally (3) perhaps it is time to either stop using keyboards for password entry, or abandon passwords altogether.

News article.

Traffic Analysis of the LTE Mobile Standard

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2018/07/traffic_analysi.html

Interesting research in using traffic analysis to learn things about encrypted traffic. It’s hard to know how critical these vulnerabilities are. They’re very hard to close without wasting a huge amount of bandwidth.

The active attacks are more interesting.

EDITED TO ADD (7/3): More information.

I have been thinking about this, and now believe the attacks are more serious than I previously wrote.

Conservation of Threat

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2018/06/conservation_of.html

Here’s some interesting research about how we perceive threats: as the environment becomes safer, we manufacture new threats. From an essay about the research:

To study how concepts change when they become less common, we brought volunteers into our laboratory and gave them a simple task: to look at a series of computer-generated faces and decide which ones seem “threatening.” The faces had been carefully designed by researchers to range from very intimidating to very harmless.

As we showed people fewer and fewer threatening faces over time, we found that they expanded their definition of “threatening” to include a wider range of faces. In other words, when they ran out of threatening faces to find, they started calling faces threatening that they used to call harmless. Rather than being a consistent category, what people considered “threats” depended on how many threats they had seen lately.
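The mechanism reads like an adaptive threshold: if the judge's notion of "threatening" tracks the recent mix of stimuli, then making threats scarcer drags the threshold down until mild faces cross it. A toy simulation of that dynamic (illustrative only, not the study's actual model; the scores are made-up threat ratings in [0, 1]):

```python
from statistics import mean

def adaptive_judgments(scores, window=20):
    # Judge each face against the mean of recently seen faces.
    judged, history = [], []
    for s in scores:
        threshold = mean(history[-window:]) if history else 0.5
        judged.append(s > threshold)
        history.append(s)
    return judged

common_threats = [0.9, 0.1] * 50          # threatening faces stay frequent
rare_threats   = [0.9] + [0.1, 0.2] * 50  # threatening faces nearly vanish

# The same mildly off-putting face (score 0.2) is judged harmless while
# threats are common, but "threatening" once real threats become rare.
assert adaptive_judgments(common_threats + [0.2])[-1] is False
assert adaptive_judgments(rare_threats + [0.2])[-1] is True
```

The same drift would apply to any human screener whose baseline is set by what they've seen lately, which is the policy concern the post raises.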

This has a lot of implications in security systems where humans have to make judgments about threat and risk: TSA agents, police noticing “suspicious” activities, “see something say something” campaigns, and so on.

The academic paper.