Tag Archives: threatmodels

Drone Denial-of-Service Attack against Gatwick Airport

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2018/12/drone_denial-of.html

Someone is flying a drone over Gatwick Airport in order to disrupt service:

Chris Woodroofe, Gatwick’s chief operating officer, said on Thursday afternoon there had been another drone sighting which meant it was impossible to say when the airport would reopen.

He told BBC News: “There are 110,000 passengers due to fly today, and the vast majority of those will see cancellations and disruption. We have had within the last hour another drone sighting so at this stage we are not open and I cannot tell you what time we will open.

“It was on the airport, seen by the police and corroborated. So having seen that drone that close to the runway it was unsafe to reopen.”

The economics of this kind of thing aren't in our favor. A drone is cheap. Closing an airport for a day is very expensive.

I don't think we're going to solve this with jammers, or with GPS-enabled drones that refuse to fly over restricted areas. I've seen some technologies that will safely disable drones in flight, but I'm not optimistic about those in the near term. The best defense is probably punitive penalties for anyone doing something like this — enough to discourage others.
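For a sense of what the geofencing approach amounts to, here is a rough sketch of a firmware no-fly check. The coordinates, radius, and function names are illustrative assumptions, not anything from a real autopilot. The structural weakness is visible in the sketch itself: the check runs on hardware the attacker owns, so it can be patched out or fed spoofed GPS coordinates.

```python
# Sketch of a firmware geofence check (illustrative only): refuse to arm
# the motors within a no-fly radius of an airport. Coordinates and radius
# are assumptions, not real regulatory values.
from math import radians, sin, cos, asin, sqrt

GATWICK = (51.1537, -0.1821)   # approximate lat/lon
NO_FLY_RADIUS_KM = 5.0         # assumed exclusion radius

def haversine_km(a, b):
    # great-circle distance between two (lat, lon) points, in km
    lat1, lon1, lat2, lon2 = map(radians, (*a, *b))
    h = sin((lat2 - lat1) / 2) ** 2 \
        + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 6371 * asin(sqrt(h))

def may_arm(position):
    # the whole defense reduces to one comparison the attacker controls
    return haversine_km(position, GATWICK) > NO_FLY_RADIUS_KM

print(may_arm((51.16, -0.18)))  # near the runway -> False
```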

There are a lot of similar security situations, in which the cost to attack is vastly cheaper than 1) the damage caused by the attack, and 2) the cost to defend. I have long believed that this sort of thing represents an existential threat to our society.
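To make the asymmetry concrete, here is a back-of-the-envelope version, with every figure except the 110,000 passengers invented for illustration:

```python
# Rough attacker-cost vs. defender-loss comparison. All numbers except the
# passenger count (from the Gatwick reporting) are illustrative assumptions.
drone_cost = 1_000             # consumer drone, USD (assumed)
passengers_affected = 110_000  # from the Gatwick reporting
loss_per_passenger = 400       # assumed average cost of a disrupted trip, USD

defender_loss = passengers_affected * loss_per_passenger
print(f"Attacker spends: ${drone_cost:,}")
print(f"Defenders lose:  ${defender_loss:,}")
print(f"Ratio:           {defender_loss / drone_cost:,.0f} to 1")
```

However you adjust the assumptions, the ratio stays several orders of magnitude against the defender.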

EDITED TO ADD (12/23): The airport has deployed some anti-drone technology and reopened.

Conservation of Threat

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2018/06/conservation_of.html

Here's some interesting research about how we perceive threats. Basically, as the environment becomes safer, we manufacture new threats. From an essay about the research:

To study how concepts change when they become less common, we brought volunteers into our laboratory and gave them a simple task – to look at a series of computer-generated faces and decide which ones seem “threatening.” The faces had been carefully designed by researchers to range from very intimidating to very harmless.

As we showed people fewer and fewer threatening faces over time, we found that they expanded their definition of “threatening” to include a wider range of faces. In other words, when they ran out of threatening faces to find, they started calling faces threatening that they used to call harmless. Rather than being a consistent category, what people considered “threats” depended on how many threats they had seen lately.

This has a lot of implications for security systems in which humans have to make judgments about threat and risk: TSA agents, police noticing “suspicious” activities, “see something, say something” campaigns, and so on.
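The effect is easy to reproduce in a toy simulation. The adaptive rule below, flagging whatever falls in the top half of recently seen faces, is a stand-in assumption rather than the paper's actual model, but it shows the category expanding as real threats get rarer:

```python
# Toy model of prevalence-induced concept change: an observer who grades
# faces relative to the recent distribution keeps "finding" threats even
# when genuinely threatening faces become rare. The top-half rule is an
# illustrative assumption, not the paper's fitted model.
import random

random.seed(1)

def trial_block(threat_prevalence, n=500):
    # threat scores in [0, 1]; higher means objectively more threatening
    return [random.uniform(0.6, 1.0) if random.random() < threat_prevalence
            else random.uniform(0.0, 0.4)
            for _ in range(n)]

for prevalence in (0.50, 0.25, 0.05):
    faces = trial_block(prevalence)
    threshold = sorted(faces)[len(faces) // 2]  # adaptive: median of what was seen
    flagged = sum(f >= threshold for f in faces)
    print(f"prevalence={prevalence:.2f}  threshold={threshold:.2f}  "
          f"flagged={flagged}/{len(faces)}")
```

The number of faces called threatening barely moves, but the threshold collapses: at 5 percent prevalence the observer is flagging faces that would have been called harmless earlier.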

The academic paper.

The Digital Security Exchange Is Live

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2018/04/the_digital_sec.html

Last year I wrote about the Digital Security Exchange. The project is live:

The DSX works to strengthen the digital resilience of U.S. civil society groups by improving their understanding and mitigation of online threats.

We do this by pairing civil society and social sector organizations with credible and trustworthy digital security experts and trainers who can help them keep their data and networks safe from exposure, exploitation, and attack. We are committed to working with community-based organizations, legal and journalistic organizations, civil rights advocates, local and national organizers, and public and high-profile figures who are working to advance social, racial, political, and economic justice in our communities and our world.

If you are either an organization that needs help or an expert who can provide help, visit their website.

Note: I am on their advisory committee.

Intimate Partner Threat

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2018/03/intimate_partne.html

Princeton's Karen Levy has a good article on computer security and the intimate partner threat:

When you learn that your privacy has been compromised, the common advice is to prevent additional access — delete your insecure account, open a new one, change your password. This advice is such standard protocol for personal security that it’s almost a no-brainer. But in abusive romantic relationships, disconnection can be extremely fraught. For one, it can put the victim at risk of physical harm: If abusers expect digital access and that access is suddenly closed off, it can lead them to become more violent or intrusive in other ways. It may seem cathartic to delete abusive material, like alarming text messages — but if you don’t preserve that kind of evidence, it can make prosecution more difficult. And closing some kinds of accounts, like social networks, to hide from a determined abuser can cut off social support that survivors desperately need. In some cases, maintaining a digital connection to the abuser may even be legally required (for instance, if the abuser and survivor share joint custody of children).

Threats from intimate partners also change the nature of what it means to be authenticated online. In most contexts, access credentials — like passwords and security questions — are intended to insulate your accounts against access from an adversary. But those mechanisms are often completely ineffective for security in intimate contexts: The abuser can compel disclosure of your password through threats of violence and has access to your devices because you’re in the same physical space. In many cases, the abuser might even own your phone — or might have access to your communications data because you share a family plan. Things like security questions are unlikely to be effective tools for protecting your security, because the abuser knows or can guess at intimate details about your life — where you were born, what your first job was, the name of your pet.

Bank Robbery Tactic

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2017/08/bank_robbery_ta.html

This video purports to show a bank robbery in Kiev. The robber first threatens a teller, who basically ignores him because she's behind bullet-proof glass. He then threatens one of her co-workers, who is on his side of the glass. Interesting example of a security system failing for an unexpected reason.

The video is weird, though. The robber seems very unsure of himself, and never really points the gun at anyone or even holds it properly.

Photocopier Security

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2017/01/photocopier_sec.html

A modern photocopier is basically a computer with a scanner and printer attached. This computer has a hard drive, and scans of images are regularly stored on that drive. This means that when a photocopier is thrown away, that hard drive is filled with images of the pages the machine copied over its lifetime. As you might expect, some of those pages will contain sensitive information.

This 2011 report was written by the Inspector General of the National Archives and Records Administration (NARA). It found that the organization did nothing to safeguard its photocopiers.

Our audit found that opportunities exist to strengthen controls to ensure photocopier hard drives are protected from potential exposure. Specifically, we found the following weaknesses.

  • NARA lacks appropriate controls to ensure all photocopiers across the agency are accounted for and that any hard drives residing on these machines are tracked and properly sanitized or destroyed prior to disposal.
  • There are no policies documenting security measures to be taken for photocopiers utilized for general use, nor are there procedures to ensure photocopier hard drives are sanitized or destroyed prior to disposal or at the end of the lease term.
  • Photocopier lease agreements and contracts do not include a “keep disk” or similar clause as required by NARA’s IT Security Methodology for Media Protection Policy version 5.1.

I don’t mean to single this organization out. Pretty much no one thinks about this security threat.
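For reference, the missing control is not exotic. Below is a minimal sketch of an overwrite pass over a drive or a disk image before disposal; the device path is a placeholder, and a plain overwrite is a baseline at best, since it cannot reach sectors the drive has remapped or SSD wear-leveling reserve areas. Built-in secure-erase or sanitize commands, where the drive supports them, are the better tool.

```python
# Minimal sanitization sketch (illustrative): overwrite every byte of a
# device or disk image with random data before disposal. Destructive!
# A single overwrite is a baseline only; it cannot reach sectors the
# drive has remapped or SSD wear-leveling reserve areas.
import os

def overwrite_device(path, chunk=1024 * 1024):
    with open(path, "r+b") as dev:
        dev.seek(0, os.SEEK_END)
        size = dev.tell()        # total bytes to overwrite
        dev.seek(0)
        written = 0
        while written < size:
            n = min(chunk, size - written)
            dev.write(os.urandom(n))
            written += n
        dev.flush()
        os.fsync(dev.fileno())
    return size

# overwrite_device("/dev/sdX")  # placeholder path; triple-check the target
```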

Intellectual Property as National Security

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2016/07/intellectual_pr.html

Interesting research: Debora Halbert, “Intellectual property theft and national security: Agendas and assumptions”:

Abstract: About a decade ago, intellectual property started getting systematically treated as a national security threat to the United States. The scope of the threat is broadly conceived to include hacking, trade secret theft, file sharing, and even foreign students enrolling in American universities. In each case, the national security of the United States is claimed to be at risk, not just its economic competitiveness. This article traces the U.S. government’s efforts to establish and articulate intellectual property theft as a national security issue. It traces the discourse on intellectual property as a security threat and its place within the larger security dialogue of cyberwar and cybersecurity. It argues that the focus on the theft of intellectual property as a security issue helps justify enhanced surveillance and control over the Internet and its future development. Such a framing of intellectual property has consequences for how we understand information exchange on the Internet and for the future of U.S. diplomatic relations around the globe.

EDITED TO ADD (7/6): Preliminary version, no paywall.

The Unfalsifiability of Security Claims

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2016/05/the_unfalsifiab.html

Interesting research paper: Cormac Herley, “Unfalsifiability of security claims”:

There is an inherent asymmetry in computer security: things can be declared insecure by observation, but not the reverse. There is no observation that allows us to declare an arbitrary system or technique secure. We show that this implies that claims of necessary conditions for security (and sufficient conditions for insecurity) are unfalsifiable. This in turn implies an asymmetry in self-correction: while the claim that countermeasures are sufficient is always subject to correction, the claim that they are necessary is not. Thus, the response to new information can only be to ratchet upward: newly observed or speculated attack capabilities can argue a countermeasure in, but no possible observation argues one out. Further, when justifications are unfalsifiable, deciding the relative importance of defensive measures reduces to a subjective comparison of assumptions. Relying on such claims is the source of two problems: once we go wrong we stay wrong and errors accumulate, and we have no systematic way to rank or prioritize measures.

This is both true and not true.

Mostly, it's true. It's true in cryptography, where we can never say that an algorithm is secure. We can either show how it's insecure, or say something like: all of these smart people have spent lots of hours trying to break it, and they can't — but we don't know what a smarter person who spends even more hours analyzing it will come up with. It's true in things like airport security, where we can easily point out insecurities but are unable to similarly demonstrate that some measures are unnecessary. And this does lead to a ratcheting up of security, in the absence of constraints like budget or processing speed. It's easier to demand that everyone take off their shoes for special screening, or that we add another four rounds to the cipher, than to argue the reverse.

But it’s not entirely true. It’s difficult, but we can analyze the cost-effectiveness of different security measures. We can compare them with each other. We can make estimations and decisions and optimizations. It’s just not easy, and often it’s more of an art than a science. But all is not lost.
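Here is the shape of that comparison, ranking countermeasures by expected annual loss avoided per dollar spent. Every incident rate, impact, and cost below is invented for illustration; the point is only that the ranking is computable once you commit to estimates:

```python
# Crude cost-effectiveness ranking of countermeasures. All figures are
# invented for illustration; real estimates are hard, which is the point
# of the "more art than science" caveat above.
measures = {
    # name: (incidents avoided per year, loss per incident USD, annual cost USD)
    "full-disk encryption": (0.5, 200_000, 30_000),
    "extra cipher rounds":  (0.0001, 1_000_000, 50_000),
    "shoe screening":       (0.001, 10_000_000, 2_000_000),
}

for name, (rate, impact, cost) in sorted(
        measures.items(), key=lambda kv: -(kv[1][0] * kv[1][1]) / kv[1][2]):
    benefit = rate * impact  # expected annual loss avoided
    print(f"{name:22s} benefit=${benefit:>10,.0f}  "
          f"cost=${cost:>10,.0f}  benefit/cost={benefit / cost:.3f}")
```

The estimates are subjective, as Herley says, but making them explicit at least exposes which assumptions the ranking turns on.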

Still, a very good paper and one worth reading.