Tag Archives: cybersecurity

Hacking the Vote

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2016/08/hacking_the_vot.html

Russia has attacked the U.S. in cyberspace in an attempt to influence our national election, many experts have concluded. We need to take this national security threat seriously and both respond and defend, despite the partisan nature of this particular attack.

There is virtually no debate about that, either from the technical experts who analyzed the attack last month or from the FBI, which is analyzing it now. The hackers have already released DNC emails and voicemails, and promise more data dumps.

While their motivation remains unclear, they could continue to attack our election from now to November — and beyond.

Like everything else in society, elections have gone digital. And just as we’ve seen cyberattacks affecting all aspects of society, we’re going to see them affecting elections as well.

What happened to the DNC is an example of organizational doxing — the publishing of private information — an increasingly popular tactic against both government and private organizations. There are other ways to influence elections: denial-of-service attacks against candidate and party networks and websites, attacks against campaign workers and donors, attacks against voter rolls or election agencies, hacks of the candidate websites and social media accounts, and — the one that scares me the most — manipulation of our highly insecure but increasingly popular electronic voting machines.

On the one hand, this attack is a standard intelligence gathering operation, something the NSA does against political targets all over the world and other countries regularly do to us. The only thing different between this attack and the more common Chinese and Russian attacks against our government networks is that the Russians apparently decided to publish selected pieces of what they stole in an attempt to influence our election, and to use Wikileaks as a way to both hide their origin and give them a veneer of respectability.

All of the attacks listed above can be perpetrated by other countries and by individuals as well. They’ve been done in elections in other countries. They’ve been done in other contexts. The Internet broadly distributes power, and what was once the sole purview of nation states is now in the hands of the masses. We’re living in a world where disgruntled people with the right hacking skills can influence our elections, wherever they are in the world.

The Snowden documents have shown the world how aggressive our own intelligence agency is in cyberspace. But despite all of the policy analysis that has gone into our own national cybersecurity, we seem perpetually taken by surprise when we are attacked. While foreign interference in national elections isn’t new, and is something the U.S. has repeatedly done, electronic interference is a different animal.

The Obama Administration is considering how to respond, but politics will get in the way. Were this an attack against a popular Internet company, or a piece of our physical infrastructure, we would all be together in response. But because these attacks affect one political party, the other party benefits. Even worse, the benefited candidate is actively inviting more foreign attacks against his opponent, though he now says he was just being sarcastic. Any response from the Administration or the FBI will be viewed through this partisan lens, especially because the President is a Democrat.

We need to rise above that. These threats are real and they affect us all, regardless of political affiliation. That this particular attack targeted the DNC is no indication of who the next attack might target. We need to make it clear to the world that we will not accept interference in our political process, whether by foreign countries or lone hackers.

However we respond to this act of aggression, we also need to increase the security of our election systems against all threats — and quickly.

We tend to underestimate threats that haven’t happened — we discount them as “theoretical” — and overestimate threats that have happened at least once. The terrorist attacks of 9/11 are a showcase example of that: Administration officials ignored all the warning signs, and then drastically overreacted after the fact. These Russian attacks against our voting system have happened. And they will happen again, unless we take action.

If a foreign country attacked U.S. critical infrastructure, we would respond as a nation against the threat. But if that attack falls along political lines, the response is more complicated. It shouldn’t be. This is a national security threat against our democracy, and needs to be treated as such.

This essay previously appeared on CNN.com.

Real-World Security and the Internet of Things

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2016/07/real-world_secu.html

Disaster stories involving the Internet of Things are all the rage. They feature cars (both driven and driverless), the power grid, dams, and tunnel ventilation systems. A particularly vivid and realistic one, near-future fiction published last month in New York Magazine, described a cyberattack on New York that involved hacking of cars, the water system, hospitals, elevators, and the power grid. In these stories, thousands of people die. Chaos ensues. While some of these scenarios overhype the mass destruction, the individual risks are all real. And traditional computer and network security isn’t prepared to deal with them.

Classic information security is a triad: confidentiality, integrity, and availability. You’ll see it called “CIA,” which admittedly is confusing in the context of national security. But basically, the three things I can do with your data are steal it (confidentiality), modify it (integrity), or prevent you from getting it (availability).

So far, Internet threats have largely been about confidentiality. These can be expensive; one survey estimated that data breaches cost an average of $3.8 million each. They can be embarrassing, as in the theft of celebrity photos from Apple’s iCloud in 2014 or the Ashley Madison breach in 2015. They can be damaging, as when the government of North Korea stole tens of thousands of internal documents from Sony or when hackers stole data about 83 million customer accounts from JPMorgan Chase, both in 2014. They can even affect national security, as in the case of the Office of Personnel Management data breach by — presumably — China in 2015.

On the Internet of Things, integrity and availability threats are much worse than confidentiality threats. It’s one thing if your smart door lock can be eavesdropped upon to know who is home. It’s another thing entirely if it can be hacked to allow a burglar to open the door — or prevent you from opening your door. A hacker who can deny you control of your car, or take over control, is much more dangerous than one who can eavesdrop on your conversations or track your car’s location.

With the advent of the Internet of Things and cyber-physical systems in general, we’ve given the Internet hands and feet: the ability to directly affect the physical world. What used to be attacks against data and information have become attacks against flesh, steel, and concrete.

Today’s threats include hackers crashing airplanes by hacking into computer networks, and remotely disabling cars, either when they’re turned off and parked or while they’re speeding down the highway. We’re worried about manipulated counts from electronic voting machines, frozen water pipes through hacked thermostats, and remote murder through hacked medical devices. The possibilities are pretty literally endless. The Internet of Things will allow for attacks we can’t even imagine.

The increased risks come from three things: software control of systems, interconnections between systems, and automatic or autonomous systems. Let’s look at them in turn:

Software Control. The Internet of Things is a result of everything turning into a computer. This gives us enormous power and flexibility, but it brings insecurities with it as well. As more things come under software control, they become vulnerable to all the attacks we’ve seen against computers. But because many of these things are both inexpensive and long-lasting, many of the patch and update systems that work with computers and smartphones won’t work. Right now, the only way to patch most home routers is to throw them away and buy new ones. And the security that comes from replacing your computer and phone every few years won’t work with your refrigerator and thermostat: on the average, you replace the former every 15 years, and the latter approximately never. A recent Princeton survey found 500,000 insecure devices on the Internet. That number is about to explode.

Interconnections. As these systems become interconnected, vulnerabilities in one lead to attacks against others. Already we’ve seen Gmail accounts compromised through vulnerabilities in Samsung smart refrigerators, hospital IT networks compromised through vulnerabilities in medical devices, and Target Corporation hacked through a vulnerability in its HVAC system. Systems are filled with externalities that affect other systems in unforeseen and potentially harmful ways. What might seem benign to the designers of a particular system becomes harmful when it’s combined with some other system. Vulnerabilities in one system cascade into others, and the result is a vulnerability that no one saw coming and no one bears responsibility for fixing. The Internet of Things will make exploitable vulnerabilities much more common. It’s simple mathematics: n interacting systems produce n(n-1)/2 pairwise interactions. If 100 systems are all interacting with each other, that’s about 5,000 interactions and 5,000 potential vulnerabilities resulting from those interactions. If 300 systems are all interacting with each other, that’s about 45,000 interactions. 1,000 systems: about 500,000 interactions. Most of them will be benign or uninteresting, but some of them will be very damaging.
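A minimal sketch of that arithmetic, using Python’s standard-library combinatorics (the figures are the essay’s; the code is illustrative):

    from math import comb

    # Every unordered pair of interacting systems is a potential
    # point of failure, so n systems yield n*(n-1)/2 interactions.
    for n in (100, 300, 1000):
        print(f"{n:>5} systems -> {comb(n, 2):>7,} pairwise interactions")

    # Output:
    #   100 systems ->   4,950 pairwise interactions (about 5,000)
    #   300 systems ->  44,850 pairwise interactions (about 45,000)
    #  1000 systems -> 499,500 pairwise interactions (about 500,000)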

Autonomy. Increasingly, our computer systems are autonomous. They buy and sell stocks, turn the furnace on and off, regulate electricity flow through the grid, and — in the case of driverless cars — automatically pilot multi-ton vehicles to their destinations. Autonomy is great for all sorts of reasons, but from a security perspective it means that attacks can take effect immediately, automatically, and ubiquitously. The more we remove humans from the loop, the faster attacks can do their damage and the more we lose our ability to rely on actual smarts to notice something is wrong before it’s too late.

We’re building systems that are increasingly powerful, and increasingly useful. The necessary side effect is that they are increasingly dangerous. A single vulnerability forced Chrysler to recall 1.4 million vehicles in 2015. We’re used to computers being attacked at scale — think of the large-scale virus infections from the last decade — but we’re not prepared for this happening to everything else in our world.

Governments are taking notice. Last year, both Director of National Intelligence James Clapper and NSA Director Mike Rogers testified before Congress, warning of these threats. They both believe we’re vulnerable.

This is how it was phrased in the DNI’s 2015 Worldwide Threat Assessment: “Most of the public discussion regarding cyber threats has focused on the confidentiality and availability of information; cyber espionage undermines confidentiality, whereas denial-of-service operations and data-deletion attacks undermine availability. In the future, however, we might also see more cyber operations that will change or manipulate electronic information in order to compromise its integrity (i.e. accuracy and reliability) instead of deleting it or disrupting access to it. Decision-making by senior government officials (civilian and military), corporate executives, investors, or others will be impaired if they cannot trust the information they are receiving.”

The DNI 2016 threat assessment included something similar: “Future cyber operations will almost certainly include an increased emphasis on changing or manipulating data to compromise its integrity (i.e., accuracy and reliability) to affect decision making, reduce trust in systems, or cause adverse physical effects. Broader adoption of IoT devices and AI — in settings such as public utilities and healthcare — will only exacerbate these potential effects.”

Security engineers are working on technologies that can mitigate much of this risk, but many solutions won’t be deployed without government involvement. This is not something that the market can solve. Like data privacy, the risks and solutions are too technical for most people and organizations to understand; companies are motivated to hide the insecurity of their own systems from their customers, their users, and the public; the interconnections can make it impossible to connect data breaches with resultant harms; and the interests of the companies often don’t match the interests of the people.

Governments need to play a larger role: setting standards, policing compliance, and implementing solutions across companies and networks. And while the White House Cybersecurity National Action Plan says some of the right things, it doesn’t nearly go far enough, because so many of us are phobic of any government-led solution to anything.

The next president will probably be forced to deal with a large-scale Internet disaster that kills multiple people. I hope he or she responds with both the recognition of what government can do that industry can’t, and the political will to make it happen.

This essay previously appeared on Vice Motherboard.

BoingBoing post.

Report on the Vulnerabilities Equities Process

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2016/07/report_on_the_v.html

I have written before on the vulnerabilities equities process (VEP): the system by which the US government decides whether to disclose and fix a computer vulnerability or keep it secret and use it offensively. Ari Schwartz and Bob Knake, both former Directors for Cybersecurity Policy at the White House National Security Council, have written a report describing the process as we know it, with policy recommendations for improving it.

Basically, their recommendations are focused on improving the transparency, oversight, and accountability (three things I repeatedly recommend) of the process. In summary:

  • The President should issue an Executive Order mandating government-wide compliance with the VEP.
  • Make the general criteria used to decide whether or not to disclose a vulnerability public.
  • Clearly define the VEP.
  • Make sure any undisclosed vulnerabilities are reviewed periodically.
  • Ensure that the government has the right to disclose any vulnerabilities it purchases.
  • Transfer oversight of the VEP from the NSA to the DHS.
  • Issue an annual report on the VEP.
  • Expand Congressional oversight of the VEP.
  • Mandate oversight by other independent bodies inside the Executive Branch.
  • Expand funding for both offensive and defensive vulnerability research.

These all seem like good ideas to me. This is a complex issue, one I wrote about in Data and Goliath (pages 146-50), and one that’s only going to get more important in the Internet of Things.

News article.

Intellectual Property as National Security

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2016/07/intellectual_pr.html

Interesting research: Debora Halbert, “Intellectual property theft and national security: Agendas and assumptions”:

Abstract: About a decade ago, intellectual property started getting systematically treated as a national security threat to the United States. The scope of the threat is broadly conceived to include hacking, trade secret theft, file sharing, and even foreign students enrolling in American universities. In each case, the national security of the United States is claimed to be at risk, not just its economic competitiveness. This article traces the U.S. government’s efforts to establish and articulate intellectual property theft as a national security issue. It traces the discourse on intellectual property as a security threat and its place within the larger security dialogue of cyberwar and cybersecurity. It argues that the focus on the theft of intellectual property as a security issue helps justify enhanced surveillance and control over the Internet and its future development. Such a framing of intellectual property has consequences for how we understand information exchange on the Internet and for the future of U.S. diplomatic relations around the globe.

EDITED TO ADD (7/6): Preliminary version, no paywall.

More on the Going Dark Debate

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2016/05/more_on_the_goi.html

Lawfare is turning out to be the go-to blog for policy wonks about various government debates on cybersecurity. There are two good posts this week on the Going Dark debate.

The first is from those of us who wrote the “Keys Under Doormats” paper last year, criticizing the concept of backdoors and key escrow. We were responding to a half-baked proposal on how to give the government access without causing widespread insecurity, and we pointed out where almost all of these sorts of proposals fall short (a toy sketch of the first failure mode follows the list):

1. Watch for systems that rely on a single powerful key or a small set of them.

2. Watch for systems using high-value keys over and over and still claiming not to increase risk.

3. Watch for the claim that the abstract algorithm alone is the measure of system security.

4. Watch for the assumption that scaling anything on the global Internet is easy.

5. Watch for the assumption that national borders are not a factor.

6. Watch for the assumption that human rights and the rule of law prevail throughout the world.
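To make the first red flag concrete, here is a minimal toy sketch (my own illustration, not from the paper) of how a single powerful escrow key concentrates risk. It uses the Python cryptography package’s Fernet primitive as a stand-in for a real protocol: every per-message key is also wrapped for the escrow agent, so one key compromise retroactively exposes all traffic.

    from cryptography.fernet import Fernet

    # One long-lived escrow key shared across the whole system: red flag #1.
    escrow_key = Fernet.generate_key()
    escrow = Fernet(escrow_key)

    def send(plaintext: bytes):
        """Encrypt a message with a fresh key, and wrap that key for escrow."""
        msg_key = Fernet.generate_key()            # one-time key per message
        ciphertext = Fernet(msg_key).encrypt(plaintext)
        wrapped_key = escrow.encrypt(msg_key)      # escrow agent's copy of the key
        return ciphertext, wrapped_key

    traffic = [send(m) for m in (b"wire the funds", b"meet at noon")]

    # If the single escrow key ever leaks, ALL past traffic decrypts at once.
    attacker = Fernet(escrow_key)
    for ciphertext, wrapped_key in traffic:
        recovered_key = attacker.decrypt(wrapped_key)
        print(Fernet(recovered_key).decrypt(ciphertext))

Without the escrow copy, compromising one per-message key exposes one message; the escrow design turns a single secret into a skeleton key for everything.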

The second is by Susan Landau, and is a response to the ODNI’s response to the “Don’t Panic” report. Our original report said basically that the FBI wasn’t going dark and that surveillance information is everywhere. At a Senate hearing, Sen. Wyden requested that the Office of the Director of National Intelligence respond to the report. It did — not very well, honestly — and Landau responded to that response. She pointed out that there really wasn’t much disagreement: the points it claimed to take issue with were actually points we had made and agreed with.

In the end, the ODNI’s response to our report leaves me somewhat confused. The reality is that the only strong disagreement seems to be with an exaggerated view of one finding. It almost appears as if ODNI is using the Harvard report as an opportunity to say, “Widespread use of encryption will make our work life more difficult.” Of course it will. Widespread use of encryption will also help prevent some of the cybersecurity exploits and attacks we have been experiencing over the last decade. The ODNI letter ignored that issue.

Captain America Civil War — it’s us

Post Syndicated from Robert Graham original http://blog.erratasec.com/2016/03/captain-america-civil-war-its-us.html

The next Marvel movie is Captain America: Civil War (May 6, 2016). The plot is this: after the Avengers keep blowing things up, there is pushback demanding accountability. Government should decide when to call in the Avengers, and superhumans should be forced to register with the government. Iron Man is pro-accountability, as you’ve seen his story arc evolve toward this point in the movies. Captain America is anti-accountability.

This story arc is us, in cybersecurity. Last year, Charlie Miller and Chris Valasek proved they could, through the “Internet”, remotely hack into and control a car driving down the freeway. In the video, we see a frightened reporter as the engine stalls in freeway traffic. Should researchers be able to probe cars, medical equipment, and IoT devices while accountable to nobody but themselves? Or should they be accountable to the public, under rules set up by the government?

This story is about us personally, too. In cyberspace, many of us have superhuman powers. Should we be free to do whatever we want, without accountability, or should we be forced to register with the government, so they can watch us? For example, I scan the Internet (the entire Internet) with relative impunity. This is what I tweeted when creating my masscan tool, an apt analogy:

“I’ve been totally tonystartking the code for the past week (think Iron man, working in the basement, only with software code).” — Rob Graham ❄️ (@ErrataRob) March 26, 2013

Finally, this is related to the #FBIvApple debate on crypto backdoors. Should law enforcement be able to get into all our electronics when they have a warrant upon probable cause? Or should citizens be able to encrypt their data with impunity, so that nobody (not even the NSA codebreakers) can read it?

I’m totally #TeamCap on this one, as most of you know. It’s car companies and medical device manufacturers who should be held accountable for defects. They evade responsibility because they can pay for government lobbyists. Only a free security research community will ever hold them accountable. Similarly, as Snowden showed, “warrants” are not enough to hold the government and law enforcement accountable, and thus unfettered crypto must be a right of the people that government cannot abridge. Lastly, I’ll never “register” or “get certified” by the government. I’ll leave the country before that happens.
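For a sense of what that “superhuman” reach looks like in practice, masscan’s basic documented invocation is along these lines (the address range and rate here are illustrative, not a recommendation):

    # Scan a /8 for ports 80 and 8000-8100 at 10,000 packets per second
    masscan -p80,8000-8100 10.0.0.0/8 --rate=10000

At its maximum advertised transmit rate, the tool’s documentation claims it can sweep a single port across the entire IPv4 Internet in a matter of minutes.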
