All posts by Bruce Schneier

Frequent Password Changes Is a Bad Security Idea

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2016/08/frequent_passwo.html

I’ve been saying for years that it’s bad security advice, that it encourages poor passwords. Lorrie Cranor, now the FTC’s chief technologist, agrees:

By studying the data, the researchers identified common techniques account holders used when they were required to change passwords. A password like “tarheels#1”, for instance (excluding the quotation marks) frequently became “tArheels#1” after the first change, “taRheels#1” on the second change and so on. Or it might be changed to “tarheels#11” on the first change and “tarheels#111” on the second. Another common technique was to substitute a digit to make it “tarheels#2”, “tarheels#3”, and so on.

“The UNC researchers said if people have to change their passwords every 90 days, they tend to use a pattern and they do what we call a transformation,” Cranor explained. “They take their old passwords, they change it in some small way, and they come up with a new password.”

The researchers used the transformations they uncovered to develop algorithms that were able to predict changes with great accuracy. Then they simulated real-world cracking to see how well they performed. In online attacks, in which attackers try to make as many guesses as possible before the targeted network locks them out, the algorithm cracked 17 percent of the accounts in fewer than five attempts. In offline attacks performed on the recovered hashes using superfast computers, 41 percent of the changed passwords were cracked within three seconds.
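The transformation-based guessing described above can be sketched as a small candidate generator. The rules below (toggle a letter's case, bump a digit, repeat the final character) are illustrative examples drawn from the quoted patterns, not the UNC researchers' actual algorithm:

```python
def transformation_candidates(old_password):
    """Generate likely "new" passwords from an old one using common
    user transformations. Illustrative rules only, not the UNC algorithm."""
    candidates = set()

    # Toggle the case of each letter in turn: tarheels#1 -> tArheels#1
    for i, ch in enumerate(old_password):
        if ch.isalpha():
            candidates.add(old_password[:i] + ch.swapcase() + old_password[i + 1:])

    # Increment each digit, wrapping 9 to 0: tarheels#1 -> tarheels#2
    for i, ch in enumerate(old_password):
        if ch.isdigit():
            candidates.add(old_password[:i] + str((int(ch) + 1) % 10) + old_password[i + 1:])

    # Duplicate the final character: tarheels#1 -> tarheels#11
    candidates.add(old_password + old_password[-1])

    candidates.discard(old_password)
    return candidates
```

A handful of rules like these yields a tiny, high-probability guess list, which is why a few online attempts sufficed for 17 percent of accounts.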

That data refers to this study.

My advice for choosing a secure password is here.

More on the Vulnerabilities Equities Process

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2016/08/more_on_the_vul.html

The Open Technology Institute of the New America Foundation has released a policy paper on the vulnerabilities equities process: “Bugs in the System: A Primer on the Software Vulnerability Ecosystem and its Policy Implications.”

Their policy recommendations:

  • Minimize participation in the vulnerability black market.
  • Establish strong, clear procedures for disclosure when the government discovers or acquires a vulnerability.
  • Establish rules for government hacking.
  • Support bug bounty programs.
  • Reform the DMCA and CFAA so they encourage responsible vulnerability disclosure.

It’s a good document, and worth reading.

NIST is No Longer Recommending Two-Factor Authentication Using SMS

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2016/08/nist_is_no_long.html

NIST is no longer recommending two-factor authentication systems that use SMS, because of their many insecurities. In the latest draft of its Digital Authentication Guideline, there’s the line:

[Out of band verification] using SMS is deprecated, and will no longer be allowed in future releases of this guidance.

New Presidential Directive on Incident Response

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2016/08/new_presidentia.html

Last week, President Obama issued a policy directive (PPD-41) on cyber-incident response coordination. The FBI is in charge, which is no surprise. Actually, there’s not much surprising in the document. I suppose it’s important to formalize this stuff, but I think it’s what happens now.

News article. Brief analysis. The FBI’s perspective.

Security Vulnerabilities in Wireless Keyboards

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2016/08/security_vulner_7.html

Most of them are unencrypted, which makes them vulnerable to all sorts of attacks:

On Tuesday Bastille’s research team revealed a new set of wireless keyboard attacks they’re calling Keysniffer. The technique, which they’re planning to detail at the Defcon hacker conference in two weeks, allows any hacker with a $12 radio device to intercept the connection between any of eight wireless keyboards and a computer from 250 feet away. What’s more, it gives the hacker the ability to both type keystrokes on the victim machine and silently record the target’s typing.

This is a continuation of their previous work.

More news articles. Here are lists of affected devices.

Hacking the Vote

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2016/08/hacking_the_vot.html

Russia has attacked the U.S. in cyberspace in an attempt to influence our national election, many experts have concluded. We need to take this national security threat seriously and both respond and defend, despite the partisan nature of this particular attack.

There is virtually no debate about that, either from the technical experts who analyzed the attack last month or from the FBI, which is analyzing it now. The hackers have already released DNC emails and voicemails, and promise more data dumps.

While their motivation remains unclear, they could continue to attack our election from now to November — and beyond.

Like everything else in society, elections have gone digital. And just as we’ve seen cyberattacks affecting all aspects of society, we’re going to see them affecting elections as well.

What happened to the DNC is an example of organizational doxing — the publishing of private information — an increasingly popular tactic against both government and private organizations. There are other ways to influence elections: denial-of-service attacks against candidate and party networks and websites, attacks against campaign workers and donors, attacks against voter rolls or election agencies, hacks of the candidate websites and social media accounts, and — the one that scares me the most — manipulation of our highly insecure but increasingly popular electronic voting machines.

On the one hand, this attack is a standard intelligence gathering operation, something the NSA does against political targets all over the world and other countries regularly do to us. The only thing different between this attack and the more common Chinese and Russian attacks against our government networks is that the Russians apparently decided to publish selected pieces of what they stole in an attempt to influence our election, and to use Wikileaks as a way to both hide their origin and give them a veneer of respectability.

All of the attacks listed above can be perpetrated by other countries and by individuals as well. They’ve been done in elections in other countries. They’ve been done in other contexts. The Internet broadly distributes power, and what was once the sole purview of nation states is now in the hands of the masses. We’re living in a world where disgruntled people with the right hacking skills can influence our elections, wherever they are in the world.

The Snowden documents have shown the world how aggressive our own intelligence agency is in cyberspace. But despite all of the policy analysis that has gone into our own national cybersecurity, we seem perpetually taken by surprise when we are attacked. While foreign interference in national elections isn’t new, and something the U.S. has repeatedly done, electronic interference is a different animal.

The Obama Administration is considering how to respond, but politics will get in the way. Were this an attack against a popular Internet company, or a piece of our physical infrastructure, we would all be together in response. But because these attacks affect one political party, the other party benefits. Even worse, the benefited candidate is actively inviting more foreign attacks against his opponent, though he now says he was just being sarcastic. Any response from the Administration or the FBI will be viewed through this partisan lens, especially because the President is a Democrat.

We need to rise above that. These threats are real and they affect us all, regardless of political affiliation. That this particular attack targeted the DNC is no indication of who the next attack might target. We need to make it clear to the world that we will not accept interference in our political process, whether by foreign countries or lone hackers.

However we respond to this act of aggression, we also need to increase the security of our election systems against all threats — and quickly.

We tend to underestimate threats that haven’t happened — we discount them as “theoretical” — and overestimate threats that have happened at least once. The terrorist attacks of 9/11 are a showcase example of that: Administration officials ignored all the warning signs, and then drastically overreacted after the fact. These Russian attacks against our voting system have happened. And they will happen again, unless we take action.

If a foreign country attacked U.S. critical infrastructure, we would respond as a nation against the threat. But if that attack falls along political lines, the response is more complicated. It shouldn’t be. This is a national security threat against our democracy, and needs to be treated as such.

This essay previously appeared on CNN.com.

How Altruism Might Have Evolved

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2016/07/how_altruism_mi.html

I spend a lot of time in my book Liars and Outliers on cooperating versus defecting. Cooperating is good for the group at the expense of the individual. Defecting is good for the individual at the expense of the group. Given that evolution concerns individuals, there has been a lot of controversy over how altruism might have evolved.

Here’s one possible answer: it’s favored by chance:

The key insight is that the total size of population that can be supported depends on the proportion of cooperators: more cooperation means more food for all and a larger population. If, due to chance, there is a random increase in the number of cheats then there is not enough food to go around and total population size will decrease. Conversely, a random decrease in the number of cheats will allow the population to grow to a larger size, disproportionally benefitting the cooperators. In this way, the cooperators are favoured by chance, and are more likely to win in the long term.

Dr George Constable, soon to join the University of Bath from Princeton, uses the analogy of flipping a coin, where heads wins £20 but tails loses £10:

“Although the odds [of] winning or losing are the same, winning is more good than losing is bad. Random fluctuations in cheat numbers are exploited by the cooperators, who benefit more than they lose out.”
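Constable's coin analogy is easy to put in numbers. The stakes come from the quote; the simulation size and seed are my own choices for a quick sanity check:

```python
import random

# Equal odds, asymmetric stakes: heads wins 20, tails loses 10.
p_heads = 0.5
win, lose = 20, -10

# Expected value per flip: 0.5 * 20 + 0.5 * (-10) = 5.0.
expected_value = p_heads * win + (1 - p_heads) * lose

# A seeded simulation of many flips: the average payoff settles
# near the positive expectation, even though the odds are fair.
random.seed(0)
flips = [win if random.random() < p_heads else lose for _ in range(100_000)]
average = sum(flips) / len(flips)
```

The same asymmetry is what the cooperators exploit: random fluctuations help them more than they hurt.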

The Security of Our Election Systems

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2016/07/the_security_of_11.html

Russia was behind the hacks into the Democratic National Committee’s computer network that led to the release of thousands of internal emails just before the party’s convention began, U.S. intelligence agencies have reportedly concluded.

The FBI is investigating. WikiLeaks promises there is more data to come. The political nature of this cyberattack means that Democrats and Republicans are trying to spin this as much as possible. Even so, we have to accept that someone is attacking our nation’s computer systems in an apparent attempt to influence a presidential election. This kind of cyberattack targets the very core of our democratic process. And it points to the possibility of an even worse problem in November — that our election systems and our voting machines could be vulnerable to a similar attack.

If the intelligence community has indeed ascertained that Russia is to blame, our government needs to decide what to do in response. This is difficult because the attacks are politically partisan, but it is essential. If foreign governments learn that they can influence our elections with impunity, this opens the door for future manipulations, both document thefts and dumps like this one that we see and more subtle manipulations that we don’t see.

Retaliation is politically fraught and could have serious consequences, but this is an attack against our democracy. We need to confront Russian President Vladimir Putin in some way — politically, economically or in cyberspace — and make it clear that we will not tolerate this kind of interference by any government. Regardless of your political leanings this time, there’s no guarantee the next country that tries to manipulate our elections will share your preferred candidates.

Even more important, we need to secure our election systems before autumn. If Putin’s government has already used a cyberattack to attempt to help Trump win, there’s no reason to believe he won’t do it again — especially now that Trump is inviting the “help.”

Over the years, more and more states have moved to electronic voting machines and have flirted with Internet voting. These systems are insecure and vulnerable to attack.

But while computer security experts like me have sounded the alarm for many years, states have largely ignored the threat, and the machine manufacturers have thrown up enough obfuscating babble that election officials are largely mollified.

We no longer have time for that. We must ignore the machine manufacturers’ spurious claims of security, create tiger teams to test the machines’ and systems’ resistance to attack, drastically increase their cyber-defenses and take them offline if we can’t guarantee their security online.

Longer term, we need to return to election systems that are secure from manipulation. This means voting machines with voter-verified paper audit trails, and no Internet voting. I know it’s slower and less convenient to stick to the old-fashioned way, but the security risks are simply too great.

There are other ways to attack our election system on the Internet besides hacking voting machines or changing vote tallies: deleting voter records, hijacking candidate or party websites, targeting and intimidating campaign workers or donors. There have already been multiple instances of political doxing — publishing personal information and documents about a person or organization — and we could easily see more of it in this election cycle. We need to take these risks much more seriously than before.

Government interference with foreign elections isn’t new, and in fact, that’s something the United States itself has repeatedly done in recent history. Using cyberattacks to influence elections is newer but has been done before, too — most notably in Latin America. Hacking of voting machines isn’t new, either. But what is new is a foreign government interfering with a U.S. national election on a large scale. Our democracy cannot tolerate it, and we as citizens cannot accept it.

Last April, the Obama administration issued an executive order outlining how we as a nation respond to cyberattacks against our critical infrastructure. While our election technology was not explicitly mentioned, our political process is certainly critical. And while they’re a hodgepodge of separate state-run systems, together their security affects every one of us. After everyone has voted, it is essential that both sides believe the election was fair and the results accurate. Otherwise, the election has no legitimacy.

Election security is now a national security issue; federal officials need to take the lead, and they need to do it quickly.

This essay originally appeared in the Washington Post.

Real-World Security and the Internet of Things

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2016/07/real-world_secu.html

Disaster stories involving the Internet of Things are all the rage. They feature cars (both driven and driverless), the power grid, dams, and tunnel ventilation systems. A particularly vivid and realistic one, near-future fiction published last month in New York Magazine, described a cyberattack on New York that involved hacking of cars, the water system, hospitals, elevators, and the power grid. In these stories, thousands of people die. Chaos ensues. While some of these scenarios overhype the mass destruction, the individual risks are all real. And traditional computer and network security isn’t prepared to deal with them.

Classic information security is a triad: confidentiality, integrity, and availability. You’ll see it called “CIA,” which admittedly is confusing in the context of national security. But basically, the three things I can do with your data are steal it (confidentiality), modify it (integrity), or prevent you from getting it (availability).

So far, Internet threats have largely been about confidentiality. These can be expensive; one survey estimated that data breaches cost an average of $3.8 million each. They can be embarrassing, as in the theft of celebrity photos from Apple’s iCloud in 2014 or the Ashley Madison breach in 2015. They can be damaging, as when the government of North Korea stole tens of thousands of internal documents from Sony or when hackers stole data about 83 million customer accounts from JPMorgan Chase, both in 2014. They can even affect national security, as in the case of the Office of Personnel Management data breach by — presumptively — China in 2015.

On the Internet of Things, integrity and availability threats are much worse than confidentiality threats. It’s one thing if your smart door lock can be eavesdropped upon to know who is home. It’s another thing entirely if it can be hacked to allow a burglar to open the door — or prevent you from opening your door. A hacker who can deny you control of your car, or take over control, is much more dangerous than one who can eavesdrop on your conversations or track your car’s location.

With the advent of the Internet of Things and cyber-physical systems in general, we’ve given the Internet hands and feet: the ability to directly affect the physical world. What used to be attacks against data and information have become attacks against flesh, steel, and concrete.

Today’s threats include hackers crashing airplanes by hacking into computer networks, and remotely disabling cars, either when they’re turned off and parked or while they’re speeding down the highway. We’re worried about manipulated counts from electronic voting machines, frozen water pipes through hacked thermostats, and remote murder through hacked medical devices. The possibilities are pretty literally endless. The Internet of Things will allow for attacks we can’t even imagine.

The increased risks come from three things: software control of systems, interconnections between systems, and automatic or autonomous systems. Let’s look at them in turn:

Software Control. The Internet of Things is a result of everything turning into a computer. This gives us enormous power and flexibility, but it brings insecurities with it as well. As more things come under software control, they become vulnerable to all the attacks we’ve seen against computers. But because many of these things are both inexpensive and long-lasting, many of the patch and update systems that work with computers and smartphones won’t work. Right now, the only way to patch most home routers is to throw them away and buy new ones. And the security that comes from replacing your computer and phone every few years won’t work with your refrigerator and thermostat: on the average, you replace the former every 15 years, and the latter approximately never. A recent Princeton survey found 500,000 insecure devices on the Internet. That number is about to explode.

Interconnections. As these systems become interconnected, vulnerabilities in one lead to attacks against others. Already we’ve seen Gmail accounts compromised through vulnerabilities in Samsung smart refrigerators, hospital IT networks compromised through vulnerabilities in medical devices, and Target Corporation hacked through a vulnerability in its HVAC system. Systems are filled with externalities that affect other systems in unforeseen and potentially harmful ways. What might seem benign to the designers of a particular system becomes harmful when it’s combined with some other system. Vulnerabilities on one system cascade into other systems, and the result is a vulnerability that no one saw coming and no one bears responsibility for fixing. The Internet of Things will make exploitable vulnerabilities much more common. It’s simple mathematics. If 100 systems are all interacting with each other, that’s about 5,000 interactions and 5,000 potential vulnerabilities resulting from those interactions. If 300 systems are all interacting with each other, that’s 45,000 interactions. 1,000 systems: about 500,000 interactions. Most of them will be benign or uninteresting, but some of them will be very damaging.
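The "simple mathematics" here is the pairwise count n(n − 1)/2, i.e. "n choose 2", counting each unordered pair of interacting systems once:

```python
from math import comb

def interactions(n):
    """Potential pairwise interactions among n mutually connected
    systems: n * (n - 1) / 2, i.e. "n choose 2"."""
    return comb(n, 2)

print(interactions(100))   # 4950   (~5,000)
print(interactions(300))   # 44850  (~45,000)
print(interactions(1000))  # 499500 (~500,000)
```

The count grows quadratically, so each system added to the network multiplies the surface of unforeseen interactions rather than merely adding to it.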

Autonomy. Increasingly, our computer systems are autonomous. They buy and sell stocks, turn the furnace on and off, regulate electricity flow through the grid, and — in the case of driverless cars — automatically pilot multi-ton vehicles to their destinations. Autonomy is great for all sorts of reasons, but from a security perspective it means that attacks can take effect immediately, automatically, and ubiquitously. The more we remove humans from the loop, the faster attacks can do their damage and the more we lose our ability to rely on actual smarts to notice something is wrong before it’s too late.

We’re building systems that are increasingly powerful, and increasingly useful. The necessary side effect is that they are increasingly dangerous. A single vulnerability forced Chrysler to recall 1.4 million vehicles in 2015. We’re used to computers being attacked at scale — think of the large-scale virus infections from the last decade — but we’re not prepared for this happening to everything else in our world.

Governments are taking notice. Last year, both Director of National Intelligence James Clapper and NSA Director Mike Rogers testified before Congress, warning of these threats. They both believe we’re vulnerable.

This is how it was phrased in the DNI’s 2015 Worldwide Threat Assessment: “Most of the public discussion regarding cyber threats has focused on the confidentiality and availability of information; cyber espionage undermines confidentiality, whereas denial-of-service operations and data-deletion attacks undermine availability. In the future, however, we might also see more cyber operations that will change or manipulate electronic information in order to compromise its integrity (i.e. accuracy and reliability) instead of deleting it or disrupting access to it. Decision-making by senior government officials (civilian and military), corporate executives, investors, or others will be impaired if they cannot trust the information they are receiving.”

The DNI 2016 threat assessment included something similar: “Future cyber operations will almost certainly include an increased emphasis on changing or manipulating data to compromise its integrity (i.e., accuracy and reliability) to affect decision making, reduce trust in systems, or cause adverse physical effects. Broader adoption of IoT devices and AI — in settings such as public utilities and healthcare — will only exacerbate these potential effects.”

Security engineers are working on technologies that can mitigate much of this risk, but many solutions won’t be deployed without government involvement. This is not something that the market can solve. Like data privacy, the risks and solutions are too technical for most people and organizations to understand; companies are motivated to hide the insecurity of their own systems from their customers, their users, and the public; the interconnections can make it impossible to connect data breaches with resultant harms; and the interests of the companies often don’t match the interests of the people.

Governments need to play a larger role: setting standards, policing compliance, and implementing solutions across companies and networks. And while the White House Cybersecurity National Action Plan says some of the right things, it doesn’t nearly go far enough, because so many of us are phobic of any government-led solution to anything.

The next president will probably be forced to deal with a large-scale Internet disaster that kills multiple people. I hope he or she responds with both the recognition of what government can do that industry can’t, and the political will to make it happen.

This essay previously appeared on Vice Motherboard.

BoingBoing post.

Detecting When a Smartphone Has Been Compromised

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2016/07/detecting_when_.html

Andrew “bunnie” Huang and Edward Snowden have designed a smartphone case that detects unauthorized transmissions by the phone. Paper. Three news articles.

Looks like a clever design. Of course, it has to be outside the device; otherwise, it could be compromised along with the device. Note that this is still in the research design stage; there are no public prototypes.

The NSA and "Intelligence Legalism"

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2016/07/the_nsa_and_int.html

Interesting law journal paper: “Intelligence Legalism and the National Security Agency’s Civil Liberties Gap,” by Margo Schlanger:

Abstract: This paper examines the National Security Agency, its compliance with legal constraints and its respect for civil liberties. But even if perfect compliance could be achieved, it is too paltry a goal. A good oversight system needs its institutions not just to support and enforce compliance but also to design good rules. Yet as will become evident, the offices that make up the NSA’s compliance system are nearly entirely compliance offices, not policy offices; they work to improve compliance with existing rules, but not to consider the pros and cons of more individually-protective rules and try to increase privacy or civil liberties where the cost of doing so is acceptable. The NSA and the administration in which it sits have thought of civil liberties and privacy only in compliance terms. That is, they have asked only “Can we (legally) do X?” and not “Should we do X?” This preference for the can question over the should question is part and parcel, I argue, of a phenomenon I label “intelligence legalism,” whose three crucial and simultaneous features are imposition of substantive rules given the status of law rather than policy; some limited court enforcement of those rules; and empowerment of lawyers. Intelligence legalism has been a useful corrective to the lawlessness that characterized surveillance prior to intelligence reform, in the late 1970s. But I argue that it gives systematically insufficient weight to individual liberty, and that its relentless focus on rights, and compliance, and law has obscured the absence of what should be an additional focus on interests, or balancing, or policy. More is needed; additional attention should be directed both within the NSA and by its overseers to surveillance policy, weighing the security gains from surveillance against the privacy and civil liberties risks and costs. That attention will not be a panacea, but it can play a useful role in filling the civil liberties gap intelligence legalism creates.

This is similar to what I wrote in Data and Goliath:

There are two levels of oversight. The first is strategic: are the rules we’re imposing the correct ones? For example, the NSA can implement its own procedures to ensure that it’s following the rules, but it should not get to decide what rules it should follow….

The other kind of oversight is tactical: are the rules being followed? Mechanisms for this kind of oversight include procedures, audits, approvals, troubleshooting protocols, and so on. The NSA, for example, trains its analysts in the regulations governing their work, audits systems to ensure that those regulations are actually followed, and has instituted reporting and disciplinary procedures for occasions when they’re not.

It’s not enough that the NSA makes sure there is a plausible legal interpretation that authorizes what they do. We need to make sure that their understanding of the law is shared with the outside world, and that what they’re doing is a good idea.

EDITED TO ADD: The paper is from 2014. Also worth reading are these two related essays.

Russian Hack of the DNC

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2016/07/russian_hack_of.html

Amazingly enough, the preponderance of the evidence points to Russia as the source of the DNC leak. I was going to summarize the evidence, but Thomas Rid did a great job here. Much of that is based on June’s forensic analysis by Crowdstrike, which I wrote about here. More analysis here.

Jack Goldsmith discusses the political implications.

The FBI is investigating. It’s not unreasonable to expect the NSA has some additional intelligence on this attack, similarly to what they had on the North Korea attack on Sony.

EDITED TO ADD (7/27): More on the FBI’s investigation. Another summary of the evidence pointing to Russia.

Tracking the Owner of Kickass Torrents

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2016/07/tracking_the_ow.html

Here’s the story of how it was done. First, a fake ad on torrent listings linked the site to a Latvian bank account, an e-mail address, and a Facebook page.

Using basic website-tracking services, Der-Yeghiayan was able to uncover (via a reverse DNS search) the hosts of seven apparent KAT website domains: kickasstorrents.com, kat.cr, kickass.to, kat.ph, kastatic.com, thekat.tv and kickass.cr. This dug up two Chicago IP addresses, which were used as KAT name servers for more than four years. Agents were then able to legally gain a copy of the server’s access logs (explaining why it was federal authorities in Chicago that eventually charged Vaulin with his alleged crimes).

Using similar tools, Homeland Security investigators also performed something called a WHOIS lookup on a domain that redirected people to the main KAT site. A WHOIS search can provide the name, address, email and phone number of a website registrant. In the case of kickasstorrents.biz, that was Artem Vaulin from Kharkiv, Ukraine.

Der-Yeghiayan was able to link the email address found in the WHOIS lookup to an Apple email address that Vaulin purportedly used to operate KAT. It’s this Apple account that appears to tie all of the pieces of Vaulin’s alleged involvement together.

On July 31st 2015, records provided by Apple show that the me.com account was used to purchase something on iTunes. The logs show that the same IP address was used on the same day to access the KAT Facebook page. After KAT began accepting Bitcoin donations in 2012, $72,767 was moved into a Coinbase account in Vaulin’s name. That Bitcoin wallet was registered with the same me.com email address.
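The investigative pattern running through this story is joining disparate records on shared identifiers: an email address here, an IP address there. A minimal sketch of that correlation step, with entirely hypothetical records and field names:

```python
from collections import defaultdict

def correlate(records, keys=("email", "ip")):
    """Group record sources that share any identifier value in the
    given fields. A naive sketch of identifier-based correlation;
    the records and field names below are hypothetical."""
    by_value = defaultdict(list)
    for record in records:
        for key in keys:
            value = record.get(key)
            if value:
                by_value[(key, value)].append(record["source"])
    # Keep only identifiers that appear in more than one source.
    return {k: sources for k, sources in by_value.items() if len(sources) > 1}

records = [
    {"source": "whois:kickasstorrents.biz", "email": "owner@example.com"},
    {"source": "itunes-purchase", "email": "owner@example.com", "ip": "203.0.113.7"},
    {"source": "facebook-page-access", "ip": "203.0.113.7"},
]
links = correlate(records)
```

Each shared identifier links two otherwise unrelated data sources, which is how one Apple account can tie a WHOIS record, a purchase log, and a Facebook page into a single picture.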

Another article.

The Economist on Hacking the Financial System

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2016/07/the_economist_o_5.html

The Economist has an article on the potential hacking of the global financial system, either for profit or to cause mayhem. It’s reasonably balanced.

So how might such an attack unfold? Step one, several months before mayhem is unleashed, is to get into the system. Financial institutions have endless virtual doors that could be used to trespass, but one of the easiest to force is still the front door. By getting someone who works at an FMI or a partner company to click on a corrupt link through a “phishing” attack (an attempt to get hold of sensitive information by masquerading as someone trustworthy), or stealing their credentials when they use public Wi-Fi, hackers can impersonate them and install malware to watch over employees’ shoulders and see how the institution’s system functions. This happened in the Carbanak case: hackers installed a “RAT” (remote-access tool) to make videos of employees’ computers.

Step two is to study the system and set up booby traps. Once in, the gang quietly observes the quirks and defences of the system in order to plan the perfect attack from within; hackers have been known to sit like this for years. Provided they are not detected, they pick their places to plant spyware or malware that can be activated at the click of a button.

Step three is the launch. One day, preferably when there is already distracting market turmoil, they unleash a series of attacks on, say, multiple clearing houses.

The attackers might start with small changes, tweaking numbers in transactions as they are processed (Bank A gets credited $1,000, for example, but on the other side of the transaction Bank B is debited $0, or $900 or $100,000). As lots of erroneous payments travel the globe, and as it becomes clear that these are not just “glitches”, eventually the entire system would be deemed unreliable. Unsure how much money they have, banks could not settle their books when markets close. Settlement is a legally defined, binding moment. Regulators and central banks would become agitated if they could not see how solvent the nation’s banks were at the end of the financial day.
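The integrity attack described, where credits and debits stop matching across institutions, is exactly what double-entry reconciliation catches. A minimal sketch, with hypothetical transaction records:

```python
def reconcile(credits, debits):
    """Compare both legs of each transaction by id and return the
    mismatches as {txn_id: (credited, debited)}. The record format
    ({txn_id: amount}) is hypothetical."""
    mismatches = {}
    for txn_id, credited in credits.items():
        debited = debits.get(txn_id)
        if debited != credited:
            mismatches[txn_id] = (credited, debited)
    return mismatches

# Bank A's view of its credits vs. Bank B's view of the matching debits.
credits = {"t1": 1000, "t2": 250, "t3": 75}
debits = {"t1": 900, "t2": 250, "t3": 75}  # t1 has been tampered with
bad = reconcile(credits, debits)
```

Reconciliation finds the tampered entry, but it can't say which leg is correct; once enough entries disagree, the trust in the whole ledger, not just the bad rows, is what collapses.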

In many aspects of our society, as attackers become more powerful the potential for catastrophe increases. We need to ensure that the likelihood of catastrophe remains low.