Research into the Root Causes of Terrorism

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2017/02/research_into_t_1.html

Interesting article in Science discussing field research on how people are radicalized to become terrorists.

The potential for research that can overcome existing constraints can be seen in recent advances in understanding violent extremism and, partly, in interdiction and prevention. Most notable is waning interest in simplistic root-cause explanations of why individuals become violent extremists (e.g., poverty, lack of education, marginalization, foreign occupation, and religious fervor), which cannot accommodate the richness and diversity of situations that breed terrorism or support meaningful interventions. A more tractable line of inquiry is how people actually become involved in terror networks (e.g., how they radicalize and are recruited, move to action, or come to abandon cause and comrades).

Reports from The Soufan Group, the International Center for the Study of Radicalisation (King’s College London), and the Combating Terrorism Center (U.S. Military Academy) indicate that approximately three-fourths of those who join the Islamic State or al-Qaeda do so in groups. These groups often involve preexisting social networks and typically cluster in particular towns and neighborhoods. This suggests that much recruitment does not need direct personal appeals by organization agents or individual exposure to social media (which would entail a more dispersed recruitment pattern). Fieldwork is needed to identify the specific conditions under which these processes play out. Natural growth models of terrorist networks might then be based on an epidemiology of radical ideas in host social networks, rather than built in the abstract and then fitted to data, and would allow for a public-health, rather than strictly criminal, approach to violent extremism.

Such considerations have implications for countering terrorist recruitment. The present USG focus is on “counternarratives,” intended as alternatives to the “ideologies” held to motivate terrorists. This strategy treats ideas as disembodied from the human conditions in which they are embedded and given life as animators of social groups. In their stead, research and policy might better focus on personalized “counterengagement,” addressing and harnessing the fellowship, passion, and purpose of people within specific social contexts, as ISIS and al-Qaeda often do. This focus stands in sharp contrast to reliance on negative mass messaging and sting operations to dissuade young people in doubt through entrapment and punishment (the most common practice used in U.S. law enforcement) rather than through positive persuasion and channeling into productive life paths. At the very least, we need field research in communities that is capable of capturing evidence to reveal which strategies are working, failing, or backfiring.

Security and the Internet of Things

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2017/02/security_and_th.html

Last year, on October 21, your digital video recorder — or at least a DVR like yours — knocked Twitter off the internet. Someone used your DVR, along with millions of insecure webcams, routers, and other connected devices, to launch an attack that started a chain reaction, resulting in Twitter, Reddit, Netflix, and many sites going off the internet. You probably didn’t realize that your DVR had that kind of power. But it does.

All computers are hackable. This has as much to do with the computer market as it does with the technologies. We prefer our software full of features and inexpensive, at the expense of security and reliability. That your computer can affect the security of Twitter is a market failure. The industry is filled with market failures that, until now, have been largely ignorable. As computers continue to permeate our homes, cars, and businesses, these market failures will no longer be tolerable. Our only solution will be regulation, and that regulation will be foisted on us by a government desperate to “do something” in the face of disaster.

In this article I want to outline the problems, both technical and political, and point to some regulatory solutions. Regulation might be a dirty word in today’s political climate, but security is the exception to our small-government bias. And as the threats posed by computers become greater and more catastrophic, regulation will be inevitable. So now’s the time to start thinking about it.

We also need to reverse the trend to connect everything to the internet. And when the risks include harm and even death, we need to think twice about what we connect and what we deliberately leave uncomputerized.

If we get this wrong, the computer industry will look like the pharmaceutical industry, or the aircraft industry. But if we get this right, we can maintain the innovative environment of the internet that has given us so much.

**********

We no longer have things with computers embedded in them. We have computers with things attached to them.

Your modern refrigerator is a computer that keeps things cold. Your oven, similarly, is a computer that makes things hot. An ATM is a computer with money inside. Your car is no longer a mechanical device with some computers inside; it’s a computer with four wheels and an engine. Actually, it’s a distributed system of over 100 computers with four wheels and an engine. And, of course, your phones became full-power general-purpose computers in 2007, when the iPhone was introduced.

We wear computers: fitness trackers and computer-enabled medical devices — and, of course, we carry our smartphones everywhere. Our homes have smart thermostats, smart appliances, smart door locks, even smart light bulbs. At work, many of those same smart devices are networked together with CCTV cameras, sensors that detect customer movements, and everything else. Cities are starting to embed smart sensors in roads, streetlights, and sidewalk squares, as well as smart energy grids and smart transportation networks. A nuclear power plant is really just a computer that produces electricity, and — like everything else we’ve just listed — it’s on the internet.

The internet is no longer a web that we connect to. Instead, it’s a computerized, networked, and interconnected world that we live in. This is the future, and what we’re calling the Internet of Things.

Broadly speaking, the Internet of Things has three parts. There are the sensors that collect data about us and our environment: smart thermostats, street and highway sensors, and those ubiquitous smartphones with their motion sensors and GPS location receivers. Then there are the “smarts” that figure out what the data means and what to do about it. This includes all the computer processors on these devices and — increasingly — in the cloud, as well as the memory that stores all of this information. And finally, there are the actuators that affect our environment. The point of a smart thermostat isn’t to record the temperature; it’s to control the furnace and the air conditioner. Driverless cars collect data about the road and the environment to steer themselves safely to their destinations.
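To make the three-part architecture concrete, here is a toy sketch (in Python) of the sense-think-act loop behind something like a smart thermostat. Every name, reading, and threshold is invented for illustration; this is not any vendor’s actual API.

```python
# Toy sketch of the sensor -> "smarts" -> actuator pipeline described above.
# All names, readings, and thresholds are invented for illustration.

def read_temperature_sensor() -> float:
    """Sensor: collect data about the environment (stubbed out here)."""
    return 16.5  # hypothetical reading, in degrees Celsius

def decide(temperature: float, target: float = 20.0) -> str:
    """Smarts: figure out what the data means and what to do about it."""
    if temperature < target - 0.5:
        return "heat"
    if temperature > target + 0.5:
        return "cool"
    return "idle"

def drive_actuator(command: str) -> None:
    """Actuator: affect the environment (here, the furnace or AC relay)."""
    print(f"actuator command: {command}")

if __name__ == "__main__":
    drive_actuator(decide(read_temperature_sensor()))
```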

You can think of the sensors as the eyes and ears of the internet. You can think of the actuators as the hands and feet of the internet. And you can think of the stuff in the middle as the brain. We are building an internet that senses, thinks, and acts.

This is the classic definition of a robot. We’re building a world-size robot, and we don’t even realize it.

To be sure, it’s not a robot in the classical sense. We think of robots as discrete autonomous entities, with sensors, brain, and actuators all together in a metal shell. The world-size robot is distributed. It doesn’t have a singular body, and parts of it are controlled in different ways by different people. It doesn’t have a central brain, and it has nothing even remotely resembling a consciousness. It doesn’t have a single goal or focus. It’s not even something we deliberately designed. It’s something we have inadvertently built out of the everyday objects we live with and take for granted. It is the extension of our computers and networks into the real world.

This world-size robot is actually more than the Internet of Things. It’s a combination of several decades-old computing trends: mobile computing, cloud computing, always-on computing, huge databases of personal information, the Internet of Things — or, more precisely, cyber-physical systems — autonomy, and artificial intelligence. And while it’s still not very smart, it’ll get smarter. It’ll get more powerful and more capable through all the interconnections we’re building.

It’ll also get much more dangerous.

**********

Computer security has been around for almost as long as computers have been. And while it’s true that security wasn’t part of the design of the original internet, it’s something we have been trying to achieve since its beginning.

I have been working in computer security for over 30 years: first in cryptography, then more generally in computer and network security, and now in general security technology. I have watched computers become ubiquitous, and have seen firsthand the problems — and solutions — of securing these complex machines and systems. I’m telling you all this because what used to be a specialized area of expertise now affects everything. Computer security is now everything security. There’s one critical difference, though: The threats have become greater.

Traditionally, computer security is divided into three categories: confidentiality, integrity, and availability. For the most part, our security concerns have centered on confidentiality. We’re concerned about our data and who has access to it — the world of privacy and surveillance, of data theft and misuse.

But threats come in many forms. Availability threats: computer viruses that delete our data, or ransomware that encrypts our data and demands payment for the unlock key. Integrity threats: hackers who can manipulate data entries can do things ranging from changing grades in a class to changing the amount of money in bank accounts. Some of these threats are pretty bad. Hospitals have paid tens of thousands of dollars to criminals whose ransomware encrypted critical medical files. JPMorgan Chase spends half a billion dollars a year on cybersecurity.

Today, the integrity and availability threats are much worse than the confidentiality threats. Once computers start affecting the world in a direct and physical manner, there are real risks to life and property. There is a fundamental difference between crashing your computer and losing your spreadsheet data, and crashing your pacemaker and losing your life. This isn’t hyperbole; recently researchers found serious security vulnerabilities in St. Jude Medical’s implantable heart devices. Give the internet hands and feet, and it will have the ability to punch and kick.

Take a concrete example: modern cars, those computers on wheels. The steering wheel no longer turns the axles, nor does the accelerator pedal change the speed. Every move you make in a car is processed by a computer, which does the actual controlling. A central computer controls the dashboard. There’s another in the radio. The engine has 20 or so computers. These are all networked, and increasingly autonomous.

Now, let’s start listing the security threats. We don’t want car navigation systems to be used for mass surveillance, or the microphone for mass eavesdropping. We might want them to be used to determine a car’s location in the event of a 911 call, and possibly to collect information about highway congestion. We don’t want people to hack their own cars to bypass emissions-control limitations. We don’t want manufacturers or dealers to be able to do that, either, as Volkswagen did for years. We can imagine wanting to give police the ability to remotely and safely disable a moving car; that would make high-speed chases a thing of the past. But we definitely don’t want hackers to be able to do that. We definitely don’t want them disabling the brakes in every car without warning, at speed. As we make the transition from driver-controlled cars to cars with various driver-assist capabilities to fully driverless cars, we don’t want any of those critical components subverted. We don’t want anyone to be able to accidentally crash your car, let alone do it on purpose. And equally, we don’t want them to be able to manipulate the navigation software to change your route, or the door-lock controls to prevent you from opening the door. I could go on.

That’s a lot of different security requirements, and the effects of getting them wrong range from illegal surveillance to extortion by ransomware to mass death.

**********

Our computers and smartphones are as secure as they are because companies like Microsoft, Apple, and Google spend a lot of time testing their code before it’s released, and quickly patch vulnerabilities when they’re discovered. Those companies can support large, dedicated teams because they make a huge amount of money, either directly or indirectly, from their software — and, in part, compete on its security. Unfortunately, this isn’t true of embedded systems like digital video recorders or home routers. Those systems are sold at a much lower margin, and are often built by offshore third parties. The companies involved simply don’t have the expertise to make them secure.

At a recent hacker conference, a security researcher analyzed 30 home routers and was able to break into half of them, including some of the most popular and common brands. The denial-of-service attacks that forced popular websites like Reddit and Twitter off the internet last October were enabled by vulnerabilities in devices like webcams and digital video recorders. In August, two security researchers demonstrated a ransomware attack on a smart thermostat.

Even worse, most of these devices don’t have any way to be patched. Companies like Microsoft and Apple continuously deliver security patches to your computers. Some home routers are technically patchable, but in a complicated way that only an expert would attempt. And the only way for you to update the firmware in your hackable DVR is to throw it away and buy a new one.
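For contrast, here is a rough sketch of the kind of signed-update check that patchable platforms perform and that most of these embedded devices lack. It assumes the third-party Python `cryptography` package and a vendor key baked into the device at manufacture; the key bytes and file names below are placeholders.

```python
# Minimal signed-firmware check: the mechanism most cheap embedded
# devices are missing. Assumes the "cryptography" package; the key
# bytes and file names are placeholders, not a real vendor's.
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PublicKey

VENDOR_PUBKEY = bytes.fromhex("aa" * 32)  # in practice, burned into ROM

def verify_and_apply(image_path: str, sig_path: str) -> bool:
    with open(image_path, "rb") as f:
        image = f.read()
    with open(sig_path, "rb") as f:
        signature = f.read()
    try:
        Ed25519PublicKey.from_public_bytes(VENDOR_PUBKEY).verify(signature, image)
    except InvalidSignature:
        return False  # refuse to flash a tampered or unsigned image
    # flash_firmware(image)  # device-specific, deliberately omitted
    return True
```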

The market can’t fix this because neither the buyer nor the seller cares. The owners of the webcams and DVRs used in the denial-of-service attacks don’t care. Their devices were cheap to buy, they still work, and they don’t know any of the victims of the attacks. The sellers of those devices don’t care: They’re now selling newer and better models, and the original buyers only cared about price and features. There is no market solution, because the insecurity is what economists call an externality: It’s an effect of the purchasing decision that affects other people. Think of it kind of like invisible pollution.

**********

Security is an arms race between attacker and defender. Technology perturbs that arms race by changing the balance between attacker and defender. Understanding how this arms race has unfolded on the internet is essential to understanding why the world-size robot we’re building is so insecure, and how we might secure it. To that end, I have five truisms, born from what we’ve already learned about computer and internet security. They will soon affect the security arms race everywhere.

Truism No. 1: On the internet, attack is easier than defense.

There are many reasons for this, but the most important is the complexity of these systems. More complexity means more people involved, more parts, more interactions, more mistakes in the design and development process, more of everything where hidden insecurities can be found. Computer-security experts like to speak about the attack surface of a system: all the possible points an attacker might target and that must be secured. A complex system means a large attack surface. The defender has to secure the entire attack surface. The attacker just has to find one vulnerability — one unsecured avenue for attack — and gets to choose how and when to attack. It’s simply not a fair battle.
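A back-of-the-envelope calculation shows how quickly complexity tips the balance. Suppose, purely for illustration, that each component of a system independently has a 1 percent chance of containing an exploitable flaw:

```python
# Illustrative only: assume each component independently has probability
# p of containing an exploitable flaw. The defender must get every
# component right; the attacker needs only one to be wrong.
p = 0.01  # assumed per-component flaw probability

for n in (10, 100, 1000):
    at_least_one = 1 - (1 - p) ** n
    print(f"{n:4d} components -> P(at least one flaw) = {at_least_one:.2f}")
# 10 components -> 0.10, 100 -> 0.63, 1000 -> 1.00 (to two decimals)
```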

There are other, more general, reasons why attack is easier than defense. Attackers have a natural agility that defenders often lack. They don’t have to worry about laws, and often not about morals or ethics. They don’t have a bureaucracy to contend with, and can more quickly make use of technical innovations. Attackers also have a first-mover advantage. As a society, we’re generally terrible at proactive security; we rarely take preventive security measures until an attack actually happens. So more advantages go to the attacker.

Truism No. 2: Most software is poorly written and insecure.

As if complexity weren’t enough, we compound the problem by producing lousy software. Well-written software, like the kind found in airplane avionics, is both expensive and time-consuming to produce. We don’t want that. For the most part, poorly written software has been good enough. We’d all rather live with buggy software than pay the prices good software would require. We don’t mind if our games crash regularly, or our business applications act weird once in a while. Because software has been largely benign, it hasn’t mattered. This has permeated the industry at all levels. At universities, we don’t teach how to code well. Companies don’t reward quality code in the same way they reward fast and cheap. And we consumers don’t demand it.

But poorly written software is riddled with bugs, sometimes as many as one per 1,000 lines of code. Some of them are inherent in the complexity of the software, but most are programming mistakes. Not all bugs are vulnerabilities, but some are.
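The same figure turns into sobering arithmetic at modern code sizes. The exploitable fraction below is an assumption chosen only to illustrate the scale, not a measured value:

```python
# Rough arithmetic from the bug-density figure above. The exploitable
# fraction is an assumed value for illustration, not a measured one.
lines_of_code = 50_000_000     # roughly the size of a modern OS codebase
bugs = lines_of_code / 1_000   # "as many as one per 1,000 lines of code"
exploitable_fraction = 0.05    # hypothetical: 1 bug in 20 is a vulnerability
print(f"{bugs:,.0f} bugs -> ~{bugs * exploitable_fraction:,.0f} potential vulnerabilities")
# -> 50,000 bugs -> ~2,500 potential vulnerabilities
```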

Truism No. 3: Connecting everything to each other via the internet will expose new vulnerabilities.

The more we network things together, the more vulnerabilities on one thing will affect other things. On October 21, vulnerabilities in a wide variety of embedded devices were all harnessed together to create what hackers call a botnet. This botnet was used to launch a distributed denial-of-service attack against a company called Dyn. Dyn provided a critical internet function for many major internet sites. So when Dyn went down, so did all those popular websites.

These chains of vulnerabilities are everywhere. In 2012, journalist Mat Honan suffered a massive personal hack because of one of them. A vulnerability in his Amazon account allowed hackers to get into his Apple account, which allowed them to get into his Gmail account. And in 2013, the Target Corporation was hacked by someone stealing credentials from its HVAC contractor.

Vulnerabilities like these are particularly hard to fix, because no one system might actually be at fault. It might be the insecure interaction of two individually secure systems.

Truism No. 4: Everybody has to stop the best attackers in the world.

One of the most powerful properties of the internet is that it allows things to scale. This is true for our ability to access data or control systems or do any of the cool things we use the internet for, but it’s also true for attacks. In general, fewer attackers can do more damage because of better technology. It’s not just that these modern attackers are more efficient, it’s that the internet allows attacks to scale to a degree impossible without computers and networks.

This is fundamentally different from what we’re used to. When securing my home against burglars, I am only worried about the burglars who live close enough to my home to consider robbing me. The internet is different. When I think about the security of my network, I have to be concerned about the best attacker possible, because he’s the one who’s going to create the attack tool that everyone else will use. The attacker that discovered the vulnerability used to attack Dyn released the code to the world, and within a week there were a dozen attack tools using it.

Truism No. 5: Laws inhibit security research.

The Digital Millennium Copyright Act is a terrible law that fails at its purpose of preventing widespread piracy of movies and music. To make matters worse, it contains a provision that has critical side effects. According to the law, it is a crime to bypass security mechanisms that protect copyrighted work, even if that bypassing would otherwise be legal. Since all software can be copyrighted, it is arguably illegal to do security research on these devices and to publish the result.

Although the exact contours of the law are arguable, many companies are using this provision of the DMCA to threaten researchers who expose vulnerabilities in their embedded systems. This instills fear in researchers, and has a chilling effect on research, which means two things: (1) Vendors of these devices are more likely to leave them insecure, because no one will notice and they won’t be penalized in the market, and (2) security engineers don’t learn how to do security better.

Unfortunately, companies generally like the DMCA. The provisions against reverse-engineering spare them the embarrassment of having their shoddy security exposed. The law also allows them to build proprietary systems that lock out competition. (This is an important one. Right now, your toaster cannot force you to only buy a particular brand of bread. But because of this law and an embedded computer, your Keurig coffee maker can force you to buy a particular brand of coffee.)

**********

In general, there are two basic paradigms of security. We can either try to secure something well the first time, or we can make our security agile. The first paradigm comes from the world of dangerous things: from planes, medical devices, buildings. It’s the paradigm that gives us secure design and secure engineering, security testing and certifications, professional licensing, detailed preplanning and complex government approvals, and long times-to-market. It’s security for a world where getting it right is paramount because getting it wrong means people dying.

The second paradigm comes from the fast-moving and heretofore largely benign world of software. In this paradigm, we have rapid prototyping, on-the-fly updates, and continual improvement. In this paradigm, new vulnerabilities are discovered all the time and security disasters regularly happen. Here, we stress survivability, recoverability, mitigation, adaptability, and muddling through. This is security for a world where getting it wrong is okay, as long as you can respond fast enough.

These two worlds are colliding. They’re colliding in our cars — literally — in our medical devices, our building control systems, our traffic control systems, and our voting machines. And although these paradigms are wildly different and largely incompatible, we need to figure out how to make them work together.

So far, we haven’t done very well. We still largely rely on the first paradigm for the dangerous computers in cars, airplanes, and medical devices. As a result, there are medical systems that can’t have security patches installed because that would invalidate their government approval. In 2015, Chrysler recalled 1.4 million cars to fix a software vulnerability. In September 2016, Tesla remotely sent a security patch to all of its Model S cars overnight. Tesla sure sounds like it’s doing things right, but what vulnerabilities does this remote patch feature open up?

**********

Until now we’ve largely left computer security to the market. Because the computer and network products we buy and use are so lousy, an enormous after-market industry in computer security has emerged. Governments, companies, and people buy the security they think they need to secure themselves. We’ve muddled through well enough, but the market failures inherent in trying to secure this world-size robot will soon become too big to ignore.

Markets alone can’t solve our security problems. Markets are motivated by profit and short-term goals at the expense of society. They can’t solve collective-action problems. They won’t be able to deal with economic externalities, like the vulnerabilities in DVRs that resulted in Twitter going offline. And we need a counterbalancing force to corporate power.

This all points to policy. While the details of any computer-security system are technical, getting the technologies broadly deployed is a problem that spans law, economics, psychology, and sociology. And getting the policy right is just as important as getting the technology right because, for internet security to work, law and technology have to work together. This is probably the most important lesson of Edward Snowden’s NSA disclosures. We already knew that technology can subvert law. Snowden demonstrated that law can also subvert technology. Both fail unless each works. It’s not enough to just let technology do its thing.

Any policy changes to secure this world-size robot will mean significant government regulation. I know it’s a sullied concept in today’s world, but I don’t see any other possible solution. It’s going to be especially difficult on the internet, where its permissionless nature is one of the best things about it and the underpinning of its most world-changing innovations. But I don’t see how that can continue when the internet can affect the world in a direct and physical manner.

**********

I have a proposal: a new government regulatory agency. Before dismissing it out of hand, please hear me out.

We have a practical problem when it comes to internet regulation. There’s no government structure to tackle this at a systemic level. Instead, there’s a fundamental mismatch between the way government works and the way this technology works that makes dealing with this problem impossible at the moment.

Government operates in silos. In the U.S., the FAA regulates aircraft. The NHTSA regulates cars. The FDA regulates medical devices. The FCC regulates communications devices. The FTC protects consumers in the face of “unfair” or “deceptive” trade practices. Even worse, who regulates data can depend on how it is used. If data is used to influence a voter, it’s the Federal Election Commission’s jurisdiction. If that same data is used to influence a consumer, it’s the FTC’s. Use those same technologies in a school, and the Department of Education is now in charge. Robotics will have its own set of problems, and no one is sure how that is going to be regulated. Each agency has a different approach and different rules. They have no expertise in these new issues, and they are not quick to expand their authority for all sorts of reasons.

Compare that with the internet. The internet is a freewheeling system of integrated objects and networks. It grows horizontally, demolishing old technological barriers so that people and systems that never previously communicated now can. Already, apps on a smartphone can log health information, control your energy use, and communicate with your car. That’s a set of functions that crosses jurisdictions of at least four different government agencies, and it’s only going to get worse.

Our world-size robot needs to be viewed as a single entity with millions of components interacting with each other. Any solutions here need to be holistic. They need to work everywhere, for everything. Whether we’re talking about cars, drones, or phones, they’re all computers.

This has lots of precedent. Many new technologies have led to the formation of new government regulatory agencies. Trains did, cars did, airplanes did. Radio led to the formation of the Federal Radio Commission, which became the FCC. Nuclear power led to the formation of the Atomic Energy Commission, which eventually became the Department of Energy. The reasons were the same in every case. New technologies need new expertise because they bring with them new challenges. Governments need a single agency to house that new expertise, because its applications cut across several preexisting agencies. It’s less that the new agency needs to regulate — although that’s often a big part of it — and more that governments recognize the importance of the new technologies.

The internet has famously eschewed formal regulation, instead adopting a multi-stakeholder model of academics, businesses, governments, and other interested parties. My hope is that we can keep the best of this approach in any regulatory agency, looking to models like the new U.S. Digital Service or the 18F office inside the General Services Administration. Both of those organizations are dedicated to providing digital government services, both have collected significant expertise by bringing people in from outside government, and both have learned how to work closely with existing agencies. Any internet regulatory agency will similarly need to engage in a high level of collaborative regulation — both a challenge and an opportunity.

I don’t think any of us can predict the totality of the regulations we need to ensure the safety of this world, but here are a few. We need government to ensure companies follow good security practices: testing, patching, secure defaults — and we need to be able to hold companies liable when they fail to do these things. We need government to mandate strong personal data protections, and limitations on data collection and use. We need to ensure that responsible security research is legal and well-funded. We need to enforce transparency in design, some sort of code escrow in case a company goes out of business, and interoperability between devices of different manufacturers, to counterbalance the monopolistic effects of interconnected technologies. Individuals need the right to take their data with them. And internet-enabled devices should retain some minimal functionality if disconnected from the internet.

I’m not the only one talking about this. I’ve seen proposals for a National Institutes of Health analog for cybersecurity. University of Washington law professor Ryan Calo has proposed a Federal Robotics Commission. I think it needs to be broader: maybe a Department of Technology Policy.

Of course there will be problems. There’s a lack of expertise in these issues inside government. There’s a lack of willingness in government to do the hard regulatory work. Industry is worried about any new bureaucracy: both that it will stifle innovation by regulating too much and that it will be captured by industry and regulate too little. A domestic regulatory agency will have to deal with the fundamentally international nature of the problem.

But government is the entity we use to solve problems like this. Government has the scope, scale, and balance of interests to address the problems. It’s the institution we’ve built to adjudicate competing social interests and internalize market externalities. Left to its own devices, the market simply can’t. That we’re currently in the middle of an era of low government trust, where many of us can’t imagine government doing anything positive in an area like this, is to our detriment.

Here’s the thing: Governments will get involved, regardless. The risks are too great, and the stakes are too high. Government already regulates dangerous physical systems like cars and medical devices. And nothing motivates the U.S. government like fear. Remember 2001? A nominally small-government Republican president created the Office of Homeland Security 11 days after the terrorist attacks: a rushed and ill-thought-out decision that we’ve been trying to fix for over a decade. A fatal disaster will similarly spur our government into action, and it’s unlikely to be well-considered and thoughtful action. Our choice isn’t between government involvement and no government involvement. Our choice is between smarter government involvement and stupider government involvement. We have to start thinking about this now. Regulations are necessary, important, and complex; and they’re coming. We can’t afford to ignore these issues until it’s too late.

We also need to start disconnecting systems. If we cannot secure complex systems to the level required by their real-world capabilities, then we must not build a world where everything is computerized and interconnected.

There are other models. We can enable local communications only. We can set limits on collected and stored data. We can deliberately design systems that don’t interoperate with each other. We can deliberately fetter devices, reversing the current trend of turning everything into a general-purpose computer. And, most important, we can move toward less centralization and more distributed systems, which is how the internet was first envisioned.
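The first of these models, local communications only, is simple enough to sketch in code. Here is a hedged example, using only the Python standard library, of a device service that answers hosts on private (RFC 1918) addresses and drops everything else; the port and the reply are made up:

```python
# Sketch of "local communications only": a device service that refuses
# any client not on a private (RFC 1918) network. The port and the
# reply are invented for illustration.
import socket
from ipaddress import ip_address

def serve_local_only(port: int = 8444) -> None:
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    srv.bind(("0.0.0.0", port))
    srv.listen(5)
    while True:
        conn, (host, _) = srv.accept()
        if not ip_address(host).is_private:
            conn.close()  # drop anything routed in from the wider internet
            continue
        conn.sendall(b"status: ok\n")
        conn.close()

if __name__ == "__main__":
    serve_local_only()
```

In a real device this check would sit alongside, not replace, binding to the LAN interface and a firewall; the point is that refusing the wider internet can be a deliberate design choice, not an afterthought.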

This might be a heresy in today’s race to network everything, but large, centralized systems are not inevitable. The technical elites are pushing us in that direction, but they really don’t have any good supporting arguments other than the profits of their ever-growing multinational corporations.

But this will change. It will change not only because of security concerns, it will also change because of political concerns. We’re starting to chafe under the worldview of everything producing data about us and what we do, and that data being available to both governments and corporations. Surveillance capitalism won’t be the business model of the internet forever. We need to change the fabric of the internet so that evil governments don’t have the tools to create a horrific totalitarian state. And while good laws and regulations in Western democracies are a great second line of defense, they can’t be our only line of defense.

My guess is that we will soon reach a high-water mark of computerization and connectivity, and that afterward we will make conscious decisions about what and how we decide to interconnect. But we’re still in the honeymoon phase of connectivity. Governments and corporations are punch-drunk on our data, and the rush to connect everything is driven by an even greater desire for power and market share. One of the presentations released by Edward Snowden contained the NSA mantra: “Collect it all.” A similar mantra for the internet today might be: “Connect it all.”

The inevitable backlash will not be driven by the market. It will be deliberate policy decisions that put the safety and welfare of society above individual corporations and industries. It will be deliberate policy decisions that prioritize the security of our systems over the demands of the FBI to weaken them in order to make their law-enforcement jobs easier. It’ll be hard policy for many to swallow, but our safety will depend on it.

**********

The scenarios I’ve outlined, both the technological and economic trends that are causing them and the political changes we need to make to start to fix them, come from my years of working in internet-security technology and policy. All of this is informed by an understanding of both technology and policy. That turns out to be critical, and there aren’t enough people who understand both.

This brings me to my final plea: We need more public-interest technologists.

Over the past couple of decades, we’ve seen examples of getting internet-security policy badly wrong. I’m thinking of the FBI’s “going dark” debate about its insistence that computer devices be designed to facilitate government access, the “vulnerability equities process” about when the government should disclose and fix a vulnerability versus when it should use it to attack other systems, the debacle over paperless touch-screen voting machines, and the DMCA that I discussed above. If you watched any of these policy debates unfold, you saw policy-makers and technologists talking past each other.

Our world-size robot will exacerbate these problems. The historical divide between Washington and Silicon Valley — the mistrust of governments by tech companies and the mistrust of tech companies by governments — is dangerous.

We have to fix this. Getting IoT security right depends on the two sides working together and, even more important, having people who are experts in each working on both. We need technologists to get involved in policy, and we need policy-makers to get involved in technology. We need people who are experts in making both technology and technological policy. We need technologists on congressional staffs, inside federal agencies, working for NGOs, and as part of the press. We need to create a viable career path for public-interest technologists, much as there already is one for public-interest attorneys. We need courses, and degree programs in colleges, for people interested in careers in public-interest technology. We need fellowships in organizations that need these people. We need technology companies to offer sabbaticals for technologists wanting to go down this path. We need an entire ecosystem that supports people bridging the gap between technology and law. We need a viable career path that ensures that even though people in this field won’t make as much as they would in a high-tech start-up, they will have viable careers. The security of our computerized and networked future — meaning the security of ourselves, families, homes, businesses, and communities — depends on it.

This plea is bigger than security, actually. Pretty much all of the major policy debates of this century will have a major technological component. Whether it’s weapons of mass destruction, robots drastically affecting employment, climate change, food safety, or the increasing ubiquity of ever-shrinking drones, understanding the policy means understanding the technology. Our society desperately needs technologists working on the policy. The alternative is bad policy.

**********

The world-size robot is less designed than created. It’s coming without any forethought or architecting or planning; most of us are completely unaware of what we’re building. In fact, I am not convinced we can actually design any of this. When we try to design complex sociotechnical systems like this, we are regularly surprised by their emergent properties. The best we can do is observe and channel these properties as best we can.

Market thinking sometimes makes us lose sight of the human choices and autonomy at stake. Before we get controlled — or killed — by the world-size robot, we need to rebuild confidence in our collective governance institutions. Law and policy may not seem as cool as digital tech, but they’re also places of critical innovation. They’re where we collectively bring about the world we want to live in.

While I might sound like a Cassandra, I’m actually optimistic about our future. Our society has tackled bigger problems than this one. It takes work and it’s not easy, but we eventually find our way clear to make the hard choices necessary to solve our real problems.

The world-size robot we’re building can only be managed responsibly if we start making real choices about the interconnected world we live in. Yes, we need security systems as robust as the threat landscape. But we also need laws that effectively regulate these dangerous technologies. And, more generally, we need to make moral, ethical, and political decisions on how those systems should work. Until now, we’ve largely left the internet alone. We gave programmers a special right to code cyberspace as they saw fit. This was okay because cyberspace was separate and relatively unimportant: That is, it didn’t matter. Now that that’s changed, we can no longer give programmers and the companies they work for this power. Those moral, ethical, and political decisions need, somehow, to be made by everybody. We need to link people with the same zeal that we are currently linking machines. “Connect it all” must be countered with “connect us all.”

This essay previously appeared in New York Magazine.

FedRAMP Compliance Update: AWS GovCloud (US) Region Receives a JAB-Issued FedRAMP High Baseline P-ATO for Three New Services

Post Syndicated from Chris Gile original https://aws.amazon.com/blogs/security/fedramp-compliance-update-aws-govcloud-us-region-receives-a-jab-issued-fedramp-high-baseline-p-ato-for-three-new-services/

Three new services in the AWS GovCloud (US) region have received a Provisional Authority to Operate (P-ATO) from the Joint Authorization Board (JAB) under the Federal Risk and Authorization Management Program (FedRAMP). JAB issued the authorization at the High baseline, which enables US government agencies and their service providers to use these services to process the government’s most sensitive unclassified data, including Personally Identifiable Information (PII), Protected Health Information (PHI), Controlled Unclassified Information (CUI), criminal justice information (CJI), and financial data.

On January 5, 2017, JAB assessed and authorized the following AWS services at the FedRAMP High baseline in the AWS GovCloud (US) Region:

By achieving this milestone, our FedRAMP-authorized service offering now enables you to quickly and easily develop databases that not only manage data but also secure and monitor access.

You can address your most stringent regulatory and compliance requirements while achieving your mission in the AWS GovCloud (US) Region. Learn about AWS and FedRAMP compliance or contact us.

– Chris

Security Risks of TSA PreCheck

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2016/12/security_risks_12.html

Former TSA Administrator Kip Hawley wrote an op-ed pointing out the security vulnerabilities in the TSA’s PreCheck program:

The first vulnerability in the system is its enrollment process, which seeks to verify an applicant’s identity. We know verification is a challenge: A 2011 Government Accountability Office report on TSA’s system for checking airport workers’ identities concluded that it was “not designed to provide reasonable assurance that only qualified applicants” got approved. It’s not a stretch to believe a reasonably competent terrorist could construct an identity that would pass PreCheck’s front end.

The other step in PreCheck’s “intelligence-driven, risk-based security strategy” is absurd on its face: The absence of negative information about a person doesn’t mean he or she is trustworthy. News reports are filled with stories of people who seemed to be perfectly normal right up to the moment they committed a heinous act. There is no screening algorithm and no database check that can accurately predict human behavior — especially on the scale of millions. It is axiomatic that terrorist organizations recruit operatives who have clean backgrounds and interview well.

None of this is news.

Back in 2004, I wrote:

Imagine you’re a terrorist plotter with half a dozen potential terrorists at your disposal. They all apply for a card, and three get one. Guess which are going on the mission? And they’ll buy round-trip tickets with credit cards and have a “normal” amount of luggage with them.

What the Trusted Traveler program does is create two different access paths into the airport: high security and low security. The intent is that only good guys will take the low-security path, and the bad guys will be forced to take the high-security path, but it rarely works out that way. You have to assume that the bad guys will find a way to take the low-security path.

The Trusted Traveler program is based on the dangerous myth that terrorists match a particular profile and that we can somehow pick terrorists out of a crowd if we only can identify everyone. That’s simply not true. Most of the 9/11 terrorists were unknown and not on any watch list. Timothy McVeigh was an upstanding US citizen before he blew up the Oklahoma City Federal Building. Palestinian suicide bombers in Israel are normal, nondescript people. Intelligence reports indicate that Al Qaeda is recruiting non-Arab terrorists for US operations.

I wrote much the same thing in 2007:

Background checks are based on the dangerous myth that we can somehow pick terrorists out of a crowd if we could identify everyone. Unfortunately, there isn’t any terrorist profile that prescreening can uncover. Timothy McVeigh could probably have gotten one of these cards. So could have Eric Rudolph, the pipe bomber at the 1996 Olympic Games in Atlanta. There isn’t even a good list of known terrorists to check people against; the government list used by the airlines has been the butt of jokes for years.

And have we forgotten how prevalent identity theft is these days? If you think having a criminal impersonating you to your bank is bad, wait until they start impersonating you to the Transportation Security Administration.

The truth is that whenever you create two paths through security — a high-security path and a low-security path — you have to assume that the bad guys will find a way to exploit the low-security path. It may be counterintuitive, but we are all safer if the people chosen for more thorough screening are truly random and not based on an error-filled database or a cursory background check.
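The two-paths argument is easy to check with a toy simulation. All of the rates below are hypothetical, chosen only to illustrate the logic: an adaptive plotter shops operatives through the trusted-traveler front end, while random selection for extra screening cannot be shopped around.

```python
# Toy simulation of the two-paths argument. All rates are hypothetical.
import random

TRIALS = 100_000
APPROVAL_RATE = 0.5  # assumed chance a clean-looking operative is approved
RANDOM_EXTRA = 0.5   # assumed chance of extra screening in a random regime

def trusted_traveler_regime() -> bool:
    """A plotter with six operatives sends one who already holds trusted
    status. Returns True if the mission operative gets light screening."""
    approved = sum(random.random() < APPROVAL_RATE for _ in range(6))
    return approved > 0  # if anyone was approved, that is who flies

def random_regime() -> bool:
    """Extra screening is assigned at random, so it cannot be gamed."""
    return random.random() > RANDOM_EXTRA

trusted = sum(trusted_traveler_regime() for _ in range(TRIALS)) / TRIALS
rand = sum(random_regime() for _ in range(TRIALS)) / TRIALS
print(f"light screening, trusted-traveler regime: {trusted:.2f}")  # ~0.98
print(f"light screening, random regime:           {rand:.2f}")     # ~0.50
```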

In a companion blog post, Hawley has more details about why the program doesn’t work:

In the sense that PreCheck bars people who were identified by intelligence or law enforcement agencies as possible terrorists, then it was intelligence-driven. But using that standard for PreCheck is ridiculous since those people already get extra screening or are on the No-Fly list. The movie Patriots Day, out now, reminds us of the tragic and preventable Boston Marathon bombing. The FBI sent agents to talk to the Tsarnaev brothers and investigate them as possible terror suspects. And cleared them. Even they did not meet the “intelligence-driven” definition used in PreCheck.

The other problem with “intelligence-driven” in the PreCheck context is that intelligence actually tells us the opposite: specifically, that terrorists pick clean operatives. If TSA used current intelligence to evaluate risk, it would not be out enrolling everybody it can, extending pre-9/11 levels of screening to anyone not flagged by the security services.

Hawley and I may agree on the problem, but we have completely opposite solutions. The op-ed was too short to include details, but they’re in a companion blog post. Basically, he wants to screen PreCheck passengers more:

In the interests of space, I left out details of what I would suggest as short- and medium-term solutions. Here are a few ideas:

  • Immediately scrub the PreCheck enrollees for false identities. That can probably be accomplished best and most quickly by getting permission from members, and then using, commercial data. If the results show that PreCheck has already been penetrated, the program should be suspended.

  • Deploy K-9 teams at PreCheck lanes.

  • Use behaviorally trained officers to interact with and check the credentials of PreCheck passengers.

  • Use Explosives Trace Detection cotton swabs on PreCheck passengers at a much higher rate. Same with removing shoes.

  • Turn on the body scanners and keep them fully utilized.

  • Allow liquids to stay in the carry-on since TSA scanners can detect threat liquids.

  • Work with the airlines to keep the PreCheck experience positive.

  • Work with airports to place PreCheck lanes away from regular checkpoints so as not to diminish lane capacity for non-PreCheck passengers. Rental Car check-in areas could be one alternative. Also, downtown check-in and screening (with secure transport to the airport) is a possibility.

These solutions completely ignore the data from the real-world experiment PreCheck has been. Hawley writes that PreCheck tells us that “terrorists pick clean operatives.” That’s exactly wrong. PreCheck tells us that, basically, there are no terrorists. If 1) it’s an easier way through airport security that terrorists will invariably use, and 2) there have been no instances of terrorists using it in the 10+ years it and its predecessors have been in operation, then the inescapable conclusion is that the threat is minimal. Instead of screening PreCheck passengers more, we should screen everybody else less. This is me in 2012: “I think the PreCheck level of airport screening is what everyone should get, and that the no-fly list and the photo ID check add nothing to security.”

I agree with Hawley that we need to overhaul airport security. Me in 2010: “Airport security is the last line of defense, and it’s not a very good one.” We need to recognize that the actual risk is much lower than we fear, and ratchet airport security down accordingly. And then we need to continue to invest in investigation and intelligence: security measures that work regardless of the tactic or target.

Mass Spectrometry for Surveillance

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2016/11/mass_spectromet.html

Yet another way to collect personal data on people without their knowledge or consent: “Lifestyle chemistries from phones for individual profiling”:

Abstract: Imagine a scenario where personal belongings such as pens, keys, phones, or handbags are found at an investigative site. It is often valuable to the investigative team that is trying to trace back the belongings to an individual to understand their personal habits, even when DNA evidence is also available. Here, we develop an approach to translate chemistries recovered from personal objects such as phones into a lifestyle sketch of the owner, using mass spectrometry and informatics approaches. Our results show that phones’ chemistries reflect a personalized lifestyle profile. The collective repertoire of molecules found on these objects provides a sketch of the lifestyle of an individual by highlighting the type of hygiene/beauty products the person uses, diet, medical status, and even the location where this person may have been. These findings introduce an additional form of trace evidence from skin-associated lifestyle chemicals found on personal belongings. Such information could help a criminal investigator narrowing down the owner of an object found at a crime scene, such as a suspect or missing person.
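The informatics step the abstract describes reduces, at its core, to matching detected molecules against reference libraries of known compounds. Here is a deliberately toy version of that matching, with an invented reference table (the real work uses large spectral databases, but the compound-trait pairings below are real):

```python
# Toy version of the matching step: map compounds detected on an object
# to lifestyle traits. The reference table is invented for illustration;
# the compound-trait pairings themselves are factual.
REFERENCE = {
    "caffeine": "coffee or tea drinker",
    "avobenzone": "uses sunscreen",
    "citalopram": "takes antidepressant medication",
    "DEET": "recently used insect repellent",
}

def lifestyle_sketch(detected: list[str]) -> list[str]:
    return [REFERENCE[c] for c in detected if c in REFERENCE]

print(lifestyle_sketch(["caffeine", "DEET", "unidentified_m/z_445"]))
# -> ['coffee or tea drinker', 'recently used insect repellent']
```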

News article.

Lessons From the Dyn DDoS Attack

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2016/11/lessons_from_th_5.html

A week ago Friday, someone took down numerous popular websites in a massive distributed denial-of-service (DDoS) attack against the domain name provider Dyn. DDoS attacks are neither new nor sophisticated. The attacker sends a massive amount of traffic, causing the victim’s system to slow to a crawl and eventually crash. There are more or less clever variants, but basically, it’s a datapipe-size battle between attacker and victim. If the defender has a larger capacity to receive and process data, he or she will win. If the attacker can throw more data than the victim can process, he or she will win.

The attacker can build a giant data cannon, but that’s expensive. It is much smarter to recruit millions of innocent computers on the internet. This is the “distributed” part of the DDoS attack, and pretty much how it’s worked for decades. Cybercriminals infect innocent computers around the internet and recruit them into a botnet. They then target that botnet against a single victim.

You can imagine how it might work in the real world. If I can trick tens of thousands of others into ordering pizzas to be delivered to your house at the same time, I can clog up your street and prevent any legitimate traffic from getting through. If I can trick many millions, I might be able to crush your house from the weight. That’s a DDoS attack — it’s simple brute force.

As you’d expect, DDoSers have various motives. The attacks started out as a way to show off, then quickly transitioned to a method of intimidation — or a way of just getting back at someone you didn’t like. More recently, they’ve become vehicles of protest. In 2013, the hacker group Anonymous petitioned the White House to recognize DDoS attacks as a legitimate form of protest. Criminals have used these attacks as a means of extortion, although one group found that just the fear of attack was enough. Military agencies are also thinking about DDoS as a tool in their cyberwar arsenals. A 2007 DDoS attack against Estonia was blamed on Russia and widely called an act of cyberwar.

The DDoS attack against Dyn two weeks ago was nothing new, but it illustrated several important trends in computer security.

These attack techniques are broadly available. Fully capable DDoS attack tools are available for free download. Criminal groups offer DDoS services for hire. The particular attack technique used against Dyn was first used a month earlier. It’s called Mirai, and since the source code was released four weeks ago, over a dozen botnets have incorporated the code.

The Dyn attacks were probably not originated by a government. The perpetrators were most likely hackers mad at Dyn for helping Brian Krebs identify — and the FBI arrest — two Israeli hackers who were running a DDoS-for-hire ring. Recently I have written about probing DDoS attacks against internet infrastructure companies that appear to be perpetrated by a nation-state. But, honestly, we don’t know for sure.

This is important. Software spreads capabilities. The smartest attacker needs to figure out the attack and write the software. After that, anyone can use it. There’s not even much of a difference between government and criminal attacks. In December 2014, there was a legitimate debate in the security community as to whether the massive attack against Sony had been perpetrated by a nation-state with a $20 billion military budget or a couple of guys in a basement somewhere. The internet is the only place where we can’t tell the difference. Everyone uses the same tools, the same techniques and the same tactics.

These attacks are getting larger. The Dyn DDoS attack set a record at 1.2 Tbps. The previous record holder was the attack against cybersecurity journalist Brian Krebs a month prior at 620 Gbps. This is much larger than required to knock the typical website offline. A year ago, it was unheard of. Now it occurs regularly.

The botnets attacking Dyn and Brian Krebs consisted largely of unsecure Internet of Things (IoT) devices — webcams, digital video recorders, routers and so on. This isn’t new, either. We’ve already seen internet-enabled refrigerators and TVs used in DDoS botnets. But again, the scale is bigger now. In 2014, the news was hundreds of thousands of IoT devices — the Dyn attack used millions. Analysts expect the IoT to increase the number of things on the internet by a factor of 10 or more. Expect these attacks to similarly increase.
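The record sizes and the device counts are consistent with each other: even a modest upstream trickle per device reaches terabit scale when multiplied across a botnet of millions. The per-device figure below is an assumed round number, not a measurement.

```python
# Back-of-the-envelope: how a botnet of cheap devices reaches 1.2 Tbps.
# The per-device upstream bandwidth is an assumed round number.
per_device_mbps = 1.0   # hypothetical upstream per infected device
target_tbps = 1.2       # the reported peak of the Dyn attack
devices_needed = target_tbps * 1_000_000 / per_device_mbps
print(f"~{devices_needed:,.0f} devices at {per_device_mbps} Mbps each")
# -> ~1,200,000 devices: millions of IoT bots, matching the reports
```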

The problem is that these IoT devices are unsecure and likely to remain that way. The economics of internet security don’t trickle down to the IoT. Commenting on the Krebs attack last month, I wrote:

The market can’t fix this because neither the buyer nor the seller cares. Think of all the CCTV cameras and DVRs used in the attack against Brian Krebs. The owners of those devices don’t care. Their devices were cheap to buy, they still work, and they don’t even know Brian. The sellers of those devices don’t care: They’re now selling newer and better models, and the original buyers only cared about price and features. There is no market solution because the insecurity is what economists call an externality: It’s an effect of the purchasing decision that affects other people. Think of it kind of like invisible pollution.

To be fair, one company that made some of the unsecure things used in these attacks recalled its unsecure webcams. But this is more of a publicity stunt than anything else. I would be surprised if the company got many devices back. We already know that the reputational damage from having your unsecure software made public isn’t large and doesn’t last. At this point, the market still largely rewards sacrificing security in favor of price and time-to-market.

DDoS prevention works best deep in the network, where the pipes are the largest and the capability to identify and block the attacks is the most evident. But the backbone providers have no incentive to do this. They don’t feel the pain when the attacks occur and they have no way of billing for the service when they provide it. So they let the attacks through and force the victims to defend themselves. In many ways, this is similar to the spam problem. It, too, is best dealt with in the backbone, but similar economics dump the problem onto the endpoints.

We’re unlikely to get any regulation forcing backbone companies to clean up either DDoS attacks or spam, just as we are unlikely to get any regulations forcing IoT manufacturers to make their systems secure. This is me again:

What this all means is that the IoT will remain insecure unless government steps in and fixes the problem. When we have market failures, government is the only solution. The government could impose security regulations on IoT manufacturers, forcing them to make their devices secure even though their customers don’t care. They could impose liabilities on manufacturers, allowing people like Brian Krebs to sue them. Any of these would raise the cost of insecurity and give companies incentives to spend money making their devices secure.

Neither is likely, so that leaves the victims to pay. This is where we are in much of computer security. Because the hardware, software and networks we use are so unsecure, we have to pay an entire industry to provide after-the-fact security.

There are solutions you can buy. Many companies offer DDoS protection, although they’re generally calibrated to the older, smaller attacks. We can safely assume that they’ll up their offerings, although the cost might be prohibitive for many users. Understand your risks. Buy mitigation if you need it, but understand its limitations. Know the attacks are possible and will succeed if large enough. And the attacks are getting larger all the time. Prepare for that.

This essay previously appeared on the SecurityIntelligence website.

Of course smart homes are targets for hackers

Post Syndicated from Matthew Garrett original https://mjg59.dreamwidth.org/45483.html

The Wirecutter, an in-depth comparative review site for various electrical and electronic devices, just published an opinion piece on whether users should be worried about security issues in IoT devices. The summary: avoid devices that don’t require passwords (or don’t force you to change a default) and devices that want you to disable security, follow general network security best practices, but otherwise don’t worry – criminals aren’t likely to target you.

This is terrible, irresponsible advice. It’s true that most users aren’t likely to be individually targeted by random criminals, but that’s a poor threat model. As I’ve mentioned before, you need to worry about people with an interest in you. Making purchasing decisions based on the assumption that you’ll never end up dating someone with enough knowledge to compromise a cheap IoT device (or even meeting an especially creepy one in a bar) is not safe, and giving advice that doesn’t take that into account is a huge disservice to many potentially vulnerable users.

Of course, there’s also the larger question raised by last week’s problems. Insecure IoT devices still pose a threat to the wider internet, even if the owner’s data isn’t at risk. I may not be optimistic about the ease of fixing this problem, but that doesn’t mean we should just give up. It is important that we improve the security of devices, and many vendors are just bad at that.

So, here are a few things that should be a minimum when considering an IoT device:

  • Does the vendor publish a security contact? (If not, they don’t care about security)
  • Does the vendor provide frequent software updates, even for devices that are several years old? (If not, they don’t care about security)
  • Has the vendor ever denied a security issue that turned out to be real? (If so, they care more about PR than security)
  • Is the vendor able to provide the source code to any open source components they use? (If not, they don’t know which software is in their own product and so don’t care about security, and also they’re probably infringing my copyright)
  • Do they mark updates as fixing security bugs? (If not, they care more about hiding security issues than fixing them)
  • Has the vendor ever threatened to prosecute a security researcher? (If so, again, they care more about PR than security)
  • Does the vendor provide a public minimum support period for the device? (If not, they don’t care about security or their users)

    I’ve worked with big name vendors who did a brilliant job here. I’ve also worked with big name vendors who responded with hostility when I pointed out that they were selling a device with arbitrary remote code execution. Going with brand names is probably a good proxy for many of these requirements, but it’s insufficient.

So here are my recommendations to The Wirecutter – talk to a wide range of security experts about the issues that users should be concerned about, and figure out how to test these things yourself. Don’t just ask vendors whether they care about security, ask them what their processes and procedures look like. Look at their history. And don’t assume that just because nobody’s interested in you, everybody else’s level of risk is equal.


    Malicious AI

    Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2016/10/malicious_ai.html

    It’s not hard to imagine the criminal possibilities of automation, autonomy, and artificial intelligence. But the imaginings are becoming mainstream — and the future isn’t too far off.

    Along similar lines, computers are able to predict court verdicts. My guess is that the real use here isn’t to predict actual court verdicts, but for well-paid defense teams to test various defensive tactics.

    Virtual Kidnapping

    Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2016/10/virtual_kidnapp_1.html

This is a harrowing story of a scam artist who convinced a mother that her daughter had been kidnapped. More stories are here. It’s unclear whether these virtual kidnappers use data about their victims, or just call people at random and hope to get lucky. Still, it’s a new criminal use of smartphones and ubiquitous information.

    Reminds me of the scammers who call low-wage workers at retail establishments late at night and convince them to do outlandish and occasionally dangerous things.

    Security Economics of the Internet of Things

    Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2016/10/security_econom_1.html

    Brian Krebs is a popular reporter on the cybersecurity beat. He regularly exposes cybercriminals and their tactics, and consequently is regularly a target of their ire. Last month, he wrote about an online attack-for-hire service that resulted in the arrest of the two proprietors. In the aftermath, his site was taken down by a massive DDoS attack.

In many ways, this is nothing new. Distributed denial-of-service attacks are a family of attacks that cause websites and other Internet-connected systems to crash by overloading them with traffic. The “distributed” part means that other insecure computers on the Internet — sometimes in the millions — are recruited to a botnet to unwittingly participate in the attack. The tactics are decades old; DDoS attacks are perpetrated by lone hackers trying to be annoying, criminals trying to extort money, and governments testing their tactics. There are defenses, and there are companies that offer DDoS mitigation services for hire.

    Basically, it’s a size vs. size game. If the attackers can cobble together a fire hose of data bigger than the defender’s capability to cope with, they win. If the defenders can increase their capability in the face of attack, they win.

    What was new about the Krebs attack was both the massive scale and the particular devices the attackers recruited. Instead of using traditional computers for their botnet, they used CCTV cameras, digital video recorders, home routers, and other embedded computers attached to the Internet as part of the Internet of Things.

    Much has been written about how the IoT is wildly insecure. In fact, the software used to attack Krebs was simple and amateurish. What this attack demonstrates is that the economics of the IoT mean that it will remain insecure unless government steps in to fix the problem. This is a market failure that can’t get fixed on its own.

Our computers and smartphones are as secure as they are because there are teams of security engineers working on the problem. Companies like Microsoft, Apple, and Google spend a lot of time testing their code before it’s released, and quickly patch vulnerabilities when they’re discovered. Those companies can support such teams because those companies make a huge amount of money, either directly or indirectly, from their software — and, in part, compete on its security. This isn’t true of embedded systems like digital video recorders or home routers. Those systems are sold at a much lower margin, and are often built by offshore third parties. The companies involved simply don’t have the expertise to make them secure.

    Even worse, most of these devices don’t have any way to be patched. Even though the source code to the botnet that attacked Krebs has been made public, we can’t update the affected devices. Microsoft delivers security patches to your computer once a month. Apple does it just as regularly, but not on a fixed schedule. But the only way for you to update the firmware in your home router is to throw it away and buy a new one.

    The security of our computers and phones also comes from the fact that we replace them regularly. We buy new laptops every few years. We get new phones even more frequently. This isn’t true for all of the embedded IoT systems. They last for years, even decades. We might buy a new DVR every five or ten years. We replace our refrigerator every 25 years. We replace our thermostat approximately never. Already the banking industry is dealing with the security problems of Windows 95 embedded in ATMs. This same problem is going to occur all over the Internet of Things.

    The market can’t fix this because neither the buyer nor the seller cares. Think of all the CCTV cameras and DVRs used in the attack against Brian Krebs. The owners of those devices don’t care. Their devices were cheap to buy, they still work, and they don’t even know Brian. The sellers of those devices don’t care: they’re now selling newer and better models, and the original buyers only cared about price and features. There is no market solution because the insecurity is what economists call an externality: it’s an effect of the purchasing decision that affects other people. Think of it kind of like invisible pollution.

    What this all means is that the IoT will remain insecure unless government steps in and fixes the problem. When we have market failures, government is the only solution. The government could impose security regulations on IoT manufacturers, forcing them to make their devices secure even though their customers don’t care. They could impose liabilities on manufacturers, allowing people like Brian Krebs to sue them. Any of these would raise the cost of insecurity and give companies incentives to spend money making their devices secure.

    Of course, this would only be a domestic solution to an international problem. The Internet is global, and attackers can just as easily build a botnet out of IoT devices from Asia as from the United States. Long term, we need to build an Internet that is resilient against attacks like this. But that’s a long time coming. In the meantime, you can expect more attacks that leverage insecure IoT devices.

    This essay previously appeared on Vice Motherboard.

    Slashdot thread.

    Here are some of the things that are vulnerable.

EDITED TO ADD (10/17): DARPA is looking for IoT-security ideas from the private sector.

    The Hacking of Yahoo

    Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2016/09/the_hacking_of_.html

    Last week, Yahoo! announced that it was hacked pretty massively in 2014. Over half a billion usernames and passwords were affected, making this the largest data breach of all time.

    Yahoo! claimed it was a government that did it:

    A recent investigation by Yahoo! Inc. has confirmed that a copy of certain user account information was stolen from the company’s network in late 2014 by what it believes is a state-sponsored actor.

    I did a bunch of press interviews after the hack, and repeatedly said that “state-sponsored actor” is often code for “please don’t blame us for our shoddy security because it was a really sophisticated attacker and we can’t be expected to defend ourselves against that.”

Well, it turns out that Yahoo! had shoddy security and it was a bunch of criminals who hacked them. The first story is from the New York Times and outlines the many ways Yahoo! ignored security issues.

    But when it came time to commit meaningful dollars to improve Yahoo’s security infrastructure, Ms. Mayer repeatedly clashed with Mr. Stamos, according to the current and former employees. She denied Yahoo’s security team financial resources and put off proactive security defenses, including intrusion-detection mechanisms for Yahoo’s production systems.

    The second story is from the Wall Street Journal:

    InfoArmor said the hackers, whom it calls “Group E,” have sold the entire Yahoo database at least three times, including one sale to a state-sponsored actor. But the hackers are engaged in a moneymaking enterprise and have “a significant criminal track record,” selling data to other criminals for spam or to affiliate marketers who aren’t acting on behalf of any government, said Andrew Komarov, chief intelligence officer with InfoArmor Inc.

    That is not the profile of a state-sponsored hacker, Mr. Komarov said. “We don’t see any reason to say that it’s state sponsored,” he said. “Their clients are state sponsored, but not the actual hackers.”

    Some technical notes on the PlayPen case

    Post Syndicated from Robert Graham original http://blog.erratasec.com/2016/09/some-technical-notes-on-playpen-case.html

In March of 2015, the FBI took control of a Tor onion child porn website (“PlayPen”), then used an 0day exploit to upload malware to visitors’ computers, to identify them. There is some controversy over the warrant they used, and over government mass hacking in general. However, much of the discussion misses some technical details, which I thought I’d discuss here.

    IP address
    In a post on the case, Orin Kerr claims:

    retrieving IP addresses is clearly a search

    He is wrong, at least, in the general case. Uploading malware to gather other things (hostname, username, MAC address) is clearly a search. But discovering the IP address is a different thing.
    Today’s homes contain many devices behind a single router. The home has only one public IP address, that of the router. All the other devices have local IP addresses. The router then does network address translation (NAT) in order to convert outgoing traffic to all use the public IP address.
    The FBI sought the public IP address of the NAT/router, not the local IP address of the perp’s computer. The malware (“NIT”) didn’t search the computer for the IP address. Instead the NIT generated network traffic, destined to the FBI’s computers. The FBI discovered the suspect’s public IP address by looking at their own computers.
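As a minimal illustration of that last point, the sketch below shows ordinary socket behavior, not the actual NIT: a server learns the source address of an incoming connection simply by accepting it, and for a client behind NAT that address is the router’s public IP. The port number is an arbitrary choice.

```python
# Minimal sketch: the *receiving* server observes the public (post-NAT)
# source address of every connection; nothing on the client is inspected.
import socket

def log_peer_addresses(port: int = 8080) -> None:
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as srv:
        srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        srv.bind(("0.0.0.0", port))
        srv.listen()
        while True:
            conn, (ip, src_port) = srv.accept()
            # For a NATed client this is the router's address, not the PC's.
            print(f"connection from public IP {ip} (source port {src_port})")
            conn.close()

if __name__ == "__main__":
    log_peer_addresses()
```

The observation happens entirely at the server’s end of the wire, which is exactly why the IP address question differs from the hostname, username, and MAC address questions.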
Historically, there have been similar ways of getting this IP address (from a Tor hidden user) without “hacking”. In the past, Tor used to leak DNS lookups, which would often lead to the user’s ISP, or to the user’s IP address itself. Another technique would be to provide rich content files (like PDFs) or video files that the user would have to download to view, and which would then contact the Internet (contacting the FBI’s computers) themselves, bypassing Tor.
    Since the Fourth Amendment is about where the search happens, and not what is discovered, it’s not a search to find the IP address in packets arriving at FBI servers. How the FBI discovered the IP address may be a search (running malware on the suspect’s computer), but the public IP address itself doesn’t necessarily mean a search happened.

Of course, uploading malware just to make the machine transmit packets to an FBI server, and then getting the IP address from those packets, is still problematic. It’s gotta be something that requires a warrant, even though it’s not precisely the malware searching the machine for its IP address.

In any event, even setting the IP address aside, PlayPen searches still happened for the hostname, username, and MAC address. Imagine the FBI gets a search warrant, shows up at the suspect’s house, and finds no child porn. They then look at the WiFi router, and find that the suspected MAC address is indeed connected. They then use other tools to find that the device with that MAC address is located in the neighbor’s house — the neighbor has been piggybacking off the WiFi.
    It’s a pre-crime warrant (#MinorityReport)
    The warrant allows the exploit/malware/search to be used whenever somebody logs in with a username and password.
    The key thing here is that the warrant includes people who have not yet created an account on the server at the time the warrant is written. They will connect, create an account, log in, then start accessing the site.
    In other words, the warrant includes people who have never committed a crime when the warrant was issued, but who first commit the crime after the warrant. It’s a pre-crime warrant. 
    Sure, it’s possible in any warrant to catch pre-crime. For example, a warrant for a drug dealer may also catch a teenager making their first purchase of drugs. But this seems quantitatively different. It’s not targeting the known/suspected criminal — it’s targeting future criminals.
    This could easily be solved by limiting the warrant to only accounts that have already been created on the server.
    It’s more than an anticipatory warrant

    People keep saying it’s an anticipatory warrant, as if this explains everything.

I’m not a lawyer, but even I can see that this explains only that the warrant anticipates future probable cause. “Anticipatory warrant” doesn’t explain that the warrant also anticipates a future place to be searched. As far as I can tell, “anticipatory place” warrants don’t exist and are a clear violation of the Fourth Amendment. This makes the warrant look like a “general warrant”, which the Fourth Amendment was designed to prevent.

    Orin’s post includes some “unknown place” examples — but those specify something else in particular. A roving wiretap names a person, and the “place” is whatever phone they use. In contrast, this PlayPen warrant names no person. Orin thinks that the problem may be that more than one person is involved, but he is wrong. A warrant can (presumably) name multiple people, or you can have multiple warrants, one for each person. Instead, the problem here is that no person is named. It’s not “Rob’s computer”, it’s “the computer of whoever logs in”. Even if the warrant were ultimately for a single person, it’d still be problematic because the person is not identified.
    Orin cites another case, where the FBI places a beeper into a package in order to track it. The place, in this case, is the package. Again, this is nowhere close to this case, where no specific/particular place is mentioned, only a type of place. 
    This could easily have been resolved. Most accounts were created before the warrant was issued. The warrant could simply have listed all the usernames, saying the computers of those using these accounts are the places to search. It’s a long list of usernames (1,500?), but if you can’t include them all in a single warrant, in this day and age of automation, I’d imagine you could easily create 1,500 warrants.
    It’s malware

    As a techy, the name for what the FBI did is “hacking”, and the name for their software is “malware” not “NIT”. The definitions don’t change depending upon who’s doing it and for what purpose. That the FBI uses weasel words to distract from what it’s doing seems like a violation of some sort of principle.
    Conclusion

I am not a lawyer, I am a revolutionary. I care less about precedent and more about how a Police State might abuse technology. That a warrant can be issued whose condition is something like “whoever logs into the server” seems like a scary potential for abuse. That a warrant can be designed to catch pre-crime seems even scarier, like science fiction. That a warrant might not be issued for something called “malware”, but would be issued for something called “NIT”, scares me the most.
    This warrant could easily have been narrower. It could have listed all the existing account holders. It could’ve been even narrower, for account holders where the server logs prove they’ve already downloaded child porn.
    Even then, we need to be worried about FBI mass hacking. I agree that FBI has good reason to keep the 0day secret, and that it’s not meaningful to the defense. But in general, I think courts should demand an overabundance of transparency — the police could be doing something nefarious, so the courts should demand transparency to prevent that.

    Someone Is Learning How to Take Down the Internet

    Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2016/09/someone_is_lear.html

    Over the past year or two, someone has been probing the defenses of the companies that run critical pieces of the Internet. These probes take the form of precisely calibrated attacks designed to determine exactly how well these companies can defend themselves, and what would be required to take them down. We don’t know who is doing this, but it feels like a large nation state. China or Russia would be my first guesses.

    First, a little background. If you want to take a network off the Internet, the easiest way to do it is with a distributed denial-of-service attack (DDoS). Like the name says, this is an attack designed to prevent legitimate users from getting to the site. There are subtleties, but basically it means blasting so much data at the site that it’s overwhelmed. These attacks are not new: hackers do this to sites they don’t like, and criminals have done it as a method of extortion. There is an entire industry, with an arsenal of technologies, devoted to DDoS defense. But largely it’s a matter of bandwidth. If the attacker has a bigger fire hose of data than the defender has, the attacker wins.

Recently, some of the major companies that provide the basic infrastructure that makes the Internet work have seen an increase in DDoS attacks against them. Moreover, they have seen a certain profile of attacks. These attacks are significantly larger than the ones they’re used to seeing. They last longer. They’re more sophisticated. And they look like probing. One week, the attack would start at a particular level and slowly ramp up before stopping. The next week, it would start at that higher point and continue. And so on, along those lines, as if the attacker were looking for the exact point of failure.

    The attacks are also configured in such a way as to see what the company’s total defenses are. There are many different ways to launch a DDoS attack. The more attack vectors you employ simultaneously, the more different defenses the defender has to counter with. These companies are seeing more attacks using three or four different vectors. This means that the companies have to use everything they’ve got to defend themselves. They can’t hold anything back. They’re forced to demonstrate their defense capabilities for the attacker.

I am unable to give details, because these companies spoke with me under condition of anonymity. But this all is consistent with what Verisign is reporting. Verisign operates the registry for many popular top-level Internet domains, like .com and .net. If it goes down, there’s a global blackout of all websites and e-mail addresses in the most common top-level domains. Every quarter, Verisign publishes a DDoS trends report. While its publication doesn’t have the level of detail I heard from the companies I spoke with, the trends are the same: “in Q2 2016, attacks continued to become more frequent, persistent, and complex.”

    There’s more. One company told me about a variety of probing attacks in addition to the DDoS attacks: testing the ability to manipulate Internet addresses and routes, seeing how long it takes the defenders to respond, and so on. Someone is extensively testing the core defensive capabilities of the companies that provide critical Internet services.

    Who would do this? It doesn’t seem like something an activist, criminal, or researcher would do. Profiling core infrastructure is common practice in espionage and intelligence gathering. It’s not normal for companies to do that. Furthermore, the size and scale of these probes — and especially their persistence — points to state actors. It feels like a nation’s military cybercommand trying to calibrate its weaponry in the case of cyberwar. It reminds me of the US’s Cold War program of flying high-altitude planes over the Soviet Union to force their air-defense systems to turn on, to map their capabilities.

    What can we do about this? Nothing, really. We don’t know where the attacks come from. The data I see suggests China, an assessment shared by the people I spoke with. On the other hand, it’s possible to disguise the country of origin for these sorts of attacks. The NSA, which has more surveillance in the Internet backbone than everyone else combined, probably has a better idea, but unless the US decides to make an international incident over this, we won’t see any attribution.

    But this is happening. And people should know.

    This essay previously appeared on Lawfare.com.

    EDITED TO ADD: Slashdot thread.

    EDITED TO ADD (9/15): Podcast with me on the topic.

    Internet Disinformation Service for Hire

    Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2016/09/internet_disinf.html

    Yet another leaked catalog of Internet attack services, this one specializing in disinformation:

But Aglaya had much more to offer, according to its brochure. For eight- to 12-week campaigns costing €2,500 per day, the company promised to “pollute” internet search results and social networks like Facebook and Twitter “to manipulate current events.” For this service, which it labelled “Weaponized Information,” Aglaya offered “infiltration,” “ruse,” and “sting” operations to “discredit a target” such as an “individual or company.”

    “[We] will continue to barrage information till it gains ‘traction’ & top 10 search results yield a desired results on ANY Search engine,” the company boasted as an extra “benefit” of this service.

    Aglaya also offered censorship-as-a-service, or Distributed Denial of Service (DDoS) attacks, for only €600 a day, using botnets to “send dummy traffic” to targets, taking them offline, according to the brochure. As part of this service, customers could buy an add-on to “create false criminal charges against Targets in their respective countries” for a more costly €1 million.

    […]

    Some of Aglaya’s offerings, according to experts who reviewed the document for Motherboard, are likely to be exaggerated or completely made-up. But the document shows that there are governments interested in these services, which means there will be companies willing to fill the gaps in the market and offer them.
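Taking the brochure’s day rates at face value, the implied totals are easy to work out; here is a quick sketch (the rates come from the quoted text, while the 30-day DDoS duration is an arbitrary illustrative choice):

```python
# Pure arithmetic on the brochure's quoted day rates.
DISINFO_RATE_EUR_PER_DAY = 2_500   # "Weaponized Information" campaigns
DDOS_RATE_EUR_PER_DAY = 600        # censorship-as-a-service

for weeks in (8, 12):              # the quoted campaign-length bounds
    total = weeks * 7 * DISINFO_RATE_EUR_PER_DAY
    print(f"{weeks}-week disinformation campaign: ~EUR {total:,}")
print(f"30 days of DDoS: ~EUR {30 * DDOS_RATE_EUR_PER_DAY:,}")
# -> EUR 140,000 to EUR 210,000 for disinformation; EUR 18,000 for the DDoS
```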

    Cybercrime as a Tax on the Internet Economy

    Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2016/09/cybercrime_as_a_1.html

    I was reading this 2014 McAfee report on the economic impact of cybercrime, and came across this interesting quote on how security is a tax on the Internet economy:

Another way to look at the opportunity cost of cybercrime is to see it as a share of the Internet economy. Studies estimate that the Internet economy annually generates between $2 trillion and $3 trillion, a share of the global economy that is expected to grow rapidly. If our estimates are right, cybercrime extracts between 15% and 20% of the value created by the Internet, a heavy tax on the potential for economic growth and job creation and a share of revenue that is significantly larger than any other transnational criminal activity.

    Of course you can argue with the numbers, and there’s good reason to believe that the actual costs of cybercrime are much lower. And, of course, those costs are largely indirect costs. It’s not that cybercriminals are getting away with all that value; it’s largely spent on security products and services from companies like McAfee (and my own IBM Security).
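For what it’s worth, here is the dollar range the quoted estimate implies, using nothing beyond the report’s own bounds:

```python
# 15-20% of a $2-3 trillion internet economy, per the McAfee report.
economy_usd_trillion = (2.0, 3.0)
tax_share = (0.15, 0.20)

low = economy_usd_trillion[0] * tax_share[0]    # most conservative corner
high = economy_usd_trillion[1] * tax_share[1]   # most aggressive corner
print(f"Implied cybercrime 'tax': ${low * 1000:.0f}B to ${high * 1000:.0f}B per year")
# -> $300B to $600B per year, which is why the estimate draws skepticism
```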

    In Liars and Outliers I talk about security as a tax on the honest.

    The NSA Is Hoarding Vulnerabilities

    Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2016/08/the_nsa_is_hoar.html

The National Security Agency is lying to us. We know that because data stolen from an NSA server was dumped on the Internet. The agency is hoarding information about security vulnerabilities in the products you use, because it wants to use that information to hack others’ computers. Those vulnerabilities aren’t being reported, and aren’t getting fixed, making your computers and networks unsafe.

    On August 13, a group calling itself the Shadow Brokers released 300 megabytes of NSA cyberweapon code on the Internet. Near as we experts can tell, the NSA network itself wasn’t hacked; what probably happened was that a “staging server” for NSA cyberweapons — that is, a server the NSA was making use of to mask its surveillance activities — was hacked in 2013.

The NSA inadvertently resecured itself in what was coincidentally the early weeks of the Snowden document release. The people behind the leak used casual hacker lingo, and made a weird, implausible proposal involving holding a bitcoin auction for the rest of the data: “!!! Attention government sponsors of cyber warfare and those who profit from it !!!! How much you pay for enemies cyber weapons?”

    Still, most people believe the hack was the work of the Russian government and the data release some sort of political message. Perhaps it was a warning that if the US government exposes the Russians as being behind the hack of the Democratic National Committee — or other high-profile data breaches — the Russians will expose NSA exploits in turn.

    But what I want to talk about is the data. The sophisticated cyberweapons in the data dump include vulnerabilities and “exploit code” that can be deployed against common Internet security systems. Products targeted include those made by Cisco, Fortinet, TOPSEC, Watchguard, and Juniper — systems that are used by both private and government organizations around the world. Some of these vulnerabilities have been independently discovered and fixed since 2013, and some had remained unknown until now.

All of them are examples of the NSA — despite what it and other representatives of the US government say — prioritizing its ability to conduct surveillance over our security. Here’s one example. Security researcher Mustafa al-Bassam found an attack tool codenamed BENIGNCERTAIN that tricks certain Cisco firewalls into exposing some of their memory, including their authentication passwords. Those passwords can then be used to decrypt virtual private network, or VPN, traffic, completely bypassing the firewalls’ security. Cisco hasn’t sold these firewalls since 2009, but they’re still in use today.

    Vulnerabilities like that one could have, and should have, been fixed years ago. And they would have been, if the NSA had made good on its word to alert American companies and organizations when it had identified security holes.

Over the past few years, different parts of the US government have repeatedly assured us that the NSA does not hoard “zero days,” the term used by security experts for vulnerabilities unknown to software vendors. After we learned from the Snowden documents that the NSA purchases zero-day vulnerabilities from cyberweapons arms manufacturers, the Obama administration announced, in early 2014, that the NSA must disclose flaws in common software so they can be patched (unless there is “a clear national security or law enforcement” use).

Later that year, National Security Council cybersecurity coordinator and special adviser to the president on cybersecurity issues Michael Daniel insisted that the US doesn’t stockpile zero-days (except for the same narrow exemption). An official statement from the White House in 2014 said the same thing.

    The Shadow Brokers data shows this is not true. The NSA hoards vulnerabilities.

    Hoarding zero-day vulnerabilities is a bad idea. It means that we’re all less secure. When Edward Snowden exposed many of the NSA’s surveillance programs, there was considerable discussion about what the agency does with vulnerabilities in common software products that it finds. Inside the US government, the system of figuring out what to do with individual vulnerabilities is called the Vulnerabilities Equities Process (VEP). It’s an inter-agency process, and it’s complicated.

There is a fundamental tension between attack and defense. The NSA can keep the vulnerability secret and use it to attack other networks. In such a case, we are all at risk of someone else finding and using the same vulnerability. Alternatively, the NSA can disclose the vulnerability to the product vendor and see that it gets fixed. In this case, we are all secure against whoever might be using the vulnerability, but the NSA can’t use it to attack other systems.

    There are probably some overly pedantic word games going on. Last year, the NSA said that it discloses 91 percent of the vulnerabilities it finds. Leaving aside the question of whether that remaining 9 percent represents 1, 10, or 1,000 vulnerabilities, there’s the bigger question of what qualifies in the NSA’s eyes as a “vulnerability.”
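The base-rate problem is easy to make concrete. In the sketch below, the totals are hypothetical, chosen only to show how little a disclosure percentage tells us on its own:

```python
# A fixed 9% withheld hides wildly different absolute numbers.
for total_found in (11, 111, 11_111):   # hypothetical totals
    withheld = round(total_found * 0.09)
    print(f"if the NSA found {total_found:>6} vulnerabilities, it kept ~{withheld}")
# -> keeps ~1, ~10, or ~1,000 - the same 9% each time
```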

    Not all vulnerabilities can be turned into exploit code. The NSA loses no attack capabilities by disclosing the vulnerabilities it can’t use, and doing so gets its numbers up; it’s good PR. The vulnerabilities we care about are the ones in the Shadow Brokers data dump. We care about them because those are the ones whose existence leaves us all vulnerable.

Because everyone uses the same software, hardware, and networking protocols, there is no way to simultaneously secure our systems while attacking their systems, whoever “they” are. Either everyone is more secure, or everyone is more vulnerable.

    Pretty much uniformly, security experts believe we ought to disclose and fix vulnerabilities. And the NSA continues to say things that appear to reflect that view, too. Recently, the NSA told everyone that it doesn’t rely on zero days — very much, anyway.

    Earlier this year at a security conference, Rob Joyce, the head of the NSA’s Tailored Access Operations (TAO) organization — basically the country’s chief hacker — gave a rare public talk, in which he said that credential stealing is a more fruitful method of attack than are zero days: “A lot of people think that nation states are running their operations on zero days, but it’s not that common. For big corporate networks, persistence and focus will get you in without a zero day; there are so many more vectors that are easier, less risky, and more productive.”

    The distinction he’s referring to is the one between exploiting a technical hole in software and waiting for a human being to, say, get sloppy with a password.

    A phrase you often hear in any discussion of the Vulnerabilities Equities Process is NOBUS, which stands for “nobody but us.” Basically, when the NSA finds a vulnerability, it tries to figure out if it is unique in its ability to find it, or whether someone else could find it, too. If it believes no one else will find the problem, it may decline to make it public. It’s an evaluation prone to both hubris and optimism, and many security experts have cast doubt on the very notion that there is some unique American ability to conduct vulnerability research.

    The vulnerabilities in the Shadow Brokers data dump are definitely not NOBUS-level. They are run-of-the-mill vulnerabilities that anyone — another government, cybercriminals, amateur hackers — could discover, as evidenced by the fact that many of them were discovered between 2013, when the data was stolen, and this summer, when it was published. They are vulnerabilities in common systems used by people and companies all over the world.

    So what are all these vulnerabilities doing in a secret stash of NSA code that was stolen in 2013? Assuming the Russians were the ones who did the stealing, how many US companies did they hack with these vulnerabilities? This is what the Vulnerabilities Equities Process is designed to prevent, and it has clearly failed.

If there are any vulnerabilities that — according to the standards established by the White House and the NSA — should have been disclosed and fixed, it’s these. That they have not been during the three-plus years that the NSA knew about and exploited them — despite Joyce’s insistence that they’re not very important — demonstrates that the Vulnerabilities Equities Process is badly broken.

We need to fix this. This is exactly the sort of thing a congressional investigation is for. This whole process needs a lot more transparency, oversight, and accountability. It needs guiding principles that prioritize security over surveillance. A good place to start is with the recommendations by Ari Schwartz and Rob Knake in their report: these include a clearly defined and more public process, more oversight by Congress and other independent bodies, and a strong bias toward fixing vulnerabilities instead of exploiting them.

    And as long as I’m dreaming, we really need to separate our nation’s intelligence-gathering mission from our computer security mission: we should break up the NSA. The agency’s mission should be limited to nation state espionage. Individual investigation should be part of the FBI, cyberwar capabilities should be within US Cyber Command, and critical infrastructure defense should be part of DHS’s mission.

    I doubt we’re going to see any congressional investigations this year, but we’re going to have to figure this out eventually. In my 2014 book Data and Goliath, I write that “no matter what cybercriminals do, no matter what other countries do, we in the US need to err on the side of security by fixing almost all the vulnerabilities we find…” Our nation’s cybersecurity is just too important to let the NSA sacrifice it in order to gain a fleeting advantage over a foreign adversary.

    This essay previously appeared on Vox.com.

    EDITED TO ADD (8/27): The vulnerabilities were seen in the wild within 24 hours, demonstrating how important they were to disclose and patch.

    James Bamford thinks this is the work of an insider. I disagree, but he’s right that the TAO catalog was not a Snowden document.

    People are looking at the quality of the code. It’s not that good.

    A bit more on firearms in the US

    Post Syndicated from Michal Zalewski original http://lcamtuf.blogspot.com/2015/06/a-bit-more-on-firearms-in-us.html

    This is the fifth article in a short series about Poland, Europe, and the United States. To explore the entire series, start here.

    Perhaps not surprisingly, my previous blog post sparked several interesting discussions with my Polish friends who took a more decisive view of the social costs of firearm ownership, or who saw the Second Amendment as a barbaric construct with no place in today’s world. Their opinions reminded me of my own attitude some ten years ago; in this brief follow-up, I wanted to share several data points that convinced me to take a more measured stance.

Let’s start with the basics: most estimates place the number of guns in the United States at 300 to 350 million – that’s roughly one firearm for every single resident. In Gallup polls, some 40-50% of all households report having a gun, frequently more than one. The demographics of firearm ownership are more uniform than stereotypes may imply; there is some variance across regions, political affiliations, and genders – but for the most part, it tends to fall within fairly narrow bands.

An overwhelming majority of gun owners cite personal safety as the leading motive for purchasing a firearm; hunting and recreation activities come a strong second. The defensive aspect of firearm ownership is of special note, because it can potentially provide a very compelling argument for protecting the right to bear arms even if it’s a socially unwelcome practice, or if it comes at an elevated cost to the nation as a whole.

    The self-defense argument is sometimes dismissed as pure fantasy, with many eminent pundits citing one questionable statistic to support this view: the fairly low number of justifiable homicides in the country. Despite its strong appeal to ideologues, the metric does not stand up to scrutiny: all available data implies that most encounters where a gun is pulled by a would-be victim will not end with the assailant getting killed; it’s overwhelmingly more likely that the bad guy would hastily retreat, be detained at gunpoint, or suffer non-fatal injuries. In fact, even in the unlikely case that a firearm is actually discharged with the intent to kill or maim, somewhere around 70-80% of victims survive.

In reality, we have no single, elegant, and reliable source of data about the frequency with which firearms are used to deter threats; the results of scientific polls probably offer the most comprehensive view, but they are open to interpretation and vary significantly depending on sampling methods and questions asked. That said, a recent meta-analysis from the Centers for Disease Control and Prevention provided some general bounds:


    “Defensive use of guns by crime victims is a common occurrence, although the exact number remains disputed (Cook and Ludwig, 1996; Kleck, 2001a). Almost all national survey estimates indicate that defensive gun uses by victims are at least as common as offensive uses by criminals, with estimates of annual uses ranging from about 500,000 to more than 3 million.”

An earlier but probably similarly unbiased estimate from the US Department of Justice puts the number at approximately 1.5 million uses a year.

    The CDC study also goes on to say:


    “A different issue is whether defensive uses of guns, however numerous or rare they may be, are effective in preventing injury to the gun-wielding crime victim. Studies that directly assessed the effect of actual defensive uses of guns (i.e., incidents in which a gun was “used” by the crime victim in the sense of attacking or threatening an offender) have found consistently lower injury rates among gun-using crime victims compared with victims who used other self-protective strategies.”

    An argument can be made that the availability of firearms translates to higher rates of violent crime, thus elevating the likelihood of encounters where a defensive firearm would be useful – feeding into an endless cycle of escalating violence. That said, such an effect does not seem to be particularly evident. For example, the United States comes out reasonably well in statistics related to assault, rape, and robbery; on these fronts, America looks less violent than the UK or a bunch of other OECD countries with low firearm ownership rates.

But there is an exception: one area where the United States clearly falls behind other highly developed nations is homicides. The per-capita figures are almost three times as high as in much of the European Union. And indeed, the bulk of intentional homicides – some 11 thousand deaths a year – traces back to firearms.

We tend to instinctively draw a connection to guns, but the origins of this tragic situation may be more elusive than they appear. For one, non-gun-related homicides happen in the US at a higher rate than in many other countries, too; Americans just seem to be generally more keen on killing each other than people in places such as Europe, Australia, or Canada. In addition, no convincing pattern emerges when comparing overall homicide rates across states with permissive and restrictive gun ownership laws. Some of the lowest per-capita homicide figures can be found in extremely gun-friendly states such as Idaho, Utah, or Vermont; whereas highly-regulated Washington D.C., Maryland, Illinois, and California all rank pretty high. There is, however, fairly strong correlation between gun and non-gun homicide rates across the country – suggesting that common factors such as population density, urban poverty, and drug-related gang activities play a far more significant role in violent crime than the ease of legally acquiring a firearm. It’s tragic but worth noting that a strikingly disproportionate percentage of homicides involves both victims and perpetrators who belong to socially disadvantaged and impoverished minorities. Another striking pattern is that up to about half of all gun murders are related to or committed under the influence of illicit drugs.

Now, international comparisons show general correlation between gun ownership and some types of crime, but it’s difficult to draw solid conclusions from that: there are countless other ways to explain why crime rates may be low in the wealthy European states, and high in Venezuela, Mexico, Honduras, or South Africa; compensating for these factors is theoretically possible, but requires making far-fetched assumptions that are hopelessly vulnerable to researcher bias. Comparing European countries is easier, but yields inconclusive results: gun ownership in Poland is almost twenty times lower than in neighboring Germany and ten times lower than in the Czech Republic – but you certainly wouldn’t be able to tell that from national crime stats.

    When it comes to gun control, one CDC study on the topic concluded with:


    “The Task Force found insufficient evidence to determine the effectiveness of any of the firearms laws or combinations of laws reviewed on violent outcomes.”

This does not imply that such approaches are necessarily ineffective; for example, it seems pretty reasonable to assume that well-designed background checks or modest waiting periods do save lives. Similarly, safe storage requirements would likely prevent dozens of child deaths every year, at the cost of rendering firearms less available for home defense. But for the hundreds of sometimes far-fetched gun control proposals introduced every year at the federal and state level, emotions often take the place of real data, poisoning the debate around gun laws and ultimately bringing little or no public benefit. The heated assault weapon debate is one such red herring: although modern semi-automatic rifles look sinister, they are far more common in movies than on the streets; in reality, all kinds of rifles account only for somewhere around 4% of firearm homicides, and AR-15s are only a tiny fraction of that – likely claiming about as many lives as hammers, ladders, or swimming pools. The efforts to close the “gun show loophole” seem fairly sensible on the surface, too, but are of similarly uncertain merit; instead of gun shows, criminals depend on friends, family, and on more than 200,000 guns that are stolen from their rightful owners every year. When breaking into a random home yields a 40-50% chance of scoring a firearm, it’s not hard to see why.

Another oddball example of simplistic legislative zeal is the attempt to mandate costly gun owner liability insurance, based on drawing an impassioned but flawed parallel between firearms and cars; what undermines this argument is that car accidents are commonplace, while gun handling mishaps – especially ones that injure others – are rare. We also have proposals to institute $100 ammunition purchase permits, to prohibit ammo sales over the Internet, or to impose a hefty per-bullet tax. Many critics feel that such laws seem to be geared not toward addressing any specific dangers, but toward making firearms more expensive and burdensome to own – slowly eroding the constitutional rights of the less wealthy folks. They also see hypocrisy in the common practice of making retired police officers and many high-ranking government officials exempt from said laws.

Regardless of the individual merits of the regulations, it’s certainly true that with countless pieces of sometimes obtuse and poorly-written federal, state, and municipal statutes introduced every year, it’s increasingly easy for people to unintentionally run afoul of the rules. In California, the law as written today implies that any legal permanent resident in good standing can own a gun, but that only US citizens can transport it by car. Given that Californians are also generally barred from carrying firearms on foot in many populated areas, non-citizen residents are seemingly expected to teleport between the gun store, their home, and the shooting range. With many laws hastily drafted in the days after mass shootings and other tragedies, such gems are commonplace. The federal Gun-Free School Zones Act imposes special restrictions on gun ownership within 1,000 feet of a school and slaps harsh penalties for as little as carrying a firearm in an unlocked container from one’s home to a car parked in the driveway. In many urban areas, a lot of people either live within such a school zone or can’t conceivably avoid it when going about their business; GFSZA violations are almost certainly common and are policed only selectively.

Meanwhile, with sharp declines in crime continuing for the past 20 years, public opinion is increasingly in favor of broad, reasonably policed gun ownership; for example, more than 70% of respondents to one Gallup poll are against the restrictive handgun bans of the sort attempted in Chicago, San Francisco, or Washington D.C.; and in a recent Rasmussen poll, only 22% say that they would feel safer in a neighborhood where people are not allowed to keep guns. In fact, responding to the media’s undue obsession with random acts of violence against law-abiding citizens, and worried about the historically very anti-gun views of the sitting president, Americans are buying a lot more firearms than ever before. Even the National Rifle Association – a staunchly conservative organization vilified by gun control advocates and mainstream pundits – enjoys a pretty reasonable approval rating across many demographics: 58% overall and 78% in households with a gun.

    And here’s the kicker: despite its reputation for being a political arm of firearm manufacturers, the NRA is funded largely through individual memberships, small-scale donations, and purchase round-ups; organizational donations add up to about 5% of their budget – and if you throw in advertising income, the total still stays under 15%. That makes it quite unlike most of the other large-scale lobbying groups that Democrats aren’t as keen on naming-and-shaming on the campaign trail. The NRA’s financial muscle is also frequently overstated; it doesn’t even make it onto the list of top 100 lobbyists in Washington – and gun control advocacy groups, backed by activist billionaires such as Michael Bloomberg, now frequently outspend the pro-gun crowd. Of course, it would be better for the association’s socially conservative and unnecessarily polarizing rhetoric – sometimes veering onto the topics of abortion or video games – to be offset by the voice of other, more liberal groups. But ironically, organizations such as American Civil Liberties Union – well-known for fearlessly defending controversial speech – prefer to avoid the Second Amendment; they do so not because the latter concept has lesser constitutional standing, but because supporting it would not sit well with their own, progressive support base.

America’s attitude toward guns is a choice, not a necessity. It is also true that gun violence is a devastating problem; and that the emotional horror and lasting social impact of incidents such as school shootings can’t possibly be captured in any cold, dry statistic alone. But there is also nuance and reason to the gun control debate that can be hard to see for newcomers from more firearm-averse parts of the world.

    For the next article in the series, click here. Alternatively, if you prefer to keep reading about firearms, go here for an overview of the gun control debate in the US.