Tag Archives: DMCA

The DMCA and its Chilling Effects on Research

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2018/04/the_dmca_and_it.html

The Center for Democracy and Technology has a good summary of the current state of the DMCA’s chilling effects on security research.

To underline the nature of chilling effects on hacking and security research, CDT has worked to describe how tinkerers, hackers, and security researchers of all types both contribute to a baseline level of security in our digital environment and, in turn, are shaped themselves by this environment, most notably when things they do upset others and result in threats, potential lawsuits, and prosecution. We’ve published two reports (sponsored by the Hewlett Foundation and MacArthur Foundation) about needed reforms to the law and the myriad of ways that security research directly improves people’s lives. To get a more complete picture, we wanted to talk to security researchers themselves and gauge the forces that shape their work; essentially, we wanted to “take the pulse” of the security research community.

Today, we are releasing a third report in service of this effort: “Taking the Pulse of Hacking: A Risk Basis for Security Research.” We report findings after having interviewed a set of 20 security researchers and hackers — half academic and half non-academic — about what considerations they take into account when starting new projects or engaging in new work, as well as to what extent they or their colleagues have faced threats in the past that chilled their work. The results in our report show that a wide variety of constraints shape the work they do, from technical constraints to ethical boundaries to legal concerns, including the DMCA and especially the CFAA.

Note: I am a signatory on the letter supporting unrestricted security research.

Security and the Internet of Things

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2017/02/security_and_th.html

Last year, on October 21, your digital video recorder -- or at least a DVR like yours -- knocked Twitter off the internet. Someone used your DVR, along with millions of insecure webcams, routers, and other connected devices, to launch an attack that started a chain reaction, resulting in Twitter, Reddit, Netflix, and many other sites going off the internet. You probably didn’t realize that your DVR had that kind of power. But it does.

All computers are hackable. This has as much to do with the computer market as it does with the technologies. We prefer our software full of features and inexpensive, at the expense of security and reliability. That your computer can affect the security of Twitter is a market failure. The industry is filled with market failures that, until now, have been largely ignorable. As computers continue to permeate our homes, cars, and businesses, these market failures will no longer be tolerable. Our only solution will be regulation, and that regulation will be foisted on us by a government desperate to “do something” in the face of disaster.

In this article I want to outline the problems, both technical and political, and point to some regulatory solutions. Regulation might be a dirty word in today’s political climate, but security is the exception to our small-government bias. And as the threats posed by computers become greater and more catastrophic, regulation will be inevitable. So now’s the time to start thinking about it.

We also need to reverse the trend to connect everything to the internet. And if we risk harm and even death, we need to think twice about what we connect and what we deliberately leave uncomputerized.

If we get this wrong, the computer industry will look like the pharmaceutical industry, or the aircraft industry. But if we get this right, we can maintain the innovative environment of the internet that has given us so much.

**********

We no longer have things with computers embedded in them. We have computers with things attached to them.

Your modern refrigerator is a computer that keeps things cold. Your oven, similarly, is a computer that makes things hot. An ATM is a computer with money inside. Your car is no longer a mechanical device with some computers inside; it’s a computer with four wheels and an engine. Actually, it’s a distributed system of over 100 computers with four wheels and an engine. And, of course, your phones became full-power general-purpose computers in 2007, when the iPhone was introduced.

We wear computers: fitness trackers and computer-enabled medical devices -- and, of course, we carry our smartphones everywhere. Our homes have smart thermostats, smart appliances, smart door locks, even smart light bulbs. At work, many of those same smart devices are networked together with CCTV cameras, sensors that detect customer movements, and everything else. Cities are starting to embed smart sensors in roads, streetlights, and sidewalk squares, as well as smart energy grids and smart transportation networks. A nuclear power plant is really just a computer that produces electricity, and -- like everything else we’ve just listed -- it’s on the internet.

The internet is no longer a web that we connect to. Instead, it’s a computerized, networked, and interconnected world that we live in. This is the future, and what we’re calling the Internet of Things.

Broadly speaking, the Internet of Things has three parts. There are the sensors that collect data about us and our environment: smart thermostats, street and highway sensors, and those ubiquitous smartphones with their motion sensors and GPS location receivers. Then there are the “smarts” that figure out what the data means and what to do about it. This includes all the computer processors on these devices and -- increasingly -- in the cloud, as well as the memory that stores all of this information. And finally, there are the actuators that affect our environment. The point of a smart thermostat isn’t to record the temperature; it’s to control the furnace and the air conditioner. Driverless cars collect data about the road and the environment to steer themselves safely to their destinations.
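
To make that three-part split concrete, here is a minimal, hypothetical sketch of a smart-thermostat control loop in Python: a sensor read, a decision step, and an actuator call. The device interfaces and thresholds are invented for illustration; they do not describe any particular product.

    # Minimal sketch of the sense/think/act loop described above.
    # All device interfaces here are hypothetical placeholders.
    import random
    import time

    TARGET_C = 20.0      # desired room temperature
    HYSTERESIS_C = 0.5   # avoid rapid furnace cycling

    def read_temperature() -> float:
        """Sensor: a real thermostat would query a hardware sensor."""
        return 18.0 + random.random() * 4.0  # simulated reading

    def set_furnace(on: bool) -> None:
        """Actuator: a real thermostat would switch a relay."""
        print(f"furnace {'ON' if on else 'OFF'}")

    def control_step() -> None:
        """The 'smarts': decide what to do with the sensed data."""
        temp = read_temperature()
        if temp < TARGET_C - HYSTERESIS_C:
            set_furnace(True)
        elif temp > TARGET_C + HYSTERESIS_C:
            set_furnace(False)

    if __name__ == "__main__":
        for _ in range(5):  # a few iterations instead of an endless loop
            control_step()
            time.sleep(0.1)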

You can think of the sensors as the eyes and ears of the internet. You can think of the actuators as the hands and feet of the internet. And you can think of the stuff in the middle as the brain. We are building an internet that senses, thinks, and acts.

This is the classic definition of a robot. We’re building a world-size robot, and we don’t even realize it.

To be sure, it’s not a robot in the classical sense. We think of robots as discrete autonomous entities, with sensors, brain, and actuators all together in a metal shell. The world-size robot is distributed. It doesn’t have a singular body, and parts of it are controlled in different ways by different people. It doesn’t have a central brain, and it has nothing even remotely resembling a consciousness. It doesn’t have a single goal or focus. It’s not even something we deliberately designed. It’s something we have inadvertently built out of the everyday objects we live with and take for granted. It is the extension of our computers and networks into the real world.

This world-size robot is actually more than the Internet of Things. It’s a combination of several decades-old computing trends: mobile computing, cloud computing, always-on computing, huge databases of personal information, the Internet of Things -- or, more precisely, cyber-physical systems -- autonomy, and artificial intelligence. And while it’s still not very smart, it’ll get smarter. It’ll get more powerful and more capable through all the interconnections we’re building.

It’ll also get much more dangerous.

**********

Computer security has been around for almost as long as computers have been. And while it’s true that security wasn’t part of the design of the original internet, it’s something we have been trying to achieve since its beginning.

I have been working in computer security for over 30 years: first in cryptography, then more generally in computer and network security, and now in general security technology. I have watched computers become ubiquitous, and have seen firsthand the problems -- and solutions -- of securing these complex machines and systems. I’m telling you all this because what used to be a specialized area of expertise now affects everything. Computer security is now everything security. There’s one critical difference, though: The threats have become greater.

Traditionally, computer security is divided into three categories: confidentiality, integrity, and availability. For the most part, our security concerns have centered on confidentiality. We’re concerned about our data and who has access to it -- the world of privacy and surveillance, of data theft and misuse.

But threats come in many forms. Availability threats: computer viruses that delete our data, or ransomware that encrypts our data and demands payment for the unlock key. Integrity threats: hackers who can manipulate data entries can do things ranging from changing grades in a class to changing the amount of money in bank accounts. Some of these threats are pretty bad. Hospitals have paid tens of thousands of dollars to criminals whose ransomware encrypted critical medical files. JPMorgan Chase spends half a billion dollars a year on cybersecurity.

Today, the integrity and availability threats are much worse than the confidentiality threats. Once computers start affecting the world in a direct and physical manner, there are real risks to life and property. There is a fundamental difference between crashing your computer and losing your spreadsheet data, and crashing your pacemaker and losing your life. This isn’t hyperbole; recently researchers found serious security vulnerabilities in St. Jude Medical’s implantable heart devices. Give the internet hands and feet, and it will have the ability to punch and kick.

Take a concrete example: modern cars, those computers on wheels. The steering wheel no longer turns the axles, nor does the accelerator pedal change the speed. Every move you make in a car is processed by a computer, which does the actual controlling. A central computer controls the dashboard. There’s another in the radio. The engine has 20 or so computers. These are all networked, and increasingly autonomous.

Now, let’s start listing the security threats. We don’t want car navigation systems to be used for mass surveillance, or the microphone for mass eavesdropping. We might want it to be used to determine a car’s location in the event of a 911 call, and possibly to collect information about highway congestion. We don’t want people to hack their own cars to bypass emissions-control limitations. We don’t want manufacturers or dealers to be able to do that, either, as Volkswagen did for years. We can imagine wanting to give police the ability to remotely and safely disable a moving car; that would make high-speed chases a thing of the past. But we definitely don’t want hackers to be able to do that. We definitely don’t want them disabling the brakes in every car without warning, at speed. As we make the transition from driver-controlled cars to cars with various driver-assist capabilities to fully driverless cars, we don’t want any of those critical components subverted. We don’t want someone to be able to accidentally crash your car, let alone do it on purpose. And equally, we don’t want them to be able to manipulate the navigation software to change your route, or the door-lock controls to prevent you from opening the door. I could go on.

That’s a lot of different security requirements, and the effects of getting them wrong range from illegal surveillance to extortion by ransomware to mass death.

**********

Our computers and smartphones are as secure as they are because companies like Microsoft, Apple, and Google spend a lot of time testing their code before it’s released, and quickly patch vulnerabilities when they’re discovered. Those companies can support large, dedicated teams because those companies make a huge amount of money, either directly or indirectly, from their software -- and, in part, compete on its security. Unfortunately, this isn’t true of embedded systems like digital video recorders or home routers. Those systems are sold at a much lower margin, and are often built by offshore third parties. The companies involved simply don’t have the expertise to make them secure.

At a recent hacker conference, a security researcher analyzed 30 home routers and was able to break into half of them, including some of the most popular and common brands. The denial-of-service attacks that forced popular websites like Reddit and Twitter off the internet last October were enabled by vulnerabilities in devices like webcams and digital video recorders. In August, two security researchers demonstrated a ransomware attack on a smart thermostat.

Even worse, most of these devices don’t have any way to be patched. Companies like Microsoft and Apple continuously deliver security patches to your computers. Some home routers are technically patchable, but in a complicated way that only an expert would attempt. And the only way for you to update the firmware in your hackable DVR is to throw it away and buy a new one.

The market can’t fix this because neither the buyer nor the seller cares. The owners of the webcams and DVRs used in the denial-of-service attacks don’t care. Their devices were cheap to buy, they still work, and they don’t know any of the victims of the attacks. The sellers of those devices don’t care: They’re now selling newer and better models, and the original buyers only cared about price and features. There is no market solution, because the insecurity is what economists call an externality: It’s an effect of the purchasing decision that affects other people. Think of it kind of like invisible pollution.

**********

Security is an arms race between attacker and defender. Technology perturbs that arms race by changing the balance between attacker and defender. Understanding how this arms race has unfolded on the internet is essential to understanding why the world-size robot we’re building is so insecure, and how we might secure it. To that end, I have five truisms, born from what we’ve already learned about computer and internet security. They will soon affect the security arms race everywhere.

Truism No. 1: On the internet, attack is easier than defense.

There are many reasons for this, but the most important is the complexity of these systems. More complexity means more people involved, more parts, more interactions, more mistakes in the design and development process, more of everything where hidden insecurities can be found. Computer-security experts like to speak about the attack surface of a system: all the possible points an attacker might target and that must be secured. A complex system means a large attack surface. The defender has to secure the entire attack surface. The attacker just has to find one vulnerability -- one unsecured avenue for attack -- and gets to choose how and when to attack. It’s simply not a fair battle.

There are other, more general, reasons why attack is easier than defense. Attackers have a natural agility that defenders often lack. They don’t have to worry about laws, and often not about morals or ethics. They don’t have a bureaucracy to contend with, and can more quickly make use of technical innovations. Attackers also have a first-mover advantage. As a society, we’re generally terrible at proactive security; we rarely take preventive security measures until an attack actually happens. So more advantages go to the attacker.

Truism No. 2: Most software is poorly written and insecure.

If complexity isn’t enough, we compound the problem by producing lousy software. Well-written software, like the kind found in airplane avionics, is both expensive and time-consuming to produce. We don’t want that. For the most part, poorly written software has been good enough. We’d all rather live with buggy software than pay the prices good software would require. We don’t mind if our games crash regularly, or our business applications act weird once in a while. Because software has been largely benign, it hasn’t mattered. This has permeated the industry at all levels. At universities, we don’t teach how to code well. Companies don’t reward quality code in the same way they reward fast and cheap. And we consumers don’t demand it.

But poorly written software is riddled with bugs, sometimes as many as one per 1,000 lines of code. Some of them are inherent in the complexity of the software, but most are programming mistakes. Not all bugs are vulnerabilities, but some are.
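
A back-of-the-envelope calculation shows what that defect rate implies at scale. The one-bug-per-1,000-lines figure comes from the paragraph above; the 100-million-line codebase (a size often quoted for a modern car) and the fraction of bugs that are exploitable are purely illustrative assumptions.

    # Rough estimate: defect density times codebase size.
    bugs_per_kloc = 1            # "as many as one per 1,000 lines of code"
    lines_of_code = 100_000_000  # illustrative: often-quoted size of a modern car's software
    exploitable_fraction = 0.01  # assumption: 1% of bugs are security-relevant

    bugs = lines_of_code / 1_000 * bugs_per_kloc
    vulns = bugs * exploitable_fraction
    print(f"~{bugs:,.0f} bugs, of which perhaps ~{vulns:,.0f} are exploitable")
    # -> ~100,000 bugs, of which perhaps ~1,000 are exploitable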

Truism No. 3: Connecting everything to each other via the internet will expose new vulnerabilities.

The more we network things together, the more vulnerabilities on one thing will affect other things. On October 21, vulnerabilities in a wide variety of embedded devices were all harnessed together to create what hackers call a botnet. This botnet was used to launch a distributed denial-of-service attack against a company called Dyn. Dyn provided a critical internet function for many major internet sites. So when Dyn went down, so did all those popular websites.
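
The dependency is easy to see from the client side: a browser cannot reach a site whose name it cannot resolve, even if that site’s own servers are healthy. A minimal sketch (the hostnames are just examples):

    # If the authoritative DNS provider (Dyn, in the October 21 attack) cannot
    # answer queries, name resolution fails and the site becomes unreachable
    # even though its servers are running fine.
    import socket

    for host in ("twitter.com", "reddit.com"):  # example hostnames
        try:
            addr = socket.gethostbyname(host)
            print(f"{host} -> {addr}")
        except socket.gaierror as err:
            print(f"{host}: resolution failed ({err}) -- site effectively unreachable")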

These chains of vulnerabilities are everywhere. In 2012, journalist Mat Honan suffered a massive personal hack because of one of them. A vulnerability in his Amazon account allowed hackers to get into his Apple account, which allowed them to get into his Gmail account. And in 2013, the Target Corporation was hacked by someone stealing credentials from its HVAC contractor.

Vulnerabilities like these are particularly hard to fix, because no one system might actually be at fault. It might be the insecure interaction of two individually secure systems.

Truism No. 4: Everybody has to stop the best attackers in the world.

One of the most powerful properties of the internet is that it allows things to scale. This is true for our ability to access data or control systems or do any of the cool things we use the internet for, but it’s also true for attacks. In general, fewer attackers can do more damage because of better technology. It’s not just that these modern attackers are more efficient, it’s that the internet allows attacks to scale to a degree impossible without computers and networks.

This is fundamentally different from what we’re used to. When securing my home against burglars, I am only worried about the burglars who live close enough to my home to consider robbing me. The internet is different. When I think about the security of my network, I have to be concerned about the best attacker possible, because he’s the one who’s going to create the attack tool that everyone else will use. The attacker that discovered the vulnerability used to attack Dyn released the code to the world, and within a week there were a dozen attack tools using it.

Truism No. 5: Laws inhibit security research.

The Digital Millennium Copyright Act is a terrible law that fails at its purpose of preventing widespread piracy of movies and music. To make matters worse, it contains a provision that has critical side effects. According to the law, it is a crime to bypass security mechanisms that protect copyrighted work, even if that bypassing would otherwise be legal. Since all software can be copyrighted, it is arguably illegal to do security research on these devices and to publish the result.

Although the exact contours of the law are arguable, many companies are using this provision of the DMCA to threaten researchers who expose vulnerabilities in their embedded systems. This instills fear in researchers, and has a chilling effect on research, which means two things: (1) Vendors of these devices are more likely to leave them insecure, because no one will notice and they won’t be penalized in the market, and (2) security engineers don’t learn how to do security better.

Unfortunately, companies generally like the DMCA. The provisions against reverse-engineering spare them the embarrassment of having their shoddy security exposed. It also allows them to build proprietary systems that lock out competition. (This is an important one. Right now, your toaster cannot force you to only buy a particular brand of bread. But because of this law and an embedded computer, your Keurig coffee maker can force you to buy a particular brand of coffee.)

**********

In general, there are two basic paradigms of security. We can either try to secure something well the first time, or we can make our security agile. The first paradigm comes from the world of dangerous things: from planes, medical devices, buildings. It’s the paradigm that gives us secure design and secure engineering, security testing and certifications, professional licensing, detailed preplanning and complex government approvals, and long times-to-market. It’s security for a world where getting it right is paramount because getting it wrong means people dying.

The second paradigm comes from the fast-moving and heretofore largely benign world of software. In this paradigm, we have rapid prototyping, on-the-fly updates, and continual improvement. In this paradigm, new vulnerabilities are discovered all the time and security disasters regularly happen. Here, we stress survivability, recoverability, mitigation, adaptability, and muddling through. This is security for a world where getting it wrong is okay, as long as you can respond fast enough.

These two worlds are colliding. They’re colliding in our cars -- literally -- in our medical devices, our building control systems, our traffic control systems, and our voting machines. And although these paradigms are wildly different and largely incompatible, we need to figure out how to make them work together.

So far, we haven’t done very well. We still largely rely on the first paradigm for the dangerous computers in cars, airplanes, and medical devices. As a result, there are medical systems that can’t have security patches installed because that would invalidate their government approval. In 2015, Chrysler recalled 1.4 million cars to fix a software vulnerability. In September 2016, Tesla remotely sent a security patch to all of its Model S cars overnight. Tesla sure sounds like it’s doing things right, but what vulnerabilities does this remote patch feature open up?

**********

Until now we’ve largely left computer security to the market. Because the computer and network products we buy and use are so lousy, an enormous after-market industry in computer security has emerged. Governments, companies, and people buy the security they think they need to secure themselves. We’ve muddled through well enough, but the market failures inherent in trying to secure this world-size robot will soon become too big to ignore.

Markets alone can’t solve our security problems. Markets are motivated by profit and short-term goals at the expense of society. They can’t solve collective-action problems. They won’t be able to deal with economic externalities, like the vulnerabilities in DVRs that resulted in Twitter going offline. And we need a counterbalancing force to corporate power.

This all points to policy. While the details of any computer-security system are technical, getting the technologies broadly deployed is a problem that spans law, economics, psychology, and sociology. And getting the policy right is just as important as getting the technology right because, for internet security to work, law and technology have to work together. This is probably the most important lesson of Edward Snowden’s NSA disclosures. We already knew that technology can subvert law. Snowden demonstrated that law can also subvert technology. Both fail unless each works. It’s not enough to just let technology do its thing.

Any policy changes to secure this world-size robot will mean significant government regulation. I know it’s a sullied concept in today’s world, but I don’t see any other possible solution. It’s going to be especially difficult on the internet, where its permissionless nature is one of the best things about it and the underpinning of its most world-changing innovations. But I don’t see how that can continue when the internet can affect the world in a direct and physical manner.

**********

I have a proposal: a new government regulatory agency. Before dismissing it out of hand, please hear me out.

We have a practical problem when it comes to internet regulation. There’s no government structure to tackle this at a systemic level. Instead, there’s a fundamental mismatch between the way government works and the way this technology works that makes dealing with this problem impossible at the moment.

Government operates in silos. In the U.S., the FAA regulates aircraft. The NHTSA regulates cars. The FDA regulates medical devices. The FCC regulates communications devices. The FTC protects consumers in the face of “unfair” or “deceptive” trade practices. Even worse, who regulates data can depend on how it is used. If data is used to influence a voter, it’s the Federal Election Commission’s jurisdiction. If that same data is used to influence a consumer, it’s the FTC’s. Use those same technologies in a school, and the Department of Education is now in charge. Robotics will have its own set of problems, and no one is sure how that is going to be regulated. Each agency has a different approach and different rules. They have no expertise in these new issues, and they are not quick to expand their authority for all sorts of reasons.

Compare that with the internet. The internet is a freewheeling system of integrated objects and networks. It grows horizontally, demolishing old technological barriers so that people and systems that never previously communicated now can. Already, apps on a smartphone can log health information, control your energy use, and communicate with your car. That’s a set of functions that crosses jurisdictions of at least four different government agencies, and it’s only going to get worse.

Our world-size robot needs to be viewed as a single entity with millions of components interacting with each other. Any solutions here need to be holistic. They need to work everywhere, for everything. Whether we’re talking about cars, drones, or phones, they’re all computers.

This has lots of precedent. Many new technologies have led to the formation of new government regulatory agencies. Trains did, cars did, airplanes did. Radio led to the formation of the Federal Radio Commission, which became the FCC. Nuclear power led to the formation of the Atomic Energy Commission, which eventually became the Department of Energy. The reasons were the same in every case. New technologies need new expertise because they bring with them new challenges. Governments need a single agency to house that new expertise, because its applications cut across several preexisting agencies. It’s less that the new agency needs to regulate -- although that’s often a big part of it -- and more that governments recognize the importance of the new technologies.

The internet has famously eschewed formal regulation, instead adopting a multi-stakeholder model of academics, businesses, governments, and other interested parties. My hope is that we can keep the best of this approach in any regulatory agency, looking to models like the new U.S. Digital Service or the 18F office inside the General Services Administration. Both of those organizations are dedicated to providing digital government services, both have collected significant expertise by bringing people in from outside of government, and both have learned how to work closely with existing agencies. Any internet regulatory agency will similarly need to engage in a high level of collaborative regulation -- both a challenge and an opportunity.

I don’t think any of us can predict the totality of the regulations we need to ensure the safety of this world, but here are a few. We need government to ensure companies follow good security practices: testing, patching, secure defaults -- and we need to be able to hold companies liable when they fail to do these things. We need government to mandate strong personal data protections, and limitations on data collection and use. We need to ensure that responsible security research is legal and well-funded. We need to enforce transparency in design, some sort of code escrow in case a company goes out of business, and interoperability between devices of different manufacturers, to counterbalance the monopolistic effects of interconnected technologies. Individuals need the right to take their data with them. And internet-enabled devices should retain some minimal functionality if disconnected from the internet.

I’m not the only one talking about this. I’ve seen proposals for a National Institutes of Health analog for cybersecurity. University of Washington law professor Ryan Calo has proposed a Federal Robotics Commission. I think it needs to be broader: maybe a Department of Technology Policy.

Of course there will be problems. There’s a lack of expertise in these issues inside government. There’s a lack of willingness in government to do the hard regulatory work. Industry is worried about any new bureaucracy: both that it will stifle innovation by regulating too much and that it will be captured by industry and regulate too little. A domestic regulatory agency will have to deal with the fundamentally international nature of the problem.

But government is the entity we use to solve problems like this. It has the scope, scale, and balance of interests to address them. It’s the institution we’ve built to adjudicate competing social interests and internalize market externalities. Left to its own devices, the market simply can’t. That we’re currently in the middle of an era of low government trust, where many of us can’t imagine government doing anything positive in an area like this, is to our detriment.

Here’s the thing: Governments will get involved, regardless. The risks are too great, and the stakes are too high. Government already regulates dangerous physical systems like cars and medical devices. And nothing motivates the U.S. government like fear. Remember 2001? A nominally small-government Republican president created the Office of Homeland Security 11 days after the terrorist attacks: a rushed and ill-thought-out decision that we’ve been trying to fix for over a decade. A fatal disaster will similarly spur our government into action, and it’s unlikely to be well-considered and thoughtful action. Our choice isn’t between government involvement and no government involvement. Our choice is between smarter government involvement and stupider government involvement. We have to start thinking about this now. Regulations are necessary, important, and complex; and they’re coming. We can’t afford to ignore these issues until it’s too late.

We also need to start disconnecting systems. If we cannot secure complex systems to the level required by their real-world capabilities, then we must not build a world where everything is computerized and interconnected.

There are other models. We can enable local communications only. We can set limits on collected and stored data. We can deliberately design systems that don’t interoperate with each other. We can deliberately fetter devices, reversing the current trend of turning everything into a general-purpose computer. And, most important, we can move toward less centralization and more distributed systems, which is how the internet was first envisioned.

This might be a heresy in today’s race to network everything, but large, centralized systems are not inevitable. The technical elites are pushing us in that direction, but they really don’t have any good supporting arguments other than the profits of their ever-growing multinational corporations.

But this will change. It will change not only because of security concerns, it will also change because of political concerns. We’re starting to chafe under the worldview of everything producing data about us and what we do, and that data being available to both governments and corporations. Surveillance capitalism won’t be the business model of the internet forever. We need to change the fabric of the internet so that evil governments don’t have the tools to create a horrific totalitarian state. And while good laws and regulations in Western democracies are a great second line of defense, they can’t be our only line of defense.

My guess is that we will soon reach a high-water mark of computerization and connectivity, and that afterward we will make conscious decisions about what and how we decide to interconnect. But we’re still in the honeymoon phase of connectivity. Governments and corporations are punch-drunk on our data, and the rush to connect everything is driven by an even greater desire for power and market share. One of the presentations released by Edward Snowden contained the NSA mantra: “Collect it all.” A similar mantra for the internet today might be: “Connect it all.”

The inevitable backlash will not be driven by the market. It will be deliberate policy decisions that put the safety and welfare of society above individual corporations and industries. It will be deliberate policy decisions that prioritize the security of our systems over the demands of the FBI to weaken them in order to make their law-enforcement jobs easier. It’ll be hard policy for many to swallow, but our safety will depend on it.

**********

The scenarios I’ve outlined, both the technological and economic trends that are causing them and the political changes we need to make to start to fix them, come from my years of working in internet-security technology and policy. All of this is informed by an understanding of both technology and policy. That turns out to be critical, and there aren’t enough people who understand both.

This brings me to my final plea: We need more public-interest technologists.

Over the past couple of decades, we’ve seen examples of getting internet-security policy badly wrong. I’m thinking of the FBI’s “going dark” debate about its insistence that computer devices be designed to facilitate government access, the “vulnerability equities process” about when the government should disclose and fix a vulnerability versus when it should use it to attack other systems, the debacle over paperless touch-screen voting machines, and the DMCA that I discussed above. If you watched any of these policy debates unfold, you saw policy-makers and technologists talking past each other.

Our world-size robot will exacerbate these problems. The historical divide between Washington and Silicon Valley -- the mistrust of governments by tech companies and the mistrust of tech companies by governments -- is dangerous.

We have to fix this. Getting IoT security right depends on the two sides working together and, even more important, having people who are experts in each working on both. We need technologists to get involved in policy, and we need policy-makers to get involved in technology. We need people who are experts in making both technology and technological policy. We need technologists on congressional staffs, inside federal agencies, working for NGOs, and as part of the press. We need to create a viable career path for public-interest technologists, much as there already is one for public-interest attorneys. We need courses, and degree programs in colleges, for people interested in careers in public-interest technology. We need fellowships in organizations that need these people. We need technology companies to offer sabbaticals for technologists wanting to go down this path. We need an entire ecosystem that supports people bridging the gap between technology and law. We need a viable career path that ensures that even though people in this field won’t make as much as they would in a high-tech start-up, they will have viable careers. The security of our computerized and networked future -- meaning the security of ourselves, families, homes, businesses, and communities -- depends on it.

This plea is bigger than security, actually. Pretty much all of the major policy debates of this century will have a major technological component. Whether it’s weapons of mass destruction, robots drastically affecting employment, climate change, food safety, or the increasing ubiquity of ever-shrinking drones, understanding the policy means understanding the technology. Our society desperately needs technologists working on the policy. The alternative is bad policy.

**********

The world-size robot is less designed than created. It’s coming without any forethought or architecting or planning; most of us are completely unaware of what we’re building. In fact, I am not convinced we can actually design any of this. When we try to design complex sociotechnical systems like this, we are regularly surprised by their emergent properties. The best we can do is observe and channel these properties as best we can.

Market thinking sometimes makes us lose sight of the human choices and autonomy at stake. Before we get controlled -- or killed -- by the world-size robot, we need to rebuild confidence in our collective governance institutions. Law and policy may not seem as cool as digital tech, but they’re also places of critical innovation. They’re where we collectively bring about the world we want to live in.

While I might sound like a Cassandra, I’m actually optimistic about our future. Our society has tackled bigger problems than this one. It takes work and it’s not easy, but we eventually find our way clear to make the hard choices necessary to solve our real problems.

The world-size robot we’re building can only be managed responsibly if we start making real choices about the interconnected world we live in. Yes, we need security systems as robust as the threat landscape. But we also need laws that effectively regulate these dangerous technologies. And, more generally, we need to make moral, ethical, and political decisions on how those systems should work. Until now, we’ve largely left the internet alone. We gave programmers a special right to code cyberspace as they saw fit. This was okay because cyberspace was separate and relatively unimportant: That is, it didn’t matter. Now that that’s changed, we can no longer give programmers and the companies they work for this power. Those moral, ethical, and political decisions need, somehow, to be made by everybody. We need to link people with the same zeal that we are currently linking machines. “Connect it all” must be countered with “connect us all.”

This essay previously appeared in New York Magazine.

Research into IoT Security Is Finally Legal

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2016/11/research_into_i.html

For years, the DMCA has been used to stifle legitimate research into the security of embedded systems. Finally, the research exemption to the DMCA is in effect (for two years, but we can hope it’ll be extended forever).

Notes on that StJude/MuddyWatters/MedSec thing

Post Syndicated from Robert Graham original http://blog.erratasec.com/2016/08/notes-on-that-stjudemuddywattersmedsec.html

I thought I’d write up some notes on the StJude/MedSec/MuddyWaters affair. Some references: [1] [2] [3] [4].

The story so far

tl;dr: hackers drop 0day on medical device company hoping to profit by shorting their stock

St Jude Medical (STJ) is one of the largest providers of pacemakers (aka cardiac devices) in the country, with around $2.5 billion in pacemaker revenue, which accounts for about half their business. They provide “smart” pacemakers with an on-board computer that talks via radio waves to a nearby monitor that records the functioning of the device (and health data). That monitor, “Merlin@home”, then talks back up to St Jude (via phone lines, 3G cell phone, or wifi). Pretty much all pacemakers work that way (my father’s does, although his is from a different vendor).

MedSec is a bunch of cybersecurity researchers (white-hat hackers) who have been investigating medical devices. In theory, their primary business is to sell their services to medical device companies, to help companies secure their devices. Their CEO is Justine Bone, a long-time white-hat hacker. Despite Muddy Waters garbling the research, there’s no reason to doubt that there’s quality research underlying all this.

Muddy Waters is an investment company known for investigating companies, finding problems like accounting fraud, and profiting by shorting the stock of misbehaving companies.

Apparently, MedSec did a survey of many pacemaker manufacturers, chose the one with the most cybersecurity problems, and went to Muddy Waters with their findings, asking for a share of the profits Muddy Waters got from shorting the stock.

Muddy Waters published their findings in [1] above. St Jude published their response in [2] above. They are both highly dishonest. I point that out because people want to discuss the ethics of using 0day to short stock when we should talk about the ethics of lying.

“Why you should sell the stock” [finance issues]

In this section, I try to briefly summarize Muddy Waters’ argument for why St Jude’s stock will drop. I’m not an expert in this area (though I do a fair amount of investing), but their argument does seem flimsy to me.

Muddy Waters’ argument is that these pacemakers are half of St Jude’s business, and that fixing them will first require recalling them all, then take another two years to fix, during which time they can’t be selling pacemakers. Much of the Muddy Waters paper is taken up explaining this, citing similar medical cases, and so on.

If at all true, and if the cybersecurity claims hold up, then yes, this would be a good reason to short the stock. However, I suspect they aren’t true — and that Muddy Waters is simply trying to scare people about long-term consequences so it can profit in the short term.

@selenakyle on Twitter suggests this interesting document [4] about market solutions to vuln disclosure, if you are interested in this angle of things.

Update from @lippard: Abbott Labs agreed in April to buy St Jude at $85 a share (when St Jude’s stock was $60/share). Presumably, for this Muddy Waters attack on St Jude’s stock price to profit from anything more than a really short-term stock drop (like dumping their short position today), Muddy Waters would have to believe this effort will cause Abbott Labs to walk away from the deal. Normally, there are penalties for doing so, but material things like massive vulnerabilities in a product should allow Abbott Labs to walk away without penalties.

The 0day being dropped

Well, they didn’t actually drop 0day as such, just claims that 0day exists — that it’s been “demonstrated”. Reading through their document a few times, I’ve created a list of the 0day they found, to the granularity that one would expect from CVE numbers (CVE is a group within the Department of Homeland Security that assigns standard reference numbers to discovered vulnerabilities).

The first two, which can kill somebody, are the salient ones. The others are more normal cybersecurity issues, and may be of concern because they can leak HIPAA-protected info.

CVE-2016-xxxx: Pacemaker can be crashed, leading to death
Within a reasonable distance (under 50 feet), pounding the pacemaker with malformed packets over several hours (either from an SDR or a hacked version of the Merlin@home monitor) can crash it. Sometimes such crashes will brick the device; other times they put it into a state that may kill the patient by zapping the heart too quickly.

CVE-2016-xxxx: Pacemaker power can be drained, leading to death
Within a reasonable distance (under 50 feet) over several days, the pacemaker’s power can slowly be drained at the rate of 3% per hour. While the user will receive a warning from their Merlin@home monitoring device that the battery is getting low, it’s possible the battery may be fully depleted before they can get to a doctor for a replacement. A non-functioning pacemaker may lead to death.

CVE-2016-xxxx: Pacemaker uses unauthenticated/unencrypted RF protocol
The above two items are possible because there is no encryption nor authentication in the wireless protocol, allowing any evildoer access to the pacemaker device or the monitoring device.

CVE-2016-xxxx: Merlin@home contained hard-coded credentials and SSH keys
The password to connect to the St Jude network is the same for all devices, and thus is easily reverse engineered.

CVE-2016-xxxx: local proximity wand not required
It’s unclear in the report, but it seems that most other products require a wand in local proximity (inches) in order to enable communication with the pacemaker. This seems like a requirement — otherwise, even with authentication, remote RF would be able to drain the device in the person’s chest.

So these are, as far as I can tell, the explicit bugs they outline. Unfortunately, none are described in detail. I don’t see enough detail for any of these to actually be assigned a CVE number. I’m being generous here, trying to describe them as such and giving them the benefit of the doubt; still, there’s enough weasel language in there to make me doubt all of them. Though, if the first two prove not to be reproducible, then there will be a great defamation case, so I presume those two are true.
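
To give a sense of what the unauthenticated, unencrypted RF protocol above is missing, here is a minimal sketch of authenticated command framing using only Python’s standard library. The key handling, counter scheme, and command names are invented for illustration and say nothing about how St Jude’s actual protocol works; a real implantable device would need a vetted protocol, proper key management, and encryption of the payload, none of which is shown here.

    # Sketch: what "authenticated" framing could look like, in contrast to the
    # unauthenticated RF protocol described above. Authenticates and rejects
    # replays, but does not encrypt; purely illustrative.
    import hmac
    import hashlib
    import secrets

    KEY = secrets.token_bytes(32)  # hypothetical per-device secret

    def frame(command: bytes, counter: int) -> bytes:
        """Append a message counter (anti-replay) and an HMAC-SHA256 tag."""
        body = counter.to_bytes(8, "big") + command
        tag = hmac.new(KEY, body, hashlib.sha256).digest()
        return body + tag

    def verify(packet: bytes, last_counter: int) -> bytes:
        """Reject forged or replayed packets; return the command if valid."""
        body, tag = packet[:-32], packet[-32:]
        expected = hmac.new(KEY, body, hashlib.sha256).digest()
        if not hmac.compare_digest(tag, expected):
            raise ValueError("bad tag: packet not from a key holder")
        counter = int.from_bytes(body[:8], "big")
        if counter <= last_counter:
            raise ValueError("replayed packet")
        return body[8:]

    pkt = frame(b"REQUEST_TELEMETRY", counter=1)
    print(verify(pkt, last_counter=0))  # -> b'REQUEST_TELEMETRY'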

The movie/TV plot scenarios

So if you wanted to use this as a realistic TV/movie plot, here are two of them.
#1 You (the executive of the acquiring company) are meeting with the CEO and executives of a smaller company you want to buy. It’s a family concern, and the CEO really doesn’t want to sell. But you know his/her children want to sell. Therefore, during the meeting, you pull out your notebook and an SDR device and put it on the conference room table. You start running the exploit to crash that CEO’s pacemaker. It crashes, the CEO grabs his/her chest and gets carted off to the hospital. The children continue negotiations, selling off their company.

#2 You are a hacker in Russia going after a target. After many phishing attempts, you finally break into the home desktop computer. From that computer, you branch out and connect to the Merlin@home devices through the hard-coded password. You then run an exploit from the device, using that device’s own radio, to slowly drain the battery from the pacemaker, day after day, while the target sleeps. You patch the software so it no longer warns the user that the battery is getting low. The battery dies, and a few days later, while the victim is digging a ditch, s/he falls over dead from heart failure.

The Muddy Waters document is crap

There are many ethical issues, but the first should be the dishonesty and spin of the Muddy Waters research report.

The report is clearly designed to scare other investors to drop St Jude stock price in the short term so that Muddy Waters can profit. It’s not designed to withstand long term scrutiny. It’s full of misleading details and outright lies.

For example, it keeps stressing how shockingly bad the security vulnerabilities are, such as saying:

We find STJ Cardiac Devices’ vulnerabilities orders of magnitude more worrying than the medical device hacks that have been publicly discussed in the past. 

This is factually untrue. St Jude’s problems are no worse than the 2013 issue where doctors disabled the RF capabilities of Dick Cheney’s pacemaker in response to disclosures. They are no worse than that insulin pump hack. Bad cybersecurity is the norm for medical devices. St Jude may be among the worst, but not by an order of magnitude.

The term “orders of magnitude” is math, by the way, and means “at least 100 times worse”. As an expert, I claim these problems are not even one order of magnitude (10 times worse). I challenge MedSec’s experts to stand behind the claim that these vulnerabilities are at least 100 times worse than other public medical device hacks.

In many places, the language is wishy-washy. Consider this quote:

Despite having no background in cybersecurity, Muddy Waters has been able to replicate in-house key exploits that help to enable these attacks

The semantic content of this is nil. It says they weren’t able to replicate the attacks themselves. They don’t have sufficient background in cybersecurity to understand what they replicated.

Such language is pervasive throughout the document, things that aren’t technically lies, but which aren’t true, either.

Also pervasive throughout the document, repeatedly interjected for no reason in the middle of text, are statements like this, repeatedly stressing why you should sell the stock:

Regardless, we have little doubt that STJ is about to enter a period of protracted litigation over these products. Should these trials reach verdicts, we expect the courts will hold that STJ has been grossly negligent in its product design. (We estimate awards could total $6.4 billion.)

I point this out because Muddy Waters obviously doesn’t feel the content of the document stands on its own, so that you can make this conclusion yourself. It instead feels the need to repeat this message over and over on every page.

Muddy Waters’ violation of Kerckhoffs’ Principle

One of the most important principles of cybersecurity is Kerckhoffs’ Principle: that more openness is better. Or, phrased another way, that trying to achieve security through obscurity is bad.

The Muddy Waters document attempts to violate this principle. Besides the individual vulnerabilities, it makes the claim that St Jude’s cybersecurity is inherently bad because it’s open: it uses off-the-shelf chips, standard software (like Linux), and standard protocols. St Jude does nothing to hide or obfuscate these things.

Everyone in cybersecurity would agree this is good. Muddy Waters claims this is bad.

For example, some of their quotes:

One competitor went as far as developing a highly proprietary embedded OS, which is quite costly and rarely seen

In contrast, the other manufacturers have proprietary RF chips developed specifically for their protocols

Again, as a cybersecurity expert in this case, I challenge MedSec to publicly defend Muddy Waters on these claims.

Medical device manufacturers should do the opposite of what Muddy Waters claims. I’ll explain why.

Either your system is secure or it isn’t. If it’s secure, then making the details public won’t hurt you. If it’s insecure, then making the details obscure won’t help you: hackers are far more adept at reverse engineering than you can possibly understand. Making things obscure, though, does stop helpful hackers (i.e. cybersecurity consultants you hire) from making your system secure, since it’s hard figuring out the details.

Said another way: your adversaries (such as me) hate seeing open systems that are obviously secure. We love seeing obscure systems, because we know you couldn’t possibly have validated their security.

The point is this: Muddy Waters is trying to profit from the public’s misconception about cybersecurity, namely that obscurity is good. The actual principle is that obscurity is bad.

St Jude’s response was no better

In response to the Muddy Waters document, St Jude published this document [2]. It’s equally full of lies — the sort that may deserve a shareholder lawsuit. (I see lawsuits galore over this). It says the following:

We have examined the allegations made by Capital and MedSec on August 25, 2016 regarding the safety and security of our pacemakers and defibrillators, and while we would have preferred the opportunity to review a detailed account of the information, based on available information, we conclude that the report is false and misleading.

If that’s true, if they can prove this in court, then that will mean they could win millions in a defamation lawsuit against Muddy Waters, and millions more for stock manipulation.

But it’s almost certainly not true. Without authentication/encryption, the fact that hackers can crash/drain a pacemaker is pretty obvious, especially since (as claimed by Muddy Waters) they’ve successfully done it. Specifically, the picture on page 17 of the 34-page Muddy Waters document is a smoking gun of a pacemaker misbehaving.

The rest of their document contains weasel-word denials that may be technically true, but which have no meaning.

St. Jude Medical stands behind the security and safety of our devices as confirmed by independent third parties and supported through our regulatory submissions. 

Our software has been evaluated and assessed by several independent organizations and researchers including Deloitte and Optiv.

In 2015, we successfully completed an upgrade to the ISO 27001:2013 certification.

These are all myths of the cybersecurity industry. Conformance with security standards, such as ISO 27001:2013, has absolutely zero bearing on whether you are secure. Having some consultants/white-hat claim your product is secure doesn’t mean other white-hat hackers won’t find an insecurity.

Indeed, having been assessed by Deloitte is a good indicator that something is wrong. It’s not that they are incompetent (they’ve got some smart people working for them), but ultimately the way the security market works is that you demand of such auditors that they find reasons to believe your product is secure, not that they keep hunting until something insecure is found. That’s why outsiders, like MedSec, are better: they strive to find why your product is insecure. The bigger the enemy, the more resources they’ll put into finding a problem.

It’s like after you get a haircut: your enemies and your friends will have different opinions on your new look. Enemies are more honest.

The most obvious lie from the St Jude response is the following:

The report claimed that the battery could be depleted at a 50-foot range. This is not possible since once the device is implanted into a patient, wireless communication has an approximate 7-foot range. This brings into question the entire testing methodology that has been used as the basis for the Muddy Waters Capital and MedSec report.

That’s not how wireless works. With directional antennas and amplifiers, 7 feet easily becomes 50 feet or more. Even without that, something designed for reliable operation at 7 feet often still works, less reliably, at 50 feet. There’s no cutoff at 7 feet inside of which it will work and outside of which it won’t.
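
A back-of-the-envelope free-space estimate shows why: going from 7 feet to 50 feet adds roughly 17 dB of path loss, which a directional antenna plus a modest amplifier can plausibly recover. The antenna and amplifier gains below are illustrative assumptions, not measurements of anyone’s actual equipment.

    # Free-space path loss grows with 20*log10(distance), so the extra loss from
    # stretching 7 feet to 50 feet is a fixed number of dB at any frequency.
    import math

    extra_loss_db = 20 * math.log10(50 / 7)  # ~17.1 dB more loss at 50 ft
    antenna_gain_db = 12                     # assumed small directional antenna
    amplifier_gain_db = 10                   # assumed inexpensive RF amplifier

    print(f"extra path loss: {extra_loss_db:.1f} dB")
    print(f"net link-budget change: {antenna_gain_db + amplifier_gain_db - extra_loss_db:+.1f} dB")
    # -> about +4.9 dB, i.e. the attacker's 50-foot link can be better than the
    #    stock 7-foot link under these assumed gains.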

That St Jude deliberately lies here brings into question their entire rebuttal. (see what I did there?)

ETHICS ETHICS ETHICS

First let’s discuss the ethics of lying, using weasel words, and being deliberately misleading. Both St Jude and Muddy Waters do this, and it’s ethically wrong. I point this out for readers who are uninterested in it and want to get to that other ethical issue. Clear violations of ethics that we all agree on interest nobody — but they ought to. We should be lambasting Muddy Waters for their clear ethical violations, not the unclear one.

So let’s get to the ethical issue everyone wants to discuss:

Is it ethical to profit from shorting stock while dropping 0day?

Let’s discuss some of the issues.

There’s no insider trading. Some people wonder if there are insider trading issues. There aren’t. While it’s true that Muddy Waters knew some secrets that nobody else knew, as long as they weren’t insider secrets, it’s not insider trading. In other words, only insiders know about a key customer contract won or lost recently. But vulnerabilities researched by outsiders are still outsider knowledge.

Watching a CEO walk into the building of a competitor is still outsider knowledge — you can trade on the likely merger, even though insider employees cannot.

Dropping 0day might kill/harm people. That may be true, but that’s never an ethical reason not to drop it. That’s because it’s not this one event in isolation. If companies knew ethical researchers would never drop an 0day, then they’d never patch it. It’s like the government’s warrantless surveillance of American citizens: the courts won’t let us challenge it, because we can’t prove it exists, and we can’t prove it exists, because the courts allow it to be kept secret, because revealing the surveillance would harm national intelligence. That harm may happen is not a reason to stop the right thing from happening.

In other words, in the long run, dropping this 0day doesn’t necessarily harm people — and thus profiting on it is not an ethical issue. We need incentives to find vulns. This moves the debate from an ethical one to more of a factual debate about the long-term/short-term risk from vuln disclosure.

As MedSec points out, St Jude has already proven itself an untrustworthy consumer of vulnerability disclosures. When that happens, dropping 0day is ethically permissible even under "responsible disclosure". Indeed, that St Jude then lied about it in their response justifies, ex post facto, the dropping of the 0day.

No 0day was actually dropped here. In this case, what was dropped was claims of 0day. This may be good or bad, depending on your arguments. It’s good that the vendor will have some extra time to fix the problems before hackers can start exploiting them. It’s bad because we can’t properly evaluate the true impact of the 0day unless we get more detail — allowing Muddy Waters to exaggerate and mislead people in order to move the stock more than is warranted.

In other words, the lack of actual 0day here is the problem — actual 0day would’ve been better.

This 0day is not necessarily harmful. Okay, it is harmful, but it requires close proximity. It's not as if a hacker can reach out from across the world and kill everyone (barring my movie-plot section above). If you are within 50 feet of somebody, it's easier to shoot, stab, or poison them.

Shorting on bad news is common. Before we address whether this is unethical for cybersecurity researchers specifically, we should first address the ethics of anybody doing it. Muddy Waters already does this by investigating companies for fraudulent accounting practices, then shorting the stock while revealing the fraud.

Yes, it's bad that Muddy Waters profits on the misfortunes of others, but it's those others who are committing fraud — they deserve it. [Snide capitalism trigger warning] To claim this is unethical means you are a typical socialist who believes the State should defend companies, even those doing illegal things, in order to stop illegitimate/windfall profits. To support the ethics of this means you are a capitalist, who believes companies should succeed or fail on their own merits — which means bad companies need to fail, and investors in those companies should lose money.

Yes, this is bad for cybersec research. There is constant tension between cybersecurity researchers doing "responsible" (sic) research and companies lobbying Congress to pass laws against it. We saw this recently when Detroit lobbied for DMCA (copyright) rules to bar security research, and the DMCA regulators gave us an exemption anyway. MedSec's action means all medical device manufacturers will now lobby Congress for rules to stop MedSec — and the rest of us security researchers. The lack of public research means medical devices will continue to be flawed, which is worse for everyone.

Personally, I don't care about this argument. How others might respond badly to my actions is not an ethical constraint on my actions. It's like speech: that others may be triggered into lobbying for anti-speech laws is still not a constraint on what ethics allow me to say.

There were no lies or betrayal in the research. For me, “ethics” is usually a problem of lying, cheating, theft, and betrayal. As long as these things don’t happen, then it’s ethically okay. If MedSec had been hired by St Jude, had promised to keep things private, and then later disclosed them, then we’d have an ethical problem. Or consider this: frequently clients ask me to lie or omit things in pentest reports. It’s an ethical quagmire. The quick answer, by the way, is “can you make that request in writing?”. The long answer is “no”. It’s ethically permissible to omit minor things or do minor rewording, but not when it impinges on my credibility.

A life is worth about $10 million. Most people agree that "you can't put a value on a human life", and that those who do are evil. The opposite is true. Should we spend more on airplane safety, breast cancer research, or the military budget to fight ISIS? Each can be measured in the number of lives saved. Should we spend more on breast cancer research, which affects people in their 30s, or on heart disease, which affects people in their 70s? All these decisions mean putting a value on human life, and sometimes putting different values on different lives. Whether or not you think it's ethical, it's the way the world works.

Thus, we can measure this disclosure of 0day in terms of the potential value of lives lost vs. the potential value of lives saved.
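
To make that concrete, here is a toy expected-value comparison. Every number below is a made-up illustration of the kind of trade-off being described, not an estimate from MedSec, Muddy Waters, or St Jude:

    VALUE_OF_LIFE = 10_000_000  # the roughly $10 million figure discussed above

    # Hypothetical inputs, purely for illustration:
    p_exploited_before_fix = 0.001   # chance someone weaponizes the flaw first
    lives_lost_if_exploited = 5
    p_disclosure_forces_fix = 0.5    # chance the disclosure gets the flaw fixed
    lives_saved_by_fix = 50

    expected_cost = p_exploited_before_fix * lives_lost_if_exploited * VALUE_OF_LIFE
    expected_benefit = p_disclosure_forces_fix * lives_saved_by_fix * VALUE_OF_LIFE

    print(expected_cost, expected_benefit)  # 50000.0 vs 250000000.0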

Is this market manipulation? This is more of a legal question than an ethical one, but people are discussing it. If the data is true, then it’s not “manipulation” — only if it’s false. As documented in this post, there’s good reason to doubt the complete truth of what Muddy Waters claims. I suspect it’ll cost Muddy Waters more in legal fees in the long run than they could possibly hope to gain in the short run. I recommend investment companies stick to areas of their own expertise (accounting fraud) instead of branching out into things like cyber where they really don’t grasp things.

This is again bad for security research. Frankly, we aren't a trusted community, because we claim the "sky is falling" too often and are proven wrong. When this is shown to be market manipulation, as the stock recovers to its former level and the scary stories of mass product recalls fail to emerge, we'll be blamed yet again for being wrong. That hurts our credibility.

On the other hand, if any of the scary things Muddy Waters claims actually come to pass, then maybe people will start heeding our warnings.

Ethics conclusion: I'm a die-hard troll, so naturally I'm going to vigorously defend the idea of shorting stock while dropping 0day. (Most of you appear to think it's unethical — I therefore must disagree with you.) But I'm also a capitalist. This case creates an incentive to drop harmful 0days — but it creates an even greater incentive for device manufacturers not to have 0days to begin with. Thus, despite being a dishonest troll, I do sincerely support the ethics of this.

Conclusion

The two 0days are about crashing the device (killing the patient sooner) or draining the battery (killing them later). Both attacks require hours (if not days) in close proximity to the target. If you can get onto the local network (such as through phishing), you might be able to hack the Merlin@home monitor, which sits in close proximity to the target for hours every night.

Muddy Waters thinks the security problems are severe enough that it’ll destroy St Jude’s $2.5 billion pacemaker business. The argument is flimsy. St Jude’s retort is equally flimsy.

My prediction: a year from now we'll see little change in St Jude's pacemaker business earnings, though there may be some one-time costs cleaning things up. This will stop the shenanigans of future 0day+shorting, even when it's valid, because nobody will believe researchers.

More on the Vulnerabilities Equities Process

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2016/08/more_on_the_vul.html

The Open Technology Institute of the New America Foundation has released a policy paper on the vulnerabilities equities process: “Bugs in the System: A Primer on the Software Vulnerability Ecosystem and its Policy Implications.”

Their policy recommendations:

  • Minimize participation in the vulnerability black market.
  • Establish strong, clear procedures for disclosing the vulnerabilities the government discovers or acquires.
  • Establish rules for government hacking.
  • Support bug bounty programs.
  • Reform the DMCA and CFAA so they encourage responsible vulnerability disclosure.

It’s a good document, and worth reading.

EFF Lawsuit Takes on DMCA Section 1201: Research and Technology Restrictions Violate the First Amendment

Post Syndicated from jake original http://lwn.net/Articles/695118/rss

The Electronic Frontier Foundation (EFF) has announced that it is suing the US government over provisions in the Digital Millennium Copyright Act (DMCA). The suit has been filed on behalf of Andrew “bunnie” Huang, who has a blog post describing the reasons behind the suit. The EFF also explained why these DMCA provisions should be ruled unconstitutional:
"These provisions—contained in Section 1201 of the DMCA—make it unlawful for people to get around the software that restricts access to lawfully-purchased copyrighted material, such as films, songs, and the computer code that controls vehicles, devices, and appliances. This ban applies even where people want to make noninfringing fair uses of the materials they are accessing.

Ostensibly enacted to fight music and movie piracy, Section 1201 has long served to restrict people's ability to access, use, and even speak out about copyrighted materials—including the software that is increasingly embedded in everyday things. The law imposes a legal cloud over our rights to tinker with or repair the devices we own, to convert videos so that they can play on multiple platforms, remix a video, or conduct independent security research that would reveal dangerous security flaws in our computers, cars, and medical devices. It criminalizes the creation of tools to let people access and use those materials."

GitHub’s 2015 Transparency Report

Post Syndicated from ris original http://lwn.net/Articles/692959/rss

GitHub has published its 2015 transparency report. "This 2015 report details the types of requests we receive for user accounts, user content, information about our users, and other such information, and how we process those requests. Transparency and trust are essential to GitHub and to the open source community, and giving you access to information about these requests can protect you, protect us, and help you feel safe as you work on GitHub." The report notes that a significant number of requests for removal of content are notices submitted under the Digital Millennium Copyright Act, or the DMCA.

Conservancy’s Year In Review 2015

Post Syndicated from Bradley M. Kuhn original http://ebb.org/bkuhn/blog/2015/12/18/conservancy-yir.html

If you've noticed my blog has been a little silent the past few weeks, it's because I've been spending my blogging time in December writing blogs on Conservancy's site for Conservancy's 2015: Year in Review series.

So far, these are the ones that were posted:

  • Karen Sandler Speaks about IRS Charity Issues
  • Bradley M. Kuhn Speaks About Future of Copyleft
  • Bradley and Karen Speak at FOSDEM 2015
  • Conservancy Wins DMCA Exception for Smart TVs

Generally speaking, if you want to keep up with my work, you probably should subscribe not only to my blog but also to Conservancy's. I tend to crosspost the more personal pieces, but if something is purely a Conservancy matter and doesn't relate to usual things I write about here, I don't crosspost.

The Internet of Incompatible Things

Post Syndicated from Matthew Garrett original http://mjg59.dreamwidth.org/37522.html

I have an Amazon Echo. I also have a LIFX Smart Bulb. The Echo can integrate with Philips Hue devices, letting you control your lights by voice. It has no integration with LIFX. Worse, the Echo developer program is fairly limited – while the device's built in code supports communicating with devices on your local network, the third party developer interface only allows you to make calls to remote sites[1]. It seemed like I was going to have to put up with either controlling my bedroom light by phone or actually getting out of bed to hit the switch.

Then I found this article describing the implementation of a bridge between the Echo and Belkin Wemo switches, cunningly called Fauxmo. The Echo already supports controlling Wemo switches, and the code in question simply implements enough of the Wemo API to convince the Echo that there's a bunch of Wemo switches on your network. When the Echo sends a command to them asking them to turn on or off, the code executes an arbitrary callback that integrates with whatever API you want.

This seemed like a good starting point. There's a free implementation of the LIFX bulb API called Lazylights, and with a quick bit of hacking I could use the Echo to turn my bulb on or off. But the Echo's Hue support also allows dimming of lights, and that seemed like a nice feature to have. Tcpdump showed that asking the Echo to look for Hue devices resulted in similar UPnP discovery requests to it looking for Wemo devices, so extending the Fauxmo code seemed plausible. I signed up for the Philips developer program and then discovered that the terms and conditions explicitly forbade using any information on their site to implement any kind of Hue-compatible endpoint. So that was out. Thankfully enough people have written their own Hue code at various points that I could figure out enough of the protocol by searching Github instead, and now I have a branch of Fauxmo that supports searching for LIFX bulbs and presenting them as Hues[2].

Running this on a machine on my local network is enough to keep the Echo happy, and I can now dim my bedroom light in addition to turning it on or off. But it demonstrates a somewhat awkward situation. Right now vendors have no real incentive to offer any kind of compatibility with each other. Instead they're all trying to define their own ecosystems with their own incompatible protocols with the aim of forcing users to continue buying from them. Worse, they attempt to restrict developers from implementing any kind of compatibility layers. The inevitable outcome is going to be either stacks of discarded devices speaking abandoned protocols or a cottage industry of developers writing bridge code and trying to avoid DMCA takedowns.

The dystopian future we're heading towards isn't Gibsonian giant megacorporations engaging in physical warfare, it's one where buying a new toaster means replacing all your lightbulbs or discovering that the code making your home alarm system work is now considered a copyright infringement. Is there a market where I can invest in IP lawyers?

[1] It also requires an additional phrase at the beginning of a request to indicate which third party app you want your query to go to, so it's much more clumsy to make those requests compared to using a built-in app.

[2] I only have one bulb, so as yet I haven't added any support for groups.
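
For a sense of what such a bridge involves, here is a minimal sketch of my own (not Fauxmo's actual code; the Belkin search-target string, headers, and addresses are assumptions for illustration). The core trick is simply answering the Echo's SSDP discovery probes as if a Wemo device were on the network:

    import socket
    import struct

    SSDP_GROUP = "239.255.255.250"  # standard SSDP multicast address
    SSDP_PORT = 1900

    def serve_fake_wemo(location_url):
        """Join the SSDP multicast group and answer M-SEARCH probes that look
        like an Echo hunting for Belkin Wemo devices."""
        sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM, socket.IPPROTO_UDP)
        sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        sock.bind(("", SSDP_PORT))
        mreq = struct.pack("4sl", socket.inet_aton(SSDP_GROUP), socket.INADDR_ANY)
        sock.setsockopt(socket.IPPROTO_IP, socket.IP_ADD_MEMBERSHIP, mreq)

        while True:
            data, addr = sock.recvfrom(2048)
            msg = data.decode(errors="ignore")
            # Only answer discovery probes that mention the Wemo device type.
            if "M-SEARCH" in msg and "urn:Belkin:device:**" in msg:
                reply = (
                    "HTTP/1.1 200 OK\r\n"
                    "CACHE-CONTROL: max-age=86400\r\n"
                    "EXT:\r\n"
                    "LOCATION: " + location_url + "\r\n"
                    "ST: urn:Belkin:device:**\r\n"
                    "USN: uuid:Socket-1_0-fakebulb::urn:Belkin:device:**\r\n"
                    "\r\n"
                )
                sock.sendto(reply.encode(), addr)

    # Hypothetical usage: the URL would point at a tiny HTTP server that serves
    # a Wemo-style setup.xml and handles the Echo's SOAP on/off requests, which
    # is where the bridge runs whatever callback you like (e.g. a Lazylights call).
    # serve_fake_wemo("http://192.168.1.50:12340/setup.xml")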

Everyone in USA: Comment against ACTA today!

Post Syndicated from Bradley M. Kuhn original http://ebb.org/bkuhn/blog/2011/02/15/acta.html

In the USA, the deadline for comments on ACTA is today (Tuesday 15 February 2011) at 17:00 US/Eastern. It's absolutely imperative that every USA citizen submit a comment on this. The Free Software Foundation has details on how to do so.

ACTA is a dangerous international agreement that would establish
additional criminal penalties, promulgate DMCA/EUCD-like legislation
around the world, and otherwise extend copyright law into places it
should not go. Copyright law is already much stronger than
anyone needs.

On a meta-point, it’s extremely important that USA citizens participate
in comment processes like this. The reason that things like ACTA can
happen in the USA is because most of the citizens don’t pay attention.
By way of hyperbolic fantasy, imagine if every citizen of the
USA wrote a letter today to Mr. McCoy about ACTA. It’d be a news story
on all the major news networks tonight, and would probably be in the
headlines in print/online news stories tomorrow. Our whole country
would suddenly be debating whether or not we should have criminal
penalties for copying TV shows, and whether breaking a DVD’s DRM should
be illegal.

Obviously, that fantasy won’t happen, but getting from where we are to
that wonderful fantasy is actually linear; each person who
writes to Mr. McCoy today makes a difference! Please take 15 minutes
out of your day today and do so. It’s the least you can do on this
issue.

The Free Software Foundation has a sample letter you can use if you don't have time to write your own. I wrote my own, giving some of my unique perspective, which I include below.

The automated system on regulations.gov assigned the comment below the tracking number of 80bef9a1 (cool, it's in hex! 🙂

Stanford K. McCoy
Assistant U.S. Trade Representative for Intellectual Property and Innovation
Office of the United States Trade Representative
600 17th St NW
Washington, DC 20006

Re: ACTA Public Comments (Docket no. USTR-2010-0014)

Dear Mr. McCoy:

I am a USA citizen writing to urge that the USA not sign
ACTA. Copyright law already reaches too far. ACTA would extend
problematic, overly-broad copyright rules around the world and would
increase the already inappropriate criminal penalties for copyright
infringement here in the USA.

Both individually and as an agent of my employer, I am regularly involved
in copyright enforcement efforts to defend the Free Software license
called the GNU General Public License (GPL). I therefore think my
perspective can be uniquely contrasted with other copyright holders who
support ACTA.

Specifically, when engaging in copyright enforcement for the GPL, we treat
it as purely a civil issue, not a criminal one. We have been successful
in defending the rights of software authors in this regard without the
need for criminal penalties for the rampant copyright infringement that we
often encounter.

I realize that many powerful corporate copyright holders wish to see
criminal penalties for copyright infringement expanded. As someone who
has worked in the area of copyright enforcement regularly for 12 years, I
see absolutely no reason that any copyright infringement of any kind ever
should be considered a criminal matter. Copyright holders who believe
their rights have been infringed have the full power of civil law to
defend their rights. Using the power of government to impose criminal
penalties for copyright infringement is an inappropriate use of government
to interfere in civil disputes between its citizens.

Finally, ACTA would introduce new barriers for those of us trying to
change our copyright law here in the USA. The USA should neither impose
its desired copyright regime on other countries, nor should the USA bind
itself in international agreements on an issue where its citizens are in
great disagreement about correct policy.

Thank you for considering my opinion, and please do not allow the USA to
sign ACTA.

Sincerely,
Bradley M. Kuhn