Tag Archives: botnet

Class Breaks

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2017/01/class_breaks.html

There’s a concept from computer security known as a class break. It’s a particular security vulnerability that breaks not just one system, but an entire class of systems. Examples might include a vulnerability in a particular operating system that allows an attacker to take remote control of every computer that runs that operating system. Or a vulnerability in Internet-enabled digital video recorders and webcams that allows an attacker to recruit those devices into a massive botnet.

It’s a particular way computer systems can fail, exacerbated by the characteristics of computers and software. It only takes one smart person to figure out how to attack the system. Once he does that, he can write software that automates his attack. He can do it over the Internet, so he doesn’t have to be near his victim. He can automate his attack so it works while he sleeps. And then he can pass the ability to someone — or to lots of people — without the skill. This changes the nature of security failures, and completely upends how we need to defend against them.

An example: Picking a mechanical door lock requires both skill and time. Each lock is a new job, and success at one lock doesn’t guarantee success with another of the same design. Electronic door locks, like the ones you now find in hotel rooms, have different vulnerabilities. An attacker can find a flaw in the design that allows him to create a key card that opens every door. If he publishes his attack software, not just the attacker, but anyone can now open every lock. And if those locks are connected to the Internet, attackers could potentially open door locks remotely — they could open every door lock remotely at the same time. That’s a class break.

It’s how computer systems fail, but it’s not how we think about failures. We still think about automobile security in terms of individual car thieves manually stealing cars. We don’t think of hackers remotely taking control of cars over the Internet. Or, remotely disabling every car over the Internet. We think about voting fraud as unauthorized individuals trying to vote. We don’t think about a single person or organization remotely manipulating thousands of Internet-connected voting machines.

In a sense, class breaks are not a new concept in risk management. It’s the difference between home burglaries and fires, which happen occasionally to different houses in a neighborhood over the course of the year, and floods and earthquakes, which either happen to everyone in the neighborhood or no one. Insurance companies can handle both types of risk, but they are inherently different. The increasing computerization of everything is moving us from a burglary/fire risk model to a flood/earthquake model, in which a given threat either affects everyone in town or doesn’t happen at all.

But there’s a key difference between floods/earthquakes and class breaks in computer systems: the former are random natural phenomena, while the latter is human-directed. Floods don’t change their behavior to maximize their damage based on the types of defenses we build. Attackers do that to computer systems. Attackers examine our systems, looking for class breaks. And once one of them finds one, they’ll exploit it again and again until the vulnerability is fixed.

As we move into the world of the Internet of Things, where computers permeate our lives at every level, class breaks will become increasingly important. The combination of automation and action at a distance will give attackers more power and leverage than they have ever had before. Security notions like the precautionary principle — where the potential of harm is so great that we err on the side of not deploying a new technology without proofs of security — will become more important in a world where an attacker can open all of the door locks or hack all of the power plants. It’s not an inherently less secure world, but it’s a differently secure world. It’s a world where driverless cars are much safer than people-driven cars, until suddenly they’re not. We need to build systems that assume the possibility of class breaks — and maintain security despite them.

This essay originally appeared on Edge.org as part of their annual question. This year it was: “What scientific term or concept ought to be more widely known?”

Reduce DDoS Risks Using Amazon Route 53 and AWS Shield

Post Syndicated from Jeff Barr original https://aws.amazon.com/blogs/aws/reduce-ddos-risks-using-amazon-route-53-and-aws-shield/

In late October of 2016 a large-scale cyber attack consisting of multiple denial of service attacks targeted a well-known DNS provider. The attack, consisting of a flood of DNS lookups from tens of millions of IP addresses, made many Internet sites and services unavailable to users in North America and Europe. This Distributed Denial of Service (DDoS) attack was believed to have been executed using a botnet consisting of a multitude of Internet-connected devices such as printers, cameras, residential network gateways, and even baby monitors. These devices had been infected with the Mirai malware and generated several hundred gigabytes of traffic per second. Many corporate and educational networks simply do not have the capacity to absorb a volumetric attack of this size.

In the wake of this attack and others that have preceded it, our customers have been asking us for recommendations and best practices that will allow them to build systems that are more resilient to various types of DDoS attacks. The short-form answer involves a combination of scale, fault tolerance, and mitigation (the AWS Best Practices for DDoS Resiliency white paper goes into far more detail) and makes use of Amazon Route 53 and AWS Shield (read AWS Shield – Protect Your Applications from DDoS Attacks to learn more).

Scale – Route 53 is hosted at numerous AWS edge locations, creating a global surface area capable of absorbing large amounts of DNS traffic. Other edge-based services, including Amazon CloudFront and AWS WAF, also have a global surface area and are also able to handle large amounts of traffic.

Fault Tolerance – Each edge location has many connections to the Internet. This allows for diverse paths and helps to isolate and contain faults. Route 53 also uses shuffle sharding and anycast striping to increase availability. With shuffle sharding, each name server in your delegation set corresponds to a unique set of edge locations. This arrangement increases fault tolerance and minimizes overlap between AWS customers. If one name server in the delegation set is not available, the client system or application will simply retry and receive a response from a name server at a different edge location. Anycast striping is used to direct DNS requests to an optimal location. This has the effect of spreading load and reducing DNS latency.
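As a quick illustration, you can inspect a zone’s delegation set with dig (example.com stands in for your own hosted zone, and the name server in the second query is a placeholder for one of the four that Route 53 assigns):

# List the delegation set for a hosted zone. Route 53 typically assigns
# four name servers, spread across different edge locations and even
# different top-level domains.
dig +short NS example.com

# Query one name server directly. If it were unreachable, a resolver
# would simply retry against another server in the delegation set.
dig @ns-2048.awsdns-64.com example.com A +short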

Mitigation – AWS Shield Standard protects you from 96% of today’s most common attacks. This includes SYN/ACK floods, Reflection attacks, and HTTP slow reads. As I noted in my post above, this protection is applied automatically and transparently to your Elastic Load Balancers, CloudFront distributions, and Route 53 resources at no extra cost. Protection (including deterministic packet filtering and priority-based traffic shaping) is deployed to all AWS edge locations and inspects all traffic with just microseconds of overhead, all in a totally transparent fashion. AWS Shield Advanced includes additional DDoS mitigation capability, 24×7 access to our DDoS Response Team, real-time metrics and reports, and DDoS cost protection.

To learn more, read the DDoS Resiliency white paper and learn about Route 53 anycast.

Jeff;


Ending The Year With A 650Gbps DDoS Attack

Post Syndicated from Darknet original http://feedproxy.google.com/~r/darknethackers/~3/-ek2Q8ZIwYw/

It seems that 2016 has been the year of immense DDoS attacks, many coming from Mirai. This seems to be a newcomer though, ending the year with a 650Gbps DDoS attack. The Dyn DNS DDoS attack that some speculated reached over 1Tbps was probably the biggest, but this isn’t that far behind and it’s bigger […]

The post Ending The Year With A…

Read the full post at darknet.org.uk

That "Commission on Enhancing Cybersecurity" is absurd

Post Syndicated from Robert Graham original http://blog.erratasec.com/2016/12/that-commission-on-enhancing.html

An Obama commission has published a report on how to “Enhance Cybersecurity”. It’s promoted as having been written by neutral, bipartisan, technical experts. Instead, it’s almost entirely dominated by special interests and the Democrat politics of the outgoing administration.

In this post, I’m going through a random list of some of the 53 “action items” proposed by the document. I show how they are policy issues, not technical issues. Indeed, much of the time the technical details are warped to conform to special interests.

IoT passwords

The recommendations include such things as Action Item 2.1.4:

Initial best practices should include requirements to mandate that IoT devices be rendered unusable until users first change default usernames and passwords. 

This recommendation for changing default passwords is repeated many times. It comes from the way the Mirai worm exploits devices by using hardcoded/default passwords.

But this is a misunderstanding of how these devices work. Take, for example, the infamous Xiongmai camera. It has user accounts on the web server to control the camera. If the user forgets the password, the camera can be reset to factory defaults by pressing a button on the outside of the camera.

But here’s the deal with security cameras. They are placed at remote sites miles away, up on the second story where people can’t mess with them. In order to reset them, you need to put a ladder in your truck and drive 30 minutes out to the site, then climb the ladder (an inherently dangerous activity). Therefore, Xiongmai provides a RESET.EXE utility for remotely resetting them. That utility happens to connect via Telnet using a hardcoded password.

The above report misunderstands what’s going on here. It sees Telnet and a hardcoded password, and makes assumptions. Some people assume that this is the normal user account — it’s not, it’s unrelated to the user accounts on the web server portion of the device. Requiring the user to change the password on the web service would have no effect on the Telnet service. Other people assume the Telnet service is accidental, that good security hygiene would remove it. Instead, it’s an intended feature of the product, to remotely reset the device. Fixing the “password” issue as described in the above recommendations would simply mean the manufacturer would create a different, custom backdoor that hackers would eventually reverse engineer, creating a MiraiV2 botnet. Instead of security guides banning backdoors, they need to come up with a standard for remote reset.

That characterization of Mirai as an IoT botnet is wrong. Mirai is a botnet of security cameras. Security cameras are fundamentally different from IoT devices like toasters and fridges because they are often exposed to the public Internet. To stream video on your phone from your security camera, you need a port open on the Internet. Non-camera IoT devices, however, are overwhelmingly protected by a firewall, with no exposure to the public Internet. While you can create a botnet of Internet cameras, you cannot create a botnet of Internet toasters.

The point I’m trying to demonstrate here is that the above report was written by policy folks with little grasp of the technical details of what’s going on. They use Mirai to justify several of their “Action Items”, none of which actually apply to the technical details of Mirai. It has little to do with IoT, passwords, or hygiene.

Public-private partnerships

Action Item 1.2.1: The President should create, through executive order, the National Cybersecurity Private–Public Program (NCP3) as a forum for addressing cybersecurity issues through a high-level, joint public–private collaboration.

We’ve had public-private partnerships to secure cyberspace for over 20 years, such as the FBI InfraGard partnership. President Clinton had a plan in 1998 to create a public-private partnership to address cyber vulnerabilities. President Bush declared public-private partnerships the “cornerstone” of his 2003 plan to secure cyberspace.

Here we are 20 years later, and this document is full of new naive proposals for public-private partnerships. There’s no analysis of why they have failed in the past, or a discussion of which ones have succeeded.

The many calls for public-private programs reflect the left-wing nature of this supposed “bipartisan” document, which sees government as a paternalistic entity that can help. The right-wing doesn’t believe the government provides any value in these partnerships. In my 20 years of experience with government public-private partnerships in cybersecurity, I’ve found them to be a time waster at best and, at worst, a way to coerce “voluntary measures” out of companies that hurt the public’s interest.

Build a wall and make China pay for it

Action Item 1.3.1: The next Administration should require that all Internet-based federal government services provided directly to citizens require the use of appropriately strong authentication.

This would cost at least $100 per person, for 300 million people, or $30 billion. In other words, it’ll cost more than Trump’s wall with Mexico.

Hardware tokens are cheap. Blizzard (a popular gaming company) must deal with widespread account hacking from “gold sellers”, and provides second-factor authentication to its gamers for $6 each. But that ignores the enormous support costs involved. How does a person prove their identity to the government in order to get such a token? To replace a lost token? When old tokens break? What happens if somebody’s token is stolen?

And that’s the best case scenario. Other options, like using cellphones as a second factor, are non-starters.

This is actually not a bad recommendation, as far as government services are concerned, but it ignores the costs and difficulties involved.

But then the recommendations go on to suggest this for the private sector as well:

Specifically, private-sector organizations, including top online retailers, large health insurers, social media companies, and major financial institutions, should use strong authentication solutions as the default for major online applications.

No, no, no. There is no reason for a “top online retailer” to know your identity. I lie about my identity. Amazon.com thinks my name is “Edward Williams”, for example.

They get worse with:

Action Item 1.3.3: The government should serve as a source to validate identity attributes to address online identity challenges.

In other words, they are advocating a cyber-dystopic police-state wet-dream where the government controls everyone’s identity. We already see how this fails with Facebook’s “real name” policy, where everyone from political activists in other countries to LGBTQ in this country get harassed for revealing their real names.

Anonymity and pseudonymity are precious rights on the Internet that we now enjoy — rights endangered by the radical policies in this document. This document frequently claims to promote security “while protecting privacy”. But the government doesn’t protect privacy — much of what we want from cybersecurity is to protect our privacy from government intrusion. This is nothing new, you’ve heard this privacy debate before. What I’m trying to show here is that the one-side view of privacy in this document demonstrates how it’s dominated by special interests.

Cybersecurity Framework

Action Item 1.4.2: All federal agencies should be required to use the Cybersecurity Framework. 

The “Cybersecurity Framework” is a bunch of nonsense that would require another long blogpost to debunk. It requires months of training and years of experience to understand. It contains things like “DE.CM-4: Malicious code is detected”, as if that’s a thing organizations are able to do.

All the while it ignores the most common cyber attacks (SQL/web injections, phishing, password reuse, DDoS). It’s a typical example where organizations spend enormous amounts of money following process while getting no closer to solving what the processes are attempting to solve. Federal agencies using the Cybersecurity Framework are no safer from my pentests than those who don’t use it.

It gets even crazier:

Action Item 1.5.1: The National Institute of Standards and Technology (NIST) should expand its support of SMBs in using the Cybersecurity Framework and should assess its cost-effectiveness specifically for SMBs.

Small businesses can’t even afford to read the “Cybersecurity Framework”. Simply reading the doc and trying to understand it would exceed their entire IT/computer budget for the year. It would take a high-priced consultant earning $500/hour to tell them that “DE.CM-4: Malicious code is detected” means “buy antivirus and keep it up to date”.

Software liability is a hoax invented by the Chinese to make our IoT less competitive

Action Item 2.1.3: The Department of Justice should lead an interagency study with the Departments of Commerce and Homeland Security and work with the Federal Trade Commission, the Consumer Product Safety Commission, and interested private sector parties to assess the current state of the law with regard to liability for harm caused by faulty IoT devices and provide recommendations within 180 days. 

For over a decade, leftists in the cybersecurity industry have been pushing the concept of “software liability”. Every time there is a major new development in hacking, such as the worms around 2003, they come out with documents explaining why there’s a “market failure” and that we need liability to punish companies to fix the problem. Then the problem is fixed, without software liability, and the leftists wait for some new development to push the theory yet again.

It’s especially absurd for the IoT marketspace. The harm, as they imagine it, is DDoS. But the majority of devices in Mirai were sold by non-US companies to non-US customers. There’s no way US regulations can stop that.

What US regulations will stop is IoT innovation in the United States. Regulations are so burdensome, and liability lawsuits so punishing, that they will kill all innovation within the United States. If you want to get rich with a clever IoT Kickstarter project, forget about it: your entire development budget will go to cybersecurity. The only companies that will be able to afford to ship IoT products in the United States will be large industrial concerns like GE that can afford the overhead of regulation/liability.

Liability is a left-wing policy issue, not one supported by technical analysis. Software liability has proven to be immaterial in any past problem and current proponents are distorting the IoT market to promote it now.

Cybersecurity workforce

Action Item 4.1.1: The next President should initiate a national cybersecurity workforce program to train 100,000 new cybersecurity practitioners by 2020. 

The problem in our industry isn’t the lack of “cybersecurity practitioners”, but the overabundance of “insecurity practitioners”.

Take “SQL injection” as an example. It’s been the most common way hackers break into websites for 15 years. It happens because programmers, the people building web-apps, blindly paste input into SQL queries. They do that because they’ve been trained to do it that way. All the textbooks on how to build webapps teach them this. All the examples show them this.
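To make that anti-pattern concrete, here is a minimal sketch using the sqlite3 command-line tool; the table and the attacker input are invented for illustration:

# Set up a toy table (illustrative only).
sqlite3 demo.db "CREATE TABLE users (name TEXT); INSERT INTO users VALUES ('alice');"

# Attacker-controlled input.
NAME="x' OR '1'='1"

# The vulnerable pattern: input pasted straight into the query string. The
# injected OR clause makes the WHERE condition always true, so the query
# returns every row instead of no rows.
sqlite3 demo.db "SELECT * FROM users WHERE name = '$NAME';"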

So you have government programs on one hand pushing tech education, teaching kids to build web-apps with SQL injection. Then you propose to train a second group of people to fix the broken stuff the first group produced.

The solution to SQL/website injections is not more practitioners, but stopping programmers from creating the problems in the first place. The solution to phishing is to use the tools already built into Windows and networks that sysadmins use, not adding new products/practitioners. These are the two most common problems, and they happen not because of a lack of cybersecurity practitioners, but because of the lack of cybersecurity as part of normal IT/computing.

I point this out to demonstrate yet again that the document was written by policy people with little or no technical understanding of the problem.

Nutritional label

Action Item 3.1.1: To improve consumers’ purchasing decisions, an independent organization should develop the equivalent of a cybersecurity “nutritional label” for technology products and services—ideally linked to a rating system of understandable, impartial, third-party assessment that consumers will intuitively trust and understand. 

This can’t be done. Grab some IoT devices, like my thermostat, my car, or a Xiongmai security camera used in the Mirai botnet. These devices are so complex that no “nutritional label” can be made for them.

One of the things you’d like to know is all the software dependencies, so that if there’s a bug in OpenSSL, for example, then you know your device is vulnerable. Unfortunately, that requires a nutritional label with 10,000 items on it.
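Even a crude inventory of a single extracted firmware image shows how long that ingredients list would get. A sketch, assuming you have already unpacked the firmware to a local directory (the paths are hypothetical):

# Count the shared libraries in one unpacked firmware image.
find ./firmware -name '*.so*' | wc -l

# Look for an embedded OpenSSL version string in the image's libraries.
strings ./firmware/lib/libssl.so* | grep -m1 -i 'OpenSSL [0-9]'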

Or, one thing you’d want to know is that the device has no backdoor passwords. But that would miss the Xiongmai devices. The web service has no backdoor passwords. If you caught the Telnet backdoor password and removed it, then you’d miss the special secret backdoor that hackers would later reverse engineer.

This is a policy position chasing a non-existent technical issue, pushed by Peiter Zatko, who has gotten hundreds of thousands of dollars from government grants to push the issue. It’s his way of getting rich and has nothing to do with sound policy.

Cyberczars and ambassadors

Various recommendations call for the appointment of CISOs, an Assistant to the President for Cybersecurity, and an Ambassador for Cybersecurity. But nowhere does it mention these should be technical posts. This is like appointing a Surgeon General who is not a doctor.

Government’s problems with cybersecurity stem from the way technical knowledge is so disrespected. The current cyberczar prides himself on his lack of technical knowledge, because that helps him see the bigger picture.

Ironically, many of the other Action Items are about training cybersecurity practitioners, employees, and managers. None of this can happen as long as leadership is clueless. Technical details matter, as I show above with the Mirai botnet. Subtlety and nuance in technical details can call for opposite policy responses.

Conclusion

This document is promoted as being written by technical experts. However, nothing in the document is neutral technical expertise. Instead, it’s almost entirely a policy document dominated by special interests and left-wing politics. In many places it makes recommendations to the incoming Republican president. His response should be to round-file it immediately.

I only chose a few items, as this blogpost is long enough as it is. I could pick almost any of the 53 Action Items to demonstrate how they are policy- and special-interest-driven rather than reflections of technical expertise.

A security update for Raspbian PIXEL

Post Syndicated from Simon Long original https://www.raspberrypi.org/blog/a-security-update-for-raspbian-pixel/

The more observant among you may have spotted that we’ve recently updated the Raspbian-with-PIXEL image available from Downloads. With any major release of the OS, we usually find a few small bugs and other issues as soon as the wider community start using it, and so we gather up the fixes and produce a 1.1 release a few weeks later. We don’t make a fuss about these bug fix releases, as there’s no new functionality; these are just fixes to make things work as originally intended.

However, in this case, we’ve made a couple of important changes. They won’t be noticed by many users, but to those who do notice them and who will be affected by them, we should explain ourselves!

Why have we changed things?

Anyone following tech media over the last few months will have seen the stories about botnets running on Internet of Things devices. Hackers are using the default passwords on webcams and the like to create a network capable of sending enough requests to a website to cause it to grind to a halt.


With the Pi, we’ve always tried to keep it as open as possible. We provide a default user account with a default password, and this account can use sudo to control or modify anything without a password; this makes life much easier for beginners. We also have an open SSH port by default, so that people who are using a Pi remotely can just install the latest Raspbian image, plug it in, and control their Pi with no configuration required; again, this makes life easier.

Unfortunately, hackers are increasingly exploiting loopholes such as these in other products to enable them to invisibly take control of devices. In general, this has not been a problem for Pis. If a Pi is on a private network in your home, it’s unlikely that an attacker can reach it; if you’re putting a Pi on a public network, we’ve hoped that you know enough about the issues involved to change the default password or turn off SSH.

But the threat of hacking has now got to the point where we can see that we need to change our approach. Much as we hate to impose restrictions on users, we would also hate for our relatively relaxed approach to security to cause far worse problems. With this release, therefore, we’ve made a couple of small changes to improve security, which should be enough to make it extremely hard to hijack a Pi, while not making life too difficult for users.

What has changed?

First, from now on SSH will be disabled by default on our images. SSH (Secure SHell) is a networking protocol which allows you to remotely log into a Linux computer and control it from a remote command line. As mentioned above, many Pi owners use it to install a Pi headless (without screen or keyboard) and control it from another PC.

In the past, SSH was enabled by default, so people using their Pi headless could easily update their SD card to a new image. Switching SSH on or off has always required the use of raspi-config or the Raspberry Pi Configuration application, but to access those, you need a screen and keyboard connected to the Pi itself, which is not the case in headless applications. So we’ve provided a simple mechanism for enabling SSH before an image is booted.

The boot partition on a Pi should be accessible from any machine with an SD card reader, on Windows, Mac, or Linux. If you want to enable SSH, all you need to do is to put a file called ssh in the /boot/ directory. The contents of the file don’t matter: it can contain any text you like, or even nothing at all. When the Pi boots, it looks for this file; if it finds it, it enables SSH and then deletes the file. SSH can still be turned on or off from the Raspberry Pi Configuration application or raspi-config; this is simply an additional way to turn it on if you can’t easily run either of those applications.
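In practice that looks like this; the mount point varies by operating system, so the paths below are only typical examples:

# After flashing the SD card, mount it on another machine and create an
# empty file named "ssh" in the boot partition. On many Linux desktops
# the card auto-mounts under /media:
touch /media/$USER/boot/ssh

# On a Mac the boot partition usually appears at /Volumes/boot instead:
# touch /Volumes/boot/ssh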


The risk with an open SSH port is that someone can access it and log in; to do this, they need a user account and a password. Out of the box, all Raspbian installs have the default user account ‘pi’ with the password ‘raspberry’. If you’re enabling SSH, you should really change the password for the ‘pi’ user to prevent a hacker using the defaults. To encourage this, we’ve added warnings to the boot process. If SSH is enabled, and the password for the ‘pi’ user is still ‘raspberry’, you’ll see a warning message whenever you boot the Pi, whether to the desktop or the command line. We’re not enforcing password changes, but you’ll be warned whenever you boot if your Pi is potentially at risk.


Our hope is that these (relatively minor) changes will not cause too much inconvenience, but they will make it much harder for hackers to attack the Pi.

Is there anything I need to do to protect my Pi?

We should stress at this point that there’s no need to panic! We are not aware of Pis being used in botnets or being taken over in large numbers; your own Pi is almost certainly not currently hacked.

It’s still good practice to protect yourself to avoid problems in future. We therefore suggest that you use the Raspberry Pi Configuration application or raspi-config to disable SSH if you’re not using it, and also change the password for the ‘pi’ user if it’s still ‘raspberry’.

To change the password, you can either press the ‘Change Password’ button in Raspberry Pi Configuration, or type passwd at the command line, and follow the prompts.


This issue has caused quite a lot of discussion at Pi Towers. The relaxed approach we’ve taken thus far has been for very good reasons, and we’re reluctant to change it. However, we feel that these changes are necessary to protect our users from potential threats now and in the future, and we hope you can understand our reasoning.

How do I get the updates?

The latest Raspbian with PIXEL image is available from the Downloads page on our website now. Note that the uncompressed image is over 4GB in size, and some older unzippers will fail to decompress it properly. If you have problems, use 7-Zip on Windows and The Unarchiver on Mac; both are free applications which have been tested and will decompress the file correctly.

To update your existing Jessie image with all the bug fixes and these new security changes, type the following at the command line:

# Refresh the package lists
sudo apt-get update
# Install all available updates, including these security changes
sudo apt-get dist-upgrade
# Install the package that provides the new password warnings
sudo apt-get install -y pprompt

and then reboot.

The post A security update for Raspbian PIXEL appeared first on Raspberry Pi.

Regulation of the Internet of Things

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2016/11/regulation_of_t.html

Late last month, popular websites like Twitter, Pinterest, Reddit and PayPal went down for most of a day. The distributed denial-of-service attack that caused the outages, and the vulnerabilities that made the attack possible, was as much a failure of market and policy as it was of technology. If we want to secure our increasingly computerized and connected world, we need more government involvement in the security of the “Internet of Things” and increased regulation of what are now critical and life-threatening technologies. It’s no longer a question of if, it’s a question of when.

First, the facts. Those websites went down because their domain name provider — a company named Dyn — was forced offline. We don’t know who perpetrated that attack, but it could have easily been a lone hacker. Whoever it was launched a distributed denial-of-service attack against Dyn by exploiting a vulnerability in large numbers — possibly millions — of Internet-of-Things devices like webcams and digital video recorders, then recruiting them all into a single botnet. The botnet bombarded Dyn with traffic, so much that it went down. And when it went down, so did dozens of websites.

Your security on the Internet depends on the security of millions of Internet-enabled devices, designed and sold by companies you’ve never heard of to consumers who don’t care about your security.

The technical reason these devices are insecure is complicated, but there is a market failure at work. The Internet of Things is bringing computerization and connectivity to many tens of millions of devices worldwide. These devices will affect every aspect of our lives, because they’re things like cars, home appliances, thermostats, light bulbs, fitness trackers, medical devices, smart streetlights and sidewalk squares. Many of these devices are low-cost, designed and built offshore, then rebranded and resold. The teams building these devices don’t have the security expertise we’ve come to expect from the major computer and smartphone manufacturers, simply because the market won’t stand for the additional costs that would require. These devices don’t get security updates like our more expensive computers, and many don’t even have a way to be patched. And, unlike our computers and phones, they stay around for years and decades.

An additional market failure illustrated by the Dyn attack is that neither the seller nor the buyer of those devices cares about fixing the vulnerability. The owners of those devices don’t care. They wanted a webcam — or thermostat, or refrigerator — with nice features at a good price. Even after they were recruited into this botnet, they still work fine — you can’t even tell they were used in the attack. The sellers of those devices don’t care: They’ve already moved on to selling newer and better models. There is no market solution because the insecurity primarily affects other people. It’s a form of invisible pollution.

And, like pollution, the only solution is to regulate. The government could impose minimum security standards on IoT manufacturers, forcing them to make their devices secure even though their customers don’t care. They could impose liabilities on manufacturers, allowing companies like Dyn to sue them if their devices are used in DDoS attacks. The details would need to be carefully scoped, but either of these options would raise the cost of insecurity and give companies incentives to spend money making their devices secure.

It’s true that this is a domestic solution to an international problem and that there’s no U.S. regulation that will affect, say, an Asian-made product sold in South America, even though that product could still be used to take down U.S. websites. But the main costs in making software come from development. If the United States and perhaps a few other major markets implement strong Internet-security regulations on IoT devices, manufacturers will be forced to upgrade their security if they want to sell to those markets. And any improvements they make in their software will be available in their products wherever they are sold, simply because it makes no sense to maintain two different versions of the software. This is truly an area where the actions of a few countries can drive worldwide change.

Regardless of what you think about regulation vs. market solutions, I believe there is no choice. Governments will get involved in the IoT, because the risks are too great and the stakes are too high. Computers are now able to affect our world in a direct and physical manner.

Security researchers have demonstrated the ability to remotely take control of Internet-enabled cars. They’ve demonstrated ransomware against home thermostats and exposed vulnerabilities in implanted medical devices. They’ve hacked voting machines and power plants. In one recent paper, researchers showed how a vulnerability in smart light bulbs could be used to start a chain reaction, resulting in them all being controlled by the attackers — that’s every one in a city. Security flaws in these things could mean people dying and property being destroyed.

Nothing motivates the U.S. government like fear. Remember 2001? A small-government Republican president created the Department of Homeland Security in the wake of the 9/11 terrorist attacks: a rushed and ill-thought-out decision that we’ve been trying to fix for more than a decade. A fatal IoT disaster will similarly spur our government into action, and it’s unlikely to be well-considered and thoughtful action. Our choice isn’t between government involvement and no government involvement. Our choice is between smarter government involvement and stupider government involvement. We have to start thinking about this now. Regulations are necessary, important and complex — and they’re coming. We can’t afford to ignore these issues until it’s too late.

In general, the software market demands that products be fast and cheap and that security be a secondary consideration. That was okay when software didn’t matter — it was okay that your spreadsheet crashed once in a while. But a software bug that literally crashes your car is another thing altogether. The security vulnerabilities in the Internet of Things are deep and pervasive, and they won’t get fixed if the market is left to sort it out for itself. We need to proactively discuss good regulatory solutions; otherwise, a disaster will impose bad ones on us.

This essay previously appeared in the Washington Post.

Lessons From the Dyn DDoS Attack

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2016/11/lessons_from_th_5.html

A week ago Friday, someone took down numerous popular websites in a massive distributed denial-of-service (DDoS) attack against the domain name provider Dyn. DDoS attacks are neither new nor sophisticated. The attacker sends a massive amount of traffic, causing the victim’s system to slow to a crawl and eventually crash. There are more or less clever variants, but basically, it’s a datapipe-size battle between attacker and victim. If the defender has a larger capacity to receive and process data, he or she will win. If the attacker can throw more data than the victim can process, he or she will win.

The attacker can build a giant data cannon, but that’s expensive. It is much smarter to recruit millions of innocent computers on the internet. This is the “distributed” part of the DDoS attack, and pretty much how it’s worked for decades. Cybercriminals infect innocent computers around the internet and recruit them into a botnet. They then target that botnet against a single victim.

You can imagine how it might work in the real world. If I can trick tens of thousands of others to order pizzas to be delivered to your house at the same time, I can clog up your street and prevent any legitimate traffic from getting through. If I can trick many millions, I might be able to crush your house from the weight. That’s a DDoS attack — it’s simple brute force.

As you’d expect, DDoSers have various motives. The attacks started out as a way to show off, then quickly transitioned to a method of intimidation — or a way of just getting back at someone you didn’t like. More recently, they’ve become vehicles of protest. In 2013, the hacker group Anonymous petitioned the White House to recognize DDoS attacks as a legitimate form of protest. Criminals have used these attacks as a means of extortion, although one group found that just the fear of attack was enough. Military agencies are also thinking about DDoS as a tool in their cyberwar arsenals. A 2007 DDoS attack against Estonia was blamed on Russia and widely called an act of cyberwar.

The DDoS attack against Dyn two weeks ago was nothing new, but it illustrated several important trends in computer security.

These attack techniques are broadly available. Fully capable DDoS attack tools are available for free download. Criminal groups offer DDoS services for hire. The particular attack technique used against Dyn was first used a month earlier. It’s called Mirai, and since the source code was released four weeks ago, over a dozen botnets have incorporated the code.

The Dyn attacks were probably not originated by a government. The perpetrators were most likely hackers mad at Dyn for helping Brian Krebs identify — and the FBI arrest — two Israeli hackers who were running a DDoS-for-hire ring. Recently I have written about probing DDoS attacks against internet infrastructure companies that appear to be perpetrated by a nation-state. But, honestly, we don’t know for sure.

This is important. Software spreads capabilities. The smartest attacker needs to figure out the attack and write the software. After that, anyone can use it. There’s not even much of a difference between government and criminal attacks. In December 2014, there was a legitimate debate in the security community as to whether the massive attack against Sony had been perpetrated by a nation-state with a $20 billion military budget or a couple of guys in a basement somewhere. The internet is the only place where we can’t tell the difference. Everyone uses the same tools, the same techniques and the same tactics.

These attacks are getting larger. The Dyn DDoS attack set a record at 1.2 Tbps. The previous record holder was the attack against cybersecurity journalist Brian Krebs a month prior at 620 Gbps. This is much larger than required to knock the typical website offline. A year ago, it was unheard of. Now it occurs regularly.

The botnets attacking Dyn and Brian Krebs consisted largely of unsecure Internet of Things (IoT) devices — webcams, digital video recorders, routers and so on. This isn’t new, either. We’ve already seen internet-enabled refrigerators and TVs used in DDoS botnets. But again, the scale is bigger now. In 2014, the news was hundreds of thousands of IoT devices — the Dyn attack used millions. Analysts expect the IoT to increase the number of things on the internet by a factor of 10 or more. Expect these attacks to similarly increase.

The problem is that these IoT devices are unsecure and likely to remain that way. The economics of internet security don’t trickle down to the IoT. Commenting on the Krebs attack last month, I wrote:

The market can’t fix this because neither the buyer nor the seller cares. Think of all the CCTV cameras and DVRs used in the attack against Brian Krebs. The owners of those devices don’t care. Their devices were cheap to buy, they still work, and they don’t even know Brian. The sellers of those devices don’t care: They’re now selling newer and better models, and the original buyers only cared about price and features. There is no market solution because the insecurity is what economists call an externality: It’s an effect of the purchasing decision that affects other people. Think of it kind of like invisible pollution.

To be fair, one company that made some of the unsecure things used in these attacks recalled its unsecure webcams. But this is more of a publicity stunt than anything else. I would be surprised if the company got many devices back. We already know that the reputational damage from having your unsecure software made public isn’t large and doesn’t last. At this point, the market still largely rewards sacrificing security in favor of price and time-to-market.

DDoS prevention works best deep in the network, where the pipes are the largest and the capability to identify and block the attacks is the most evident. But the backbone providers have no incentive to do this. They don’t feel the pain when the attacks occur and they have no way of billing for the service when they provide it. So they let the attacks through and force the victims to defend themselves. In many ways, this is similar to the spam problem. It, too, is best dealt with in the backbone, but similar economics dump the problem onto the endpoints.

We’re unlikely to get any regulation forcing backbone companies to clean up either DDoS attacks or spam, just as we are unlikely to get any regulations forcing IoT manufacturers to make their systems secure. This is me again:

What this all means is that the IoT will remain insecure unless government steps in and fixes the problem. When we have market failures, government is the only solution. The government could impose security regulations on IoT manufacturers, forcing them to make their devices secure even though their customers don’t care. They could impose liabilities on manufacturers, allowing people like Brian Krebs to sue them. Any of these would raise the cost of insecurity and give companies incentives to spend money making their devices secure.

That leaves the victims to pay. This is where we are in much of computer security. Because the hardware, software and networks we use are so unsecure, we have to pay an entire industry to provide after-the-fact security.

There are solutions you can buy. Many companies offer DDoS protection, although they’re generally calibrated to the older, smaller attacks. We can safely assume that they’ll up their offerings, although the cost might be prohibitive for many users. Understand your risks. Buy mitigation if you need it, but understand its limitations. Know the attacks are possible and will succeed if large enough. And the attacks are getting larger all the time. Prepare for that.

This essay previously appeared on the SecurityIntelligence website.

Five Mistakes Everyone Makes With Cloud Backup

Post Syndicated from Peter Cohen original https://www.backblaze.com/blog/5-common-cloud-backup-mistakes/


Cloud-based storage and file sync services are ubiquitous: Everywhere we turn new services pop up (and often shut down), promising free or low-cost storage of everything and anything on our computers and mobile devices.

When you depend on the cloud it’s very easy to get lulled into a false sense of security. Don’t. Here are five common mistakes all of us make with cloud backup and sync services. I’ve added suggestions for how to avoid these pitfalls.

Assuming the Cloud Is Backing Things Up

“I have iCloud or Google Drive, so everything’s backed up.”

Some cloud backup and file sync services make it really easy to put files online, but what they store may not be all the files you need. Don’t just assume the cloud services you use are doing a complete backup of your device – check to see what is actually being backed up. The services you use may only back up specific folders or directories on your computer’s hard drive.

Read this for more info on how Backblaze backs up.

There’s a big difference between file backup services and sync services, by the way. Which brings me to my next point:

Confusing Sync for Backup

“I don’t need backup. I’ve got my files synced.”

A sync service enables you to keep contents consistent between multiple devices – think Dropbox or iCloud Drive, for example. Make one change to the contents of that shared info, and the same thing happens across all devices, including file changes and deletions. Depending on how you have syncing and sharing set up, you can delete a file on one device and have it disappear on all the other shared devices.

I’ve also found it handy to have a backup service that enables you to restore multiple versions. In point of fact, Dropbox lets you restore previous versions. Apple’s Time Machine, built into the Mac, does this too. So does Backblaze (we keep track of multiple versions for up to 30 days). That’s not to say you shouldn’t use Dropbox – we do! We wrote about how we are complementary services.

Thinking One Backup Is Enough

“Hey, I’m backing up to the cloud. That’s better than nothing, right?”

It’s better than nothing but it’s not enough. You want a local backup too. That’s why I recommend a 3-2-1 Backup strategy. In addition to the “live” copy of the data on your hard drive, make sure you have a local backup, and use the cloud for offsite storage. Likewise, if you’re only storing data on a local backup, you’re putting all your eggs in that basket. Add offsite backup to complete your backup strategy. Conversely, if you only store your data in the cloud, you’re susceptible to those services being down as well. So having a local copy can keep you productive even if your favorite service is temporarily down.
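As a concrete sketch of the local leg of that 3-2-1 strategy, a scheduled rsync to an external drive covers the second copy; the paths and schedule below are illustrative:

# Mirror your documents to an external drive. -a preserves file metadata;
# --delete keeps the mirror exact, so pair it with a separate versioned
# offsite backup rather than relying on this copy alone.
rsync -a --delete "$HOME/Documents/" /mnt/backup-drive/Documents/

# Run it nightly via cron (add with `crontab -e`):
# 0 2 * * * rsync -a --delete "$HOME/Documents/" /mnt/backup-drive/Documents/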

Leaving Things Insecure

“I’m not backing up anything important enough for hackers to bother with.”

With identity theft on the rise, the security of all of your data online should be paramount. Strong encryption is important, so make sure it’s supported by the services you depend on.

Even if a bad actor doesn’t want your data, they still may want your computer for nefarious purposes, like driving a botnet used to launch a DDoS (Distributed Denial of Service) attack. That’s exactly what recently happened to Dyn, a company that provides core Internet services for other popular Internet services like Twitter and Spotify.

Make sure to protect your computer with strong passwords, practice safe surfing and keep your computer updated with the latest software. Also check periodically for malware and get rid of it when you find it.

Thinking That it’s Taken Care Of

“I have a backup strategy in place, so I don’t have to think about it anymore.”

I think it’s wise to observe an old aphorism: “Trust but verify.”

There’s absolutely nothing wrong with developing an automated backup strategy. But it’s vitally important to periodically test your backups to make sure they’re doing what they’re supposed to.

You should test your most important, mission-critical data first. Tax returns? Important legal documents? Irreplaceable baby pictures? Make sure the files that are important to you are retrievable and intact by actually trying to recover them. Find out more about how to test your backup.
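A simple way to do that spot check is to restore a file and compare checksums against the live copy; the file names here are only examples:

# Restore a file from your backup, then confirm it matches the original.
sha256sum "$HOME/Documents/taxes-2016.pdf" "$HOME/Restores/taxes-2016.pdf"

# Or compare an entire restored directory against the live one.
diff -r "$HOME/Documents" "$HOME/Restores/Documents"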

That applies to Backblaze too. Test all your backups – we even recommend it in our Best Practices.

Got more cloud backup myths to bust? Share them with me in the comments!


The post Five Mistakes Everyone Makes With Cloud Backup appeared first on Backblaze Blog | Cloud Storage & Cloud Backup.

Fixing the IoT isn’t going to be easy

Post Syndicated from Matthew Garrett original http://mjg59.dreamwidth.org/45098.html

A large part of the internet became inaccessible today after a botnet made up of IP cameras and digital video recorders was used to DoS a major DNS provider. This highlighted a bunch of things including how maybe having all your DNS handled by a single provider is not the best of plans, but in the long run there’s no real amount of diversification that can fix this – malicious actors have control of a sufficiently large number of hosts that they could easily take out multiple providers simultaneously.

To fix this properly we need to get rid of the compromised systems. The question is how. Many of these devices are sold by resellers who have no resources to handle any kind of recall. The manufacturer may not have any kind of legal presence in many of the countries where their products are sold. There’s no way anybody can compel a recall, and even if they could it probably wouldn’t help. If I’ve paid a contractor to install a security camera in my office, and if I get a notification that my camera is being used to take down Twitter, what do I do? Pay someone to come and take the camera down again, wait for a fixed one and pay to get that put up? That’s probably not going to happen. As long as the device carries on working, many users are going to ignore any voluntary request.

We’re left with more aggressive remedies. If ISPs threaten to cut off customers who host compromised devices, we might get somewhere. But, inevitably, a number of small businesses and unskilled users will get cut off. Probably a large number. The economic damage is still going to be significant. And it doesn’t necessarily help that much – if the US were to compel ISPs to do this, but nobody else did, public outcry would be massive, the botnet would not be much smaller and the attacks would continue. Do we start cutting off countries that fail to police their internet?

Ok, so maybe we just chalk this one up as a loss and have everyone build out enough infrastructure that we’re able to withstand attacks from this botnet and take steps to ensure that nobody is ever able to build a bigger one. To do that, we’d need to ensure that all IoT devices are secure, all the time. So, uh, how do we do that?

These devices had trivial vulnerabilities in the form of hardcoded passwords and open telnet. It wouldn’t take terribly strong skills to identify this at import time and block a shipment, so the “obvious” answer is to set up forces in customs who do a security analysis of each device. We’ll ignore the fact that this would be a pretty huge set of people to keep up with the sheer quantity of crap being developed and skip straight to the explanation for why this wouldn’t work.

Yeah, sure, this vulnerability was obvious. But what about the product from a well-known vendor that included a debug app listening on a high numbered UDP port that accepted a packet of the form “BackdoorPacketCmdLine_Req” and then executed the rest of the payload as root? A portscan’s not going to show that up[1]. Finding this kind of thing involves pulling the device apart, dumping the firmware and reverse engineering the binaries. It typically takes me about a day to do that. Amazon has over 30,000 listings that match “IP camera” right now, so you’re going to need 99 more of me and a year just to examine the cameras. And that’s assuming nobody ships any new ones.
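For contrast, the superficial check (the kind of scan that finds an open telnet port but would never surface a backdoor like the one above) is a one-liner; the address below is a placeholder for a device on your own network:

# The quick check: is telnet open on the device?
nmap -p 23 --open 192.168.1.50

# A full UDP sweep, by contrast, is throttled by ICMP rate limiting, and
# even a port reported open tells you nothing about what the service does
# with a crafted payload.
sudo nmap -sU -p 1-65535 192.168.1.50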

Even that’s insufficient. Ok, with luck we’ve identified all the cases where the vendor has left an explicit backdoor in the code[2]. But these devices are still running software that’s going to be full of bugs and which is almost certainly still vulnerable to at least half a dozen buffer overflows[3]. Who’s going to audit that? All it takes is one attacker to find one flaw in one popular device line, and that’s another botnet built.

If we can’t stop the vulnerabilities getting into people’s homes in the first place, can we at least fix them afterwards? From an economic perspective, demanding that vendors ship security updates whenever a vulnerability is discovered no matter how old the device is is just not going to work. Many of these vendors are small enough that it’d be more cost effective for them to simply fold the company and reopen under a new name than it would be to put the engineering work into fixing a decade old codebase. And how does this actually help? So far the attackers building these networks haven’t been terribly competent. The first thing a competent attacker would do would be to silently disable the firmware update mechanism.

We can’t easily fix the already broken devices, we can’t easily stop more broken devices from being shipped and we can’t easily guarantee that we can fix future devices that end up broken. The only solution I see working at all is to require ISPs to cut people off, and that’s going to involve a great deal of pain. The harsh reality is that this is almost certainly just the tip of the iceberg, and things are going to get much worse before they get any better.

Right. I’m off to portscan another smart socket.

[1] UDP connection refused messages are typically ratelimited to one per second, so it’ll take almost a day to do a full UDP portscan, and even then you have no idea what the service actually does.

[2] It’s worth noting that this is usually leftover test or debug code, not an overtly malicious act. Vendors should have processes in place to ensure that this isn’t left in release builds, but ah well.

[3] My vacuum cleaner crashes if I send certain malformed HTTP requests to the local API endpoint, which isn’t a good sign.

comment count unavailable comments

Fixing the IoT isn’t going to be easy

Post Syndicated from Matthew Garrett original https://mjg59.dreamwidth.org/45098.html

A large part of the internet became inaccessible today after a botnet made up of IP cameras and digital video recorders was used to DoS a major DNS provider. This highlighted a bunch of things including how maybe having all your DNS handled by a single provider is not the best of plans, but in the long run there’s no real amount of diversification that can fix this – malicious actors have control of a sufficiently large number of hosts that they could easily take out multiple providers simultaneously.

To fix this properly we need to get rid of the compromised systems. The question is how. Many of these devices are sold by resellers who have no resources to handle any kind of recall. The manufacturer may not have any kind of legal presence in many of the countries where their products are sold. There’s no way anybody can compel a recall, and even if they could it probably wouldn’t help. If I’ve paid a contractor to install a security camera in my office, and if I get a notification that my camera is being used to take down Twitter, what do I do? Pay someone to come and take the camera down again, wait for a fixed one and pay to get that put up? That’s probably not going to happen. As long as the device carries on working, many users are going to ignore any voluntary request.

We’re left with more aggressive remedies. If ISPs threaten to cut off customers who host compromised devices, we might get somewhere. But, inevitably, a number of small businesses and unskilled users will get cut off. Probably a large number. The economic damage is still going to be significant. And it doesn’t necessarily help that much – if the US were to compel ISPs to do this, but nobody else did, public outcry would be massive, the botnet would not be much smaller and the attacks would continue. Do we start cutting off countries that fail to police their internet?

Ok, so maybe we just chalk this one up as a loss and have everyone build out enough infrastructure that we’re able to withstand attacks from this botnet and take steps to ensure that nobody is ever able to build a bigger one. To do that, we’d need to ensure that all IoT devices are secure, all the time. So, uh, how do we do that?

These devices had trivial vulnerabilities in the form of hardcoded passwords and open telnet. It wouldn’t take terribly strong skills to identify this at import time and block a shipment, so the “obvious” answer is to set up a force in customs that does a security analysis of each device. We’ll ignore the fact that keeping up with the sheer quantity of crap being developed would take a huge number of people, and skip straight to the explanation of why this wouldn’t work.

Yeah, sure, this vulnerability was obvious. But what about the product from a well-known vendor that included a debug app listening on a high numbered UDP port that accepted a packet of the form “BackdoorPacketCmdLine_Req” and then executed the rest of the payload as root? A portscan’s not going to show that up[1]. Finding this kind of thing involves pulling the device apart, dumping the firmware and reverse engineering the binaries. It typically takes me about a day to do that. Amazon has over 30,000 listings that match “IP camera” right now, so you’re going to need 99 more of me and a year just to examine the cameras. And that’s assuming nobody ships any new ones.
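
For illustration, here is roughly what probing for that specific backdoor looks like once the magic string has been recovered from the firmware. This is a hedged sketch: the address and port below are placeholders, since the real service sat on some high-numbered UDP port that varied by product:

    import socket

    MAGIC = b"BackdoorPacketCmdLine_Req"

    def probe(host, port, command=b"id"):
        """Send the magic prefix plus a harmless command; any reply at all
        means something on the far end is parsing the backdoor format."""
        sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        sock.settimeout(2.0)
        try:
            sock.sendto(MAGIC + command, (host, port))
            data, _ = sock.recvfrom(4096)
            return data
        except socket.timeout:
            return None
        finally:
            sock.close()

    # Placeholder address and port, for illustration only.
    print(probe("192.168.0.23", 32764))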

Even that’s insufficient. Ok, with luck we’ve identified all the cases where the vendor has left an explicit backdoor in the code[2]. But these devices are still running software that’s going to be full of bugs and which is almost certainly still vulnerable to at least half a dozen buffer overflows[3]. Who’s going to audit that? All it takes is one attacker to find one flaw in one popular device line, and that’s another botnet built.

If we can’t stop the vulnerabilities getting into people’s homes in the first place, can we at least fix them afterwards? From an economic perspective, demanding that vendors ship security updates whenever a vulnerability is discovered, no matter how old the device, is just not going to work. Many of these vendors are small enough that it’d be more cost-effective for them to simply fold the company and reopen under a new name than to put the engineering work into fixing a decade-old codebase. And how does this actually help? So far the attackers building these networks haven’t been terribly competent. The first thing a competent attacker would do is silently disable the firmware update mechanism.

We can’t easily fix the already broken devices, we can’t easily stop more broken devices from being shipped and we can’t easily guarantee that we can fix future devices that end up broken. The only solution I see working at all is to require ISPs to cut people off, and that’s going to involve a great deal of pain. The harsh reality is that this is almost certainly just the tip of the iceberg, and things are going to get much worse before they get any better.

Right. I’m off to portscan another smart socket.

[1] The ICMP port-unreachable messages that signal a closed UDP port are typically rate-limited to one per second, so a full UDP portscan of all 65,536 ports takes the better part of a day, and even then you have no idea what the service actually does.

[2] It’s worth noting that this is usually leftover test or debug code rather than an overtly malicious act. Vendors should have processes in place to ensure that it doesn’t end up in release builds, but ah well.

[3] My vacuum cleaner crashes if I send certain malformed HTTP requests to the local API endpoint, which isn’t a good sign.

Some notes on today’s DNS DDoS

Post Syndicated from Robert Graham original http://blog.erratasec.com/2016/10/some-notes-on-todays-dns-ddos.html

Some notes on today’s DNS outages due to DDoS.

We lack details. As a techy, I want to know the composition of the traffic. Is it blindly overflowing the incoming links with junk traffic? Or is it cleverly sending valid DNS requests, overloading the servers’ ability to respond and overflowing the outgoing links? (Responses are five times or more the size of requests, so every gigabit per second of inbound queries can force five gigabits per second of outbound replies.) Such techy details and more make a big difference. Was Dyn the only target? Why were non-Dyn customers affected?

Nothing to do with the IANA handover. So this post blames Obama for handing control of DNS to the Russians, or some such. It’s silly, and there’s not a shred of truth to it. For the record, I’m (or was) a Republican and opposed handing over the IANA. But the handover was a symbolic transition of a minor clerical function to a body that isn’t anything like the U.N. The handover has nothing to do with either Obama or today’s DDoS. There’s no reason to blame this on Obama, other than the general reason that he’s to blame for everything bad that happened in the last 8 years.

It’s not a practice attack. A Bruce Schneier post created the idea of hackers doing “practice” DDoS attacks. That’s not how things work. Using a botnet for DDoS always degrades it, as owners of machines find the infections and remove them. The people getting the most practice are the defenders, who learn more from the incident than the attackers do.

It’s not practice for Nov. 8. I tweeted a possible connection to the election because I thought it’d be self-evidently a troll, but a lot of good, intelligent, well-meaning people took it seriously. A functioning Internet is not involved in counting the votes anywhere, so it’s hard to see how any Internet attack can “rig” the election. DDoSing news sources like CNN might be fun — a blackout of news might make some people go crazy and riot in the streets. Imagine if Twitter went down while people were voting. With this said, we may see DDoS anyway — lots of kids control large botnets, so it may happen on election day because they can, not because it changes anything.

Dyn stupidly uses BIND. According to “version.bind” queries, Dyn (the big DNS provider that is a major target) uses BIND. This is the most popular DNS server software, but it’s the wrong choice. It’s 10x to 100x slower than alternatives, meaning Dyn needs up to 100x more server hardware to deal with DDoS attacks. BIND is also 10x more complex: it strives to be the reference implementation that contains all DNS features, rather than a simple bit of software that just handles this one case. BIND should never be used for Internet-facing DNS; packages like KnotDNS and NSD should be used instead.
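
Those “version.bind” queries are easy to reproduce. Here is a minimal sketch, assuming the third-party dnspython package is installed; the resolver address below is just a placeholder, and plenty of servers refuse to answer or deliberately lie:

    import dns.message
    import dns.query

    # Ask a nameserver what software it runs via the CHAOS-class
    # "version.bind" TXT record.
    query = dns.message.make_query("version.bind", "TXT", rdclass="CH")
    response = dns.query.udp(query, "8.8.8.8", timeout=2)  # placeholder resolver
    for rrset in response.answer:
        print(rrset)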

Fixing IoT. The persistent rumor is that an IoT botnet is being used. So everyone is calling for regulations to secure IoT devices. This is extraordinarily bad. First of all, most of the devices are made in China and shipped to countries other than the United States, so there’s little effect our regulations can have. What such regulations would do is essentially kill the Kickstarter community coming up with innovative IoT devices, because only very large corporations can afford the regulatory burden involved. Moreover, it’s unclear what “security” means. There’s no real bug/vulnerability being exploited here other than default passwords, something even the US government has at times refused to recognize as a security “vulnerability”.

Fixing IoT #2. People have come up with many ways default passwords might be solved, such as putting a sticker on the device with a randomly generated password. Getting the firmware to match a printed sticker during manufacturing is a hard, costly problem. Manufacturers do it all the time for other reasons, but it starts to become a burden for cheaper devices. In any event, the correct solution is doing initial configuration over Bluetooth, which seems to be the most popular approach these days, from WeMo to Echo. Most of the popular WiFi chips come with Bluetooth, so it’s really no burden to make devices this way.

It’s not IoT. The Mirai botnet primarily infected DVRs connected to security cameras. In other words, it didn’t infect baby monitors or other IoT devices inside your home, which are protected by your home firewall anyway. Instead, Mirai infected things out in the world that needed their own IP address.

DNS failures cause email failures. When DNS goes down, legitimate email gets reclassified as spam and dropped by spam filters, since the filters can no longer complete DNS-based checks such as SPF lookups.

It’s all about that TTL. You don’t contact a company’s DNS server directly. Instead, you contact your ISP’s cache. How long something stays in that cache is determined by what’s known as the TTL, or “time to live”. Long TTLs mean that if a company wants to move servers around, it has to wait until caches have finally aged out the old data. Short TTLs mean changes propagate quickly. Any company that had 24 hours as its TTL was mostly unaffected by the attack. Twitter has a TTL of 205 seconds, meaning it takes only a few minutes of DDoS against the DNS servers to knock Twitter offline. One strategy, which apparently Cisco OpenDNS uses, is to retain old records in its cache if it can’t fetch new ones, regardless of the TTL. Using their servers instead of your ISP’s can fix DNS DDoS for you.
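
That OpenDNS behavior is often called serving stale records. Here is a rough sketch of the idea; the cache below is a toy illustration with an invented upstream-lookup hook, not OpenDNS’s actual implementation:

    import time

    class StaleCache:
        """Toy resolver cache: honor TTLs normally, but fall back to an
        expired record when the upstream lookup fails (e.g. during a DDoS)."""

        def __init__(self):
            self._store = {}  # name -> (record, expiry timestamp)

        def put(self, name, record, ttl):
            self._store[name] = (record, time.time() + ttl)

        def get(self, name, upstream_lookup, default_ttl=300):
            record, expires = self._store.get(name, (None, 0.0))
            if record is not None and time.time() < expires:
                return record  # fresh cache hit
            try:
                fresh = upstream_lookup(name)  # may raise if the server is down
            except OSError:
                return record  # stale (or None), but better than nothing
            self.put(name, fresh, default_ttl)
            return fresh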

Why not use anycast?

The attack took down only east-coast operations, attacking only the part of Dyn’s infrastructure located there. Other DNS providers, such as Google’s famed 8.8.8.8 resolver, do not have a single location. They instead use anycasting, routing packets to one of many local servers in many locations rather than to a single server in one location. In other words, if you are in Australia and use Google’s 8.8.8.8 resolver, you’ll be sending requests to a server located in Australia, not to Google’s headquarters.

The problem with anycasting is it technically only works for UDP. That’s because each packet finds its own way through the Internet. Two packets sent back-to-back to 8.8.8.8 may, in fact, hit different servers. This makes it impossible to establish a TCP connection, which requires all packets be sent to the same server. Indeed, when I test it here at home, I get back different responses to the same DNS query done back-to-back to 8.8.8.8, hinting that my request is being handled by different servers.
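
You can reproduce that experiment with a few back-to-back queries. A small sketch, again assuming the third-party dnspython package (version 2.0 or later); differing TTLs or answer sets between consecutive responses hint that anycast handed the packets to different backend caches:

    import dns.resolver

    resolver = dns.resolver.Resolver(configure=False)
    resolver.nameservers = ["8.8.8.8"]

    seen = set()
    for _ in range(5):
        answer = resolver.resolve("example.com", "A")
        # Record the TTL and the answer set for each round trip.
        seen.add((answer.rrset.ttl, tuple(sorted(r.address for r in answer))))

    print(len(seen), "distinct (TTL, answers) pairs across 5 queries")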

Historically, DNS has used only UDP, so that hasn’t been a problem. It still isn’t a problem for “root servers”, which serve only simple responses. However, it’s becoming a problem for normal DNS servers, which give complex answers that can require multiple packets to hold a response. This is true for DNSSEC and for things like DKIM (email authentication). That TCP might sometimes fail therefore means things like email authentication sometimes fail. That it will probably work 99 times out of 100 just means it fails 1% of the time, which is unacceptable.

There are ways around this. An anycast system could handle UDP directly and pass all TCP to a centralized server somewhere, for example. This keeps UDP at maximum efficiency while still handling the uncommon TCP case correctly. The point is, though, that for Dyn to make anycast work requires careful thinking and engineering. It’s not a simple answer.

Security Economics of the Internet of Things

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2016/10/security_econom_1.html

Brian Krebs is a popular reporter on the cybersecurity beat. He regularly exposes cybercriminals and their tactics, and consequently is regularly a target of their ire. Last month, he wrote about an online attack-for-hire service that resulted in the arrest of the two proprietors. In the aftermath, his site was taken down by a massive DDoS attack.

In many ways, this is nothing new. Distributed denial-of-service attacks are a family of attacks that cause websites and other Internet-connected systems to crash by overloading them with traffic. The “distributed” part means that other insecure computers on the Internet — sometimes in the millions — are recruited to a botnet to unwittingly participate in the attack. The tactics are decades old; DDoS attacks are perpetrated by lone hackers trying to be annoying, criminals trying to extort money, and governments testing their tactics. There are defenses, and there are companies that offer DDoS mitigation services for hire.

Basically, it’s a size vs. size game. If the attackers can cobble together a fire hose of data bigger than the defender’s capability to cope with, they win. If the defenders can increase their capability in the face of attack, they win.

What was new about the Krebs attack was both the massive scale and the particular devices the attackers recruited. Instead of using traditional computers for their botnet, they used CCTV cameras, digital video recorders, home routers, and other embedded computers attached to the Internet as part of the Internet of Things.

Much has been written about how the IoT is wildly insecure. In fact, the software used to attack Krebs was simple and amateurish. What this attack demonstrates is that the economics of the IoT mean that it will remain insecure unless government steps in to fix the problem. This is a market failure that can’t get fixed on its own.

Our computers and smartphones are as secure as they are because there are teams of security engineers working on the problem. Companies like Microsoft, Apple, and Google spend a lot of time testing their code before it’s released, and quickly patch vulnerabilities when they’re discovered. Those companies can support such teams because those companies make a huge amount of money, either directly or indirectly, from their software­ — and, in part, compete on its security. This isn’t true of embedded systems like digital video recorders or home routers. Those systems are sold at a much lower margin, and are often built by offshore third parties. The companies involved simply don’t have the expertise to make them secure.

Even worse, most of these devices don’t have any way to be patched. Even though the source code to the botnet that attacked Krebs has been made public, we can’t update the affected devices. Microsoft delivers security patches to your computer once a month. Apple does it just as regularly, but not on a fixed schedule. But the only way for you to update the firmware in your home router is to throw it away and buy a new one.

The security of our computers and phones also comes from the fact that we replace them regularly. We buy new laptops every few years. We get new phones even more frequently. This isn’t true for all of the embedded IoT systems. They last for years, even decades. We might buy a new DVR every five or ten years. We replace our refrigerator every 25 years. We replace our thermostat approximately never. Already the banking industry is dealing with the security problems of Windows 95 embedded in ATMs. This same problem is going to occur all over the Internet of Things.

The market can’t fix this because neither the buyer nor the seller cares. Think of all the CCTV cameras and DVRs used in the attack against Brian Krebs. The owners of those devices don’t care. Their devices were cheap to buy, they still work, and they don’t even know Brian. The sellers of those devices don’t care: they’re now selling newer and better models, and the original buyers only cared about price and features. There is no market solution because the insecurity is what economists call an externality: it’s an effect of the purchasing decision that affects other people. Think of it kind of like invisible pollution.

What this all means is that the IoT will remain insecure unless government steps in and fixes the problem. When we have market failures, government is the only solution. The government could impose security regulations on IoT manufacturers, forcing them to make their devices secure even though their customers don’t care. They could impose liabilities on manufacturers, allowing people like Brian Krebs to sue them. Any of these would raise the cost of insecurity and give companies incentives to spend money making their devices secure.

Of course, this would only be a domestic solution to an international problem. The Internet is global, and attackers can just as easily build a botnet out of IoT devices from Asia as from the United States. Long term, we need to build an Internet that is resilient against attacks like this. But that’s a long time coming. In the meantime, you can expect more attacks that leverage insecure IoT devices.

This essay previously appeared on Vice Motherboard.

Slashdot thread.

Here are some of the things that are vulnerable.

EDITED TO ADD (10/17): DARPA is looking for IoT-security ideas from the private sector.

Internet Disinformation Service for Hire

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2016/09/internet_disinf.html

Yet another leaked catalog of Internet attack services, this one specializing in disinformation:

But Aglaya had much more to offer, according to its brochure. For campaigns of eight to 12 weeks, costing €2,500 per day, the company promised to “pollute” internet search results and social networks like Facebook and Twitter “to manipulate current events.” For this service, which it labelled “Weaponized Information,” Aglaya offered “infiltration,” “ruse,” and “sting” operations to “discredit a target” such as an “individual or company.”

“[We] will continue to barrage information till it gains ‘traction’ & top 10 search results yield a desired results on ANY Search engine,” the company boasted as an extra “benefit” of this service.

Aglaya also offered censorship-as-a-service, or Distributed Denial of Service (DDoS) attacks, for only €600 a day, using botnets to “send dummy traffic” to targets, taking them offline, according to the brochure. As part of this service, customers could buy an add-on to “create false criminal charges against Targets in their respective countries” for a more costly €1 million.

[…]

Some of Aglaya’s offerings, according to experts who reviewed the document for Motherboard, are likely to be exaggerated or completely made-up. But the document shows that there are governments interested in these services, which means there will be companies willing to fill the gaps in the market and offer them.

iOS 9.3.5 Fixes Malware Exploit: Install It, but Backup First!

Post Syndicated from Peter Cohen original https://www.backblaze.com/blog/install-ios-935-backup-first/

Update your iPhone or iPad

Apple this week posted an important update to iOS 9. Version 9.3.5 is ready for download from Apple’s servers. Experts advise you to upgrade right away, since it closes a malware security vulnerability. It’s good advice, even if the risk is small for most of us. Make sure to back up your iPhone or iPad first, though. Here’s info on what’s going on and how to protect yourself.

First of all, if there’s one takeaway from the story: Don’t click on a link unless you know what’s inside.

A human rights dissident named Ahmed Mansoor received a strange text message on his iPhone encouraging him to click a link. He didn’t. Instead Mansoor reported it to a security research firm. Researchers think that it was a sophisticated effort to take over Mansoor’s iPhone. The code belongs to a company that makes government spyware.

The security group alerted Apple and didn’t publicize the break until after Apple issued the patch.

What is a zero-day exploit?

The original report says that hackers have combined “zero-day” exploits in iOS to remotely jailbreak a targeted iPhone. “Zero-day” is security shorthand for “this problem hadn’t been patched by the software maker when it was found.” (“Jailbreaking” enables software to be installed on the iPhone that Apple doesn’t allow otherwise.)

The folks at Macworld have the skinny on what’s going on here:

When used together, the exploits allow someone to hijack an iOS device and control or monitor it remotely. Hijackers would have access to the device’s camera and microphone, and could capture audio calls even in otherwise end-to-end secured apps like WhatsApp. They could also grab stored images, track movements, and retrieve files.

It’s important to understand that according to the security experts who have examined the code, this particular exploit was targeted to be delivered to a specific person. It was only his general awareness and suspicion something was wrong that prevented the software from getting installed.

In other words, this isn’t like a phishing or a botnet scheme, where the goal is to get as many devices infected as possible. Instead, this was an effort by government actors to target a specific individual.

That means the risk that you might be exploited by this particular problem is relatively low, unless you’re in the crosshairs of a government agency (or unless malware programmers figure out another way to exploit this particular threat).

Risk, low. So don’t panic. But the threat is still there, which is why Apple’s pushed the 9.3.5 update and is recommending that everyone who’s running iOS 9 install it promptly.

Should I back up before I update?

Yes! Back up before you make any fundamental changes to your device. It’s a good idea to make sure any essential information is stored safely somewhere else.

You can back up your device using iTunes on the Mac or PC. Use the Lightning cable that came with your iOS device to connect it to an open USB port, then open iTunes to back up. You can also use iCloud Backup, Apple’s cloud-based backup service. Here’s a guide to backing up your iPhone or iPad with more details.

In general, we advocate the 3-2-1 Backup Strategy. Always have three copies of your data. One is your local copy (the data on your iPhone or iPad, for example). The second is a local backup (using iTunes on your Mac or PC, for example). The third is a copy stored safely offsite, in case anything happens – the cloud, for example.

iCloud Backup is one option for offsite backup. Backblaze will back up your computer’s iTunes files, including that local backup. So if you back up to your computer, then use Backblaze, you’re all set.

How do I update my iPhone or iPad?

Follow these steps to make sure iOS 9.3.5 is installed on your device. First of all, make sure your device is charged and connected to a Wi-Fi network.

  1. Tap the iPhone or iPad’s Home button.
  2. Tap Settings.
  3. Tap General.
  4. Tap Software Update.
  5. The device will check for an update; if one is available, tap Download and Install. Your device will automatically restart after it’s done.

How to Update to 9.3.5

What do I do now?

Breathe a sigh of relief now that you have iOS 9.3.5 installed. At least until the next security exploit pops up. Just make sure to back up early and often, so that your device and your data are safe.

The post iOS 9.3.5 Fixes Malware Exploit: Install It, but Backup First! appeared first on Backblaze Blog | Cloud Storage & Cloud Backup.

UFONet – Open Redirect DDoS Tool

Post Syndicated from Darknet original http://feedproxy.google.com/~r/darknethackers/~3/Hyv3xrsVqxg/

UFONet is an open-redirect DDoS tool designed to launch attacks against a target, using insecure redirects in third-party web applications as a botnet. Obviously, it’s for testing purposes only. The tool abuses OSI Layer 7 (HTTP) to create and manage “zombies” and to conduct different attacks using GET/POST requests, multi-threading, proxies, origin spoofing, and more.

Read the full post at darknet.org.uk