Tag Archives: Phishing

New Technique to Hijack Social Media Accounts

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2017/06/new_technique_t.html

Access Now has documented it being used against a Twitter user, but it also works against other social media accounts:

With the Doubleswitch attack, a hijacker takes control of a victim’s account through one of several attack vectors. People who have not enabled an app-based form of multifactor authentication for their accounts are especially vulnerable. For instance, an attacker could trick you into revealing your password through phishing. If you don’t have multifactor authentication, you lack a secondary line of defense. Once in control, the hijacker can then send messages and also subtly change your account information, including your username. The original username for your account is now available, allowing the hijacker to register for an account using that original username, while providing different login credentials.

Three news stories.

Spear Phishing Attacks

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2017/06/spear_phishing_.html

Really interesting research: “Unpacking Spear Phishing Susceptibility,” by Zinaida Benenson, Freya Gassmann, and Robert Landwirth.

Abstract: We report the results of a field experiment where we sent to over 1200 university students an email or a Facebook message with a link to (non-existing) party pictures from a non-existing person, and later asked them about the reasons for their link clicking behavior. We registered a significant difference in clicking rates: 20% of email versus 42.5% of Facebook recipients clicked. The most frequently reported reason for clicking was curiosity (34%), followed by the explanations that the message fit recipient’s expectations (27%). Moreover, 16% thought that they might know the sender. These results show that people’s decisional heuristics are relatively easy to misuse in a targeted attack, making defense especially challenging.

Black Hat presentation on the research.

Tainted Leaks

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2017/05/tainted_leaks.html

Last year, I wrote about the potential for doxers to alter documents before they leaked them. It was a theoretical threat when I wrote it, but now Citizen Lab has documented this technique in the wild:

This report describes an extensive Russia-linked phishing and disinformation campaign. It provides evidence of how documents stolen from a prominent journalist and critic of Russia were tampered with and then “leaked” to achieve specific propaganda aims. We name this technique “tainted leaks.” The report illustrates how the twin strategies of phishing and tainted leaks are sometimes used in combination to infiltrate civil society targets, and to seed mistrust and disinformation. It also illustrates how domestic considerations, specifically concerns about regime security, can motivate espionage operations, particularly those targeting civil society.

Some notes on Trump’s cybersecurity Executive Order

Post Syndicated from Robert Graham original http://blog.erratasec.com/2017/05/some-notes-on-trumps-cybersecurity.html

President Trump has finally signed an executive order on “cybersecurity”. The first draft during his first weeks in power was hilariously ignorant. The current draft, though, is pretty reasonable as such things go. I’m just reading the plain language of the draft as a cybersecurity expert, picking out the bits that interest me. In reality, there’s probably all sorts of politics in the background that I’m missing, so I may be wildly off-base.

Holding managers accountable

This is a great idea in theory. But government heads are rarely accountable for anything, so it’s hard to see whether they’ll have the nerve to implement this in practice. When the next breach happens, we’ll see if anybody gets fired.

“antiquated and difficult to defend Information Technology”

The government sometimes uses laughably old computers. Forces in government want to upgrade them. This won’t work. Instead of replacing old computers, the budget will simply be used to add new computers. The old computers will still stick around.

“Legacy” is a problem that money can’t solve. Programmers know how to build small things, but not big things. Everything starts out small, then becomes big gradually over time through constant small additions. What you have now are big legacy systems. Attempts to replace a big system with a built-from-scratch big system will fail, because engineers don’t know how to build big systems. This will suck down any amount of budget you have with failed multi-million-dollar projects.

It’s not the antiquated systems that are usually the problem, but the more modern systems. Antiquated systems can usually be protected by simply sticking a firewall or proxy in front of them.

“address immediate unmet budgetary needs necessary to manage risk”

Nobody cares about cybersecurity. Instead, it’s a thing people exploit in order to increase their budget. Instead of doing the best security with the budget they have, they insist they can’t secure the network without more money.

An alternate way to address gaps in cybersecurity is instead to do less. Reduce exposure to the web, provide fewer services, reduce functionality of desktop computers, and so on. Insisting that more money is the only way to address unmet needs is the strategy of the incompetent.

Use the NIST framework

Probably the biggest thing in the EO is that it forces everyone to use the NIST cybersecurity framework.

The NIST Framework simply documents all the things that organizations commonly do to secure themselves, such as running intrusion-detection systems or imposing rules for good passwords.

There are two problems with the NIST Framework. The first is that no organization does all the things listed. The second is that many organizations don’t do the things well.

Password rules are a good example. Organizations typically had bad rules, such as frequent changes and complexity standards, so the NIST Framework documented them. But cybersecurity experts have long opposed those complex rules, and have been fighting NIST on them.

Another good example is intrusion-detection. These days, I scan the entire Internet, setting off everyone’s intrusion-detection systems. I can see first hand that they are doing intrusion-detection wrong. The NIST Framework recommends intrusion-detection because many organizations do it, but it doesn’t demand they do it well.

When this EO forces everyone to follow the NIST Framework, then, it’s likely just going to increase the amount of money spent on cybersecurity without increasing effectiveness. That’s not necessarily a bad thing: while probably ineffective or counterproductive in the short run, there might be long-term benefit in aligning everyone to thinking about the problem the same way.

Note that “following” the NIST Framework doesn’t mean “doing” everything. Instead, it means documenting how you do each thing, the reason why you aren’t doing it, or (most often) your plan to eventually do it.
“preference for shared IT services for email, cloud, and cybersecurity”

Different departments are hostile toward each other, with each doing things its own way. The thinking goes that if more departments shared resources, they could cut costs with economies of scale. It would also stop the many home-grown wrong solutions that individual departments come up with.

In other words, there should be a single government GMail-type service that does e-mail both securely and reliably.

But it won’t turn out this way. Government does not have “economies of scale” but “incompetence at scale”. It means a single GMail-like service that is expensive, unreliable, and, in the end, probably insecure. It means we can look forward to government breaches that, instead of affecting one department, affect all departments.

Yes, you can point to individual organizations that do things poorly, but what you are ignoring is the organizations that do it well. When you make them all share a solution, it’s going to be the average of all these things — meaning those who do something well are going to move to a worse solution.

I suppose this was inserted in there so that big government cybersecurity companies can now walk into agencies, point to where they are deficient on the NIST Framework, and say “sign here to do this with our shared cybersecurity service”.
“identify authorities and capabilities that agencies could employ to support the cybersecurity efforts of critical infrastructure entities”

What this means is “how can we help secure the power grid?”

What it means in practice is that fiasco in the Vermont power grid. The DHS produced a report containing IoCs (“indicators of compromise”) of Russian hackers in the DNC hack. Among the things it identified was that the hackers used Yahoo! email. They pushed these IoCs out as signatures in their “Einstein” intrusion-detection system located at many power grid locations. The next person who logged into their Yahoo! email was then flagged as a Russian hacker, causing all sorts of hilarity to ensue, such as still-uncorrected stories by the Washington Post about how the Russians hacked our power grid.

The upshot is that federal government help is also going to include much government hindrance. They really are this stupid sometimes, and there is no way to fix this stupid. (Seriously, the DHS still insists it did the right thing pushing out the Yahoo IoCs.)
Resilience Against Botnets and Other Automated, Distributed Threats

The government wants to address botnets because it’s just the sort of problem they love, mass outages across the entire Internet caused by a million machines.

But frankly, botnets don’t even make the top 10 list of problems they should be addressing. Number one is clearly “phishing” — you know, the attack that’s been getting into the DNC and Podesta e-mails, influencing the election. You know, the attack that Gizmodo recently showed the Trump administration is partially vulnerable to. You know, the attack that most people blame as what probably led to that huge OPM hack. Replace the entire Executive Order with “stop phishing”, and you’d go further toward fixing federal government security.

But solving phishing is tough. To begin with, it requires a rethink of how the government does email, and of how desktop systems should be managed. So the government avoids complex problems it can’t understand to focus on the simple things it can — botnets.

Dealing with “prolonged power outage associated with a significant cyber incident”

The government has had the hots for this since 2001, even though there’s really been no attack on the American grid. After the Russian attacks against the Ukraine power grid, the issue is heating up.

Nation-wide attacks aren’t really a threat, yet, in America. We have 10,000 different companies involved with different systems throughout the country, and hacking them all at once is unlikely. What’s funny is that it’s the government’s attempts to standardize everything that’s likely to be our downfall, such as sticking Einstein sensors everywhere.

Instead of trying to make the grid unhackable, they should be trying to lessen our reliance upon it. They should be encouraging things like Tesla PowerWalls, solar panels on roofs, backup generators, and so on. Indeed, rather than treating industry only as a blackout victim, industrial backup power generation should be considered a source of grid backup. Factories and even ships were used to supplant the electric power grid in Japan after the 2011 tsunami, for example. The less we rely on the grid, the less a blackout will hurt us.

“cybersecurity risks facing the defense industrial base, including its supply chain”

So “supply chain” cybersecurity is increasingly becoming a thing. Almost anything electronic comes with millions of lines of code, silicon chips, and other things that affect the security of the system. In this context, they may be worried about intentional subversion of systems, such as that recent article worried about Kaspersky anti-virus in government systems. However, the bigger concern is the zillions of accidental vulnerabilities waiting to be discovered. It’s impractical for a vendor to secure a product, because it’s built from so many components the vendor doesn’t understand.

“strategic options for deterring adversaries and better protecting the American people from cyber threats”

Deterrence is a funny word.

Rumor has it that we forced China to back off on hacking by impressing them with our own hacking ability, such as reaching into China and blowing stuff up. This works because the Chinese government remains in power because things are going well in China. If there’s a hiccup in economic growth, there will be mass actions against the government.

But for our other cyber adversaries (Russia, Iran, North Korea), things already suck in their countries. It’s hard to see how we can make things worse by hacking them. They also have a stranglehold on the media, so hacking in and publicizing their leaders’ weird sex fetishes and offshore accounts isn’t going to work either.

Also, deterrence relies upon “attribution”, which is hard. While news stories claim last year’s expulsion of Russian diplomats was due to election hacking, that wasn’t the stated reason. Instead, the claimed reason was Russia’s interference with diplomats in Europe, such as breaking into diplomats’ homes and pooping on their dining room tables. We know it’s them when they are brazen (as was the case with Chinese hacking), but other hacks are harder to attribute.

Deterrence of nation states ignores the reality that much of the hacking against our government comes from non-state actors. It’s not clear how much of all this Russian hacking is actually directed by the government. Deterrence policies may be better directed at individuals, such as the recent arrest of a Russian hacker while they were traveling in Spain. We can’t get Russian or Chinese hackers in their own countries, so we have to wait until they leave.

Anyway, “deterrence” is one of those real-world concepts that is hard to shoehorn into a cyber (“cyber-deterrence”) equivalent. It encourages lots of bad thinking, such as export controls on “cyber-weapons” to deter foreign countries from using them.

“educate and train the American cybersecurity workforce of the future”

The problem isn’t that we lack CISSPs. Such blanket certifications devalue the technical expertise of the real experts. The solution is to empower the technical experts we already have.

In other words, mandate that whoever is the “cyberczar” is a technical expert, like how the Surgeon General must be a medical expert, or how an economic adviser must be an economic expert. For over 15 years, we’ve had a parade of non-technical people named “cyberczar” who haven’t been experts.

Once you tell people technical expertise is valued, then by nature more students will become technical experts.

BTW, the best technical experts are software engineers and sysadmins. The best cybersecurity for Windows is already built into Windows, whose sysadmins need to be empowered to use those solutions. Instead, they are often overridden by a clueless cybersecurity consultant who insists on making the organization buy a third-party product that does a poorer job. We need more technical expertise in our organizations, sure, but not necessarily more cybersecurity professionals.

Conclusion

This is really a government document, and government people will be able to explain it better than I can. This is just how I see it, as a technical expert and a government outsider.

My guess is that the most lasting, consequential thing will be making everyone follow the NIST Framework; the rest will just be a lot of aspirational stuff that’ll be ignored.

Some notes on #MacronLeak

Post Syndicated from Robert Graham original http://blog.erratasec.com/2017/05/some-notes-on-macronleak.html

Tonight (Friday, May 5, 2017) hackers dumped emails (and docs) related to French presidential candidate Emmanuel Macron. He’s the anti-Putin candidate running against the pro-Putin Marine Le Pen. I thought I’d write up some notes.

Are they Macron’s emails?

No. They are e-mails from members of his staff/supporters, namely Alain Tourret, Pierre Person, Cédric O, Anne-Christine Lang, and Quentin Lafay.

There are some documents labeled “Macron” which may have been taken from his computer or cloud drive — his own, or an assistant’s.

Who done it?
Obviously, everyone assumes that Russian hackers did it, but there’s nothing (so far) that points to anybody in particular.
It appears to be the most basic of phishing attacks, which means anyone could’ve done it, including your neighbor’s pimply-faced teenager.

Update: Several people [*] have pointed out Trend Micro’s report that Russian/APT28 hackers were targeting Macron back on April 24. Coincidentally, this is also the date of the latest emails in the dump.

What’s the hacker’s evil plan?
Everyone is proposing theories about the hacker’s plan, but the most likely answer is they don’t have one. Hacking is opportunistic. They likely targeted everyone in the campaign, and these were the only victims they could hack. It’s probably not the outcome they were hoping for.
But since they’ve gone through all the work, it’d be a shame to waste it. Thus, they are likely releasing the dump not because they believe it will do any good, but because it’ll do them no harm.
If there’s any plan, it’s probably a long range one, serving notice that any political candidate that goes against Putin will have to deal with Russian hackers dumping email.
Why now? Why not leak bits over time like with Clinton?

France has a campaign blackout starting tonight at midnight until the election on Sunday. Thus, it’s the perfect time to leak the files. Anything salacious, or even rumors of something bad, will spread virally through Facebook and Twitter, without the candidate or the media having a good chance to rebut the allegations.
The last emails in the logs appear to be from April 24, the day after the first round vote (Sunday’s vote is the second, runoff, round). Thus, the hackers could’ve leaked this dump any time in the last couple weeks. They chose now to do it.
Are the emails verified?
Yes and no.
Yes, we have DKIM signatures between people’s accounts, so we know for certain that hackers successfully breached these accounts. DKIM is an anti-spam method that cryptographically signs emails by the sending domain (e.g. @gmail.com), and thus, can also verify the email hasn’t been altered or forged.
But no, when a salacious email or document is found in the dump, it’ll likely not have such a signature (most emails don’t), and thus, we probably won’t be able to verify the scandal. In other words, the hackers could have altered or forged something that becomes newsworthy.
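
For messages that do carry a signature, the check is mechanical. Here is a minimal sketch in Python, assuming the third-party dkimpy package (pip install dkimpy) and a hypothetical message.eml file saved from a dump:

import dkim  # third-party "dkimpy" package
# Load the raw RFC 822 message, headers and body, as bytes.
with open("message.eml", "rb") as f:
    raw_message = f.read()
# dkim.verify() fetches the signing domain's public key via DNS and
# checks that the signed headers/body haven't been altered since signing.
if dkim.verify(raw_message):
    print("valid DKIM signature: unaltered since the domain signed it")
else:
    print("no valid signature: contents can't be cryptographically verified")
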
What are the most salacious emails/files?

I don’t know. Before this dump, hackers on 4chan were already making allegations that Macron had secret offshore accounts (debunked). Presumably we need to log in to 4chan tomorrow for them to point out salacious emails/files from this dump.

Another email going around seems to indicate that Alain Tourret, a member of the French legislature, had his assistant @FrancoisMachado buy drugs online with Bitcoin and had them sent to his office in the legislature building. The drug in question, 3-MMC, is a variant of meth that might be legal in France. The emails point to a tracking number that looks legitimate; at least, a package was indeed shipped to that area of Paris. There is a bitcoin transaction that matches the address, time, and amount specified in the emails. Some claim these drug emails are fake, but so far, I haven’t seen anything explaining why they should be fake. On the other hand, there’s nothing proving they are true (no DKIM sig), either.

Some salacious emails might be obvious, but some may take people with more expertise to find. For example, one email is a receipt from Uber (with proper DKIM validation) that shows the route that “Quenten” took on the night of the first round election. Somebody clued into the French political scene might be able to figure out he’s visiting his mistress, or something. (This is hypothetical — in reality, he’s probably going from one campaign rally to the next).

What’s the Macron camp’s response?

They have just the sort of response you’d expect.
They claim some of the documents/emails are fake, without getting into specifics. They claim the information needs to be understood in context. They claim that this was a “massive coordinated attack”, even though it’s something any pimply-faced teenager can do. They claim it’s an attempt to destabilize democracy. They call upon journalists to be “responsible”.

Defense against Doxing

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2017/03/defense_against.html

A decade ago, I wrote about the death of ephemeral conversation. As computers were becoming ubiquitous, some unintended changes happened, too. Before computers, what we said disappeared once we’d said it. Neither face-to-face conversations nor telephone conversations were routinely recorded. A permanent communication was something different and special; we called it correspondence.

The Internet changed this. We now chat by text message and e-mail, on Facebook and on Instagram. These conversations — with friends, lovers, colleagues, fellow employees — all leave electronic trails. And while we know this intellectually, we haven’t truly internalized it. We still think of conversation as ephemeral, forgetting that we’re being recorded and what we say has the permanence of correspondence.

That our data is used by large companies for psychological manipulation — we call this advertising — is well known. So is its use by governments for law enforcement and, depending on the country, social control. What made the news over the past year were demonstrations of how vulnerable all of this data is to hackers and the effects of having it hacked, copied, and then published online. We call this doxing.

Doxing isn’t new, but it has become more common. It’s been perpetrated against corporations, law firms, individuals, the NSA and — just this week — the CIA. It’s largely harassment and not whistleblowing, and it’s not going to change anytime soon. The data in your computer and in the cloud are, and will continue to be, vulnerable to hacking and publishing online. Depending on your prominence and the details of this data, you may need some new strategies to secure your private life.

There are two basic ways hackers can get at your e-mail and private documents. One way is to guess your password. That’s how hackers got their hands on personal photos of celebrities from iCloud in 2014.

How to protect yourself from this attack is pretty obvious. First, don’t choose a guessable password. This is more than not using “password1” or “qwerty”; most easily memorizable passwords are guessable. My advice is to generate passwords you have to remember by using either the XKCD scheme or the Schneier scheme, and to use large random passwords stored in a password manager for everything else.

Second, turn on two-factor authentication where you can, like Google’s 2-Step Verification. This adds another step besides just entering a password, such as having to type in a one-time code that’s sent to your mobile phone. And third, don’t reuse the same password on any sites you actually care about.
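
Going back to the first point, here is a sketch of the XKCD scheme in Python (the Schneier scheme instead derives a password from the first letters of a personal sentence); it assumes a local word list such as /usr/share/dict/words:

import secrets  # CSPRNG; don't use random.choice() for passwords
# Assumes a plain word list, one word per line.
with open("/usr/share/dict/words") as f:
    words = [w.strip() for w in f if w.strip().isalpha()]
# Four words from a ~100,000-word list gives about log2(100000**4) ≈ 66 bits.
print(" ".join(secrets.choice(words) for _ in range(4)))
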

You’re not done, though. Hackers have accessed accounts by exploiting the “secret question” feature and resetting the password. That was how Sarah Palin’s e-mail account was hacked in 2008. The problem with secret questions is that they’re not very secret and not very random. My advice is to refuse to use those features. Type randomness into your keyboard, or choose a really random answer and store it in your password manager.
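
For the last suggestion, one line of Python generates an answer random enough to defeat guessing; store the output in your password manager:

import secrets
print(secrets.token_urlsafe(16))  # e.g. use as a random "mother's maiden name"
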

Finally, you also have to stay alert to phishing attacks, where a hacker sends you an enticing e-mail with a link that sends you to a web page that looks almost like the expected page, but which actually isn’t. This sort of thing can bypass two-factor authentication, and is almost certainly what tricked John Podesta and Colin Powell.

The other way hackers can get at your personal stuff is by breaking in to the computers the information is stored on. This is how the Russians got into the Democratic National Committee’s network and how a lone hacker got into the Panamanian law firm Mossack Fonseca. Sometimes individuals are targeted, as when China hacked Google in 2010 to access the e-mail accounts of human rights activists. Sometimes the whole network is the target, and individuals are inadvertent victims, as when thousands of Sony employees had their e-mails published by North Korea in 2014.

Protecting yourself is difficult, because it often doesn’t matter what you do. If your e-mail is stored with a service provider in the cloud, what matters is the security of that network and that provider. Most users have no control over that part of the system. The only way to truly protect yourself is to not keep your data in the cloud where someone could get to it. This is hard. We like the fact that all of our e-mail is stored on a server somewhere and that we can instantly search it. But that convenience comes with risk. Consider deleting old e-mail, or at least downloading it and storing it offline on a portable hard drive. In fact, storing data offline is one of the best things you can do to protect it from being hacked and exposed. If it’s on your computer, what matters is the security of your operating system and network, not the security of your service provider.

Consider this for files on your own computer. The more things you can move offline, the safer you’ll be.

E-mail, no matter how you store it, is vulnerable. If you’re worried about your conversations becoming public, think about an encrypted chat program instead, such as Signal, WhatsApp or Off-the-Record Messaging. Consider using communications systems that don’t save everything by default.

None of this is perfect, of course. Portable hard drives are vulnerable when you connect them to your computer. There are ways to jump air gaps and access data on computers not connected to the Internet. Communications and data files you delete might still exist in backup systems somewhere — either yours or those of the various cloud providers you’re using. And always remember that there’s always another copy of any of your conversations stored with the person you’re conversing with. Even with these caveats, though, these measures will make a big difference.

When secrecy is truly paramount, go back to communications systems that are still ephemeral. Pick up the telephone and talk. Meet face to face. We don’t yet live in a world where everything is recorded and everything is saved, although that era is coming. Enjoy the last vestiges of ephemeral conversation while you still can.

This essay originally appeared in the Washington Post.

Is ‘aqenbpuu’ a bad password?

Post Syndicated from Robert Graham original http://blog.erratasec.com/2017/01/is-aqenbpuu-bad-password.html

Press secretary Sean Spicer has twice tweeted a random string, leading people to suspect he’s accidentally tweeted his Twitter password. One of these was ‘aqenbpuu’, which some have described as a “shitty password”. Is it actually bad?

No. It’s adequate. Not the best, perhaps, but not “shitty”.

It depends upon your threat model. The common threats are password reuse and phishing, where the strength doesn’t matter. Strength matters only when Twitter gets hacked and the password hashes are stolen.

Twitter uses the bcrypt password hashing technique, which is designed to be slow. A typical desktop with a GPU can only crack bcrypt passwords at a rate of around 321 hashes per second. Doing the math (26 to the power of 8 combinations, at 321 hashes per second), it will take about 20 years for this desktop to crack the password.
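
The arithmetic is easy to check in Python (321 hashes/second is the measurement from the benchmarking section below):

keyspace = 26 ** 8          # eight lowercase letters: 208,827,064,576 candidates
rate = 321                  # bcrypt hashes per second on one desktop GPU
seconds = keyspace / rate
print(seconds / (365 * 24 * 3600))  # ≈ 20.6 years to exhaust the keyspace
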

That’s not a good password. A botnet with thousands of desktops, or somebody willing to invest thousands of dollars on a supercomputer or cluster like Amazon’s, can crack that password in a few days.

But, it’s not a bad password, either. A hack of a Twitter account like this would be a minor event. It’s not worth somebody spending that many resources hacking it. Security is a tradeoff — you protect a ton of gold with Fort Knox-like protections, but you wouldn’t invest the same amount protecting a ton of wood. The same is true with passwords — as long as you don’t reuse your passwords, or fall victim to phishing, eight lower-case characters is adequate.

This is especially true if using two-factor authentication, in which case, such a password is more than adequate.

I point this out because the Trump administration is bad, and Sean Spicer is a liar. Our criticism needs to be limited to things we can support, such as the DC metro ridership numbers (which Spicer has still not corrected). Every time we weakly criticize the administration on things we cannot support, like “shitty passwords”, we lessen our credibility. We look more like people who will hate the administration no matter what they do, rather than people who are standing up for principles like “honesty”.


The numbers above aren’t approximations. I actually generated a bcrypt hash and attempted to crack it in order to benchmark how long this would take. I’ll describe the process here.

First of all, I installed the “PHP command-line”. While older versions of PHP used MD5 for hashing, the newer versions use Bcrypt.

# apt-get install php5-cli

I then created a PHP program that will hash the password:
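
The script itself appeared as a screenshot in the original post. A minimal reconstruction matching the three hashes shown below might look like this (password_hash() with PASSWORD_BCRYPT is what produces the $2y$ hashes):

<?php
// spicer.php -- reconstruction; the original script was posted as an image.
echo password_hash("ax", PASSWORD_BCRYPT), "\n";        // short test password
echo password_hash("aqenbpuu", PASSWORD_BCRYPT), "\n";  // default cost (10)
echo password_hash("aqenbpuu", PASSWORD_BCRYPT, ["cost" => 15]), "\n";  // harder
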

I actually use it three ways. The first way is to hash a small password “ax”, one short enough that the password cracker will actually succeed in cracking it. The second is to hash the password with PHP defaults, which is what I assume Twitter is using. The third is to increase the difficulty level, in case Twitter has increased the default difficulty level in order to protect weak passwords.

I then ran the PHP script, producing these hashes:
$ php spicer.php
$2y$10$1BfTonhKWDN23cGWKpX3YuBSj5Us3eeLzeUsfylemU0PK4JFr4moa
$2y$10$DKe2N/shCIU.jSfSr5iWi.jH0AxduXdlMjWDRvNxIgDU3Cyjizgzu
$2y$15$HKcSh42d8amewd2QLvRWn.wttbz/B8Sm656Xh6ZWQ4a0u6LZaxyH6

I ran the first one through the password cracker known as “Hashcat”. Within seconds, I get the correct password “ax”. Thus, I know Hashcat is correctly cracking the password. I actually had a little doubt, because the documentation doesn’t make it clear that the Bcrypt algorithm the program supports is the same as the one produced by PHP5.

I then run the second one, brute-forcing an eight-character password that’s all lower case (-a 3 ?l?l?l?l?l?l?l?l). Checking the speed as it runs, it stays pretty consistently slightly above 300 hashes/second. It’s not perfect — it keeps warning that it’s slow. This is because GPU acceleration works best when trying many password hashes at a time.

I tried the same sort of setup using John-the-Ripper in incremental mode. Whereas Hashcat uses the GPU, John uses the CPU. I have a 6-core Broadwell CPU, so I ran John-the-Ripper with 12 threads.

Curiously, it’s slightly faster, at 347 hashes-per-second on the CPU rather than 321 on the GPU.

Using the stronger work factor (the third hash I produced above), I get about 10 hashes/second on John, and 10 on Hashcat as well. It takes over a second to even generate the hash, meaning it’s probably too aggressive for a web server like Twitter to have to do that much work every time somebody logs in, so I suspect they aren’t that aggressive.

Would a massive IoT botnet help? Well, I tried out John on the Raspberry Pi 3, with the same settings, cracking the password at default difficulty. The result: the RPi is 35 times slower than my desktop computer at this task.

The RPi v3 has four cores and about twice the clock speed of IoT devices. So the typical IoT device would be 250 times slower than a desktop computer. This gives a good approximation of the difference in power.
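
Putting rough numbers on that, with a hypothetical botnet size:

desktop_rate = 347                 # hashes/sec, John on the 6-core CPU above
iot_rate = desktop_rate / 250      # per-device estimate from the RPi test
botnet_size = 100_000              # hypothetical
seconds = 26 ** 8 / (botnet_size * iot_rate)
print(seconds / 86400)             # ≈ 17 days for the whole keyspace
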


A comment asked whether the same hash could be generated with JavaScript/NodeJS. Yes, it can. So I did, as described below.

Okay, so first you need to use the “Node Package Manager” to install bcrypt. The name isn’t “bcrypt”, which refers to a module that I can’t get installed on any platform. Instead, you want “bcrypt-nodejs”.

# npm install bcrypt-nodejs
bcrypt-nodejs@0.0.3 node_modules\bcrypt-nodejs

On all my platforms (Windows, Ubuntu, Raspbian) this installed without problems. So now you just create a script spicer.js:

var bcrypt = require("bcrypt-nodejs");
console.log(bcrypt.hashSync("aqenbpuu"));

This produces the following hash, which has the same work factor as the one generated by the PHP script above:

$2a$10$Ulbm//hEqMoco8FLRI6k8uVIrGqipeNbyU53xF2aYx3LuQ.xUEmz6

Hashcat and John are then the same speed cracking this one as the other. The first characters, $2a$, define the hash type (bcrypt). Apparently, there’s a slight difference between that and $2y$, but it doesn’t change the analysis.


The first comment below questions the speed I get, because running Hashcat in benchmark mode, it gets much higher numbers for bcrypt.

This is actually normal, due to different iteration counts.

A bcrypt hash includes an iteration count (or more precisely, the logarithm of an iteration count). It repeats the hash that number of times. That’s what the $10$ means in the hash:

$2y$10$.........

The Hashcat benchmark uses the number 5 (actually, 2^5, or 32 iterations) as its count. But the default iteration count produced by PHP and NodeJS is 10 (2^10, or 1024 iterations). Thus, you’d expect these hashes to crack at a speed 32 times slower.
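
You can read the cost straight out of a hash string and predict that gap; a quick Python sketch:

h = "$2y$10$DKe2N/shCIU.jSfSr5iWi.jH0AxduXdlMjWDRvNxIgDU3Cyjizgzu"
cost = int(h.split("$")[2])     # the "10" field: log2 of the iteration count
print(2 ** (cost - 5))          # vs. the benchmark's cost of 5 -> 32x slower
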

And indeed, that’s what I get. Running the benchmark on my machine, I get the following output:

Hashtype: bcrypt, Blowfish(OpenBSD)
Speed.Dev.#1.....:    10052 H/s (82.28ms)

Doing the math, dividing 10052 hashes/sec by 321 hashes/sec, I get 31.3. This is close enough to 32, the expected answer, given the variabilities of thermal throttling, background activity on the machine, and so on.

Googling, this appears to be a common complaint. It’d be nice if the benchmark said something like ‘bcrypt[speed=5]’ to make this clear.


Spicer tweeted another password, “n9y25ah7”. Instead of all lower case, this is lower case plus digits, or 36 to the power of 8 combinations, making it about 13 times harder ((36/26)^8 ≈ 13.5), which is roughly in the same range.
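
Checking that ratio:

print(36 ** 8 / 26 ** 8)   # ≈ 13.5: lowercase+digits vs. lowercase-only
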


BTW, there are three notes I want to make.

The first is that a good practical exercise first tries to falsify the theory. In this case, I deliberately tested whether Hashcat and John were actually cracking the right password. They were. They both successfully cracked the two character password “ax”. I also tested GPU vs. CPU. Everyone knows that GPUs are faster for password cracking, but also everyone knows that Bcrypt is designed to be hard to run on GPUs.

The second note is that everything here is already discussed in my study guide on command-lines [*]. I mention that you can run PHP on the command-line, and that you can use Hashcat and John to crack passwords. My guide isn’t complete enough to be an explanation for everything, but it’s a good discussion of where you need to end up.

The third note is that I’m not a master of any of these tools. I know enough about these tools to Google the answers, not to pull them off the top of my head. Mastery is impossible, don’t even try it. For example, bcrypt is one of many hashing algorithms, and has all by itself a lot of complexity, such as the difference between $2a$ and $2y$, or the logarithmic iteration count. I ignored the issue of salting bcrypt altogether. So what I’m saying is that the level of proficiency you want is to be able to google the steps in solving a problem like this, not actually knowing all this off the top of your head.

Dear Obama, From Infosec

Post Syndicated from Robert Graham original http://blog.erratasec.com/2017/01/dear-obama-from-infosec.html

Dear President Obama:

We are more than willing to believe Russia was responsible for the hacked emails/records that influenced our election. We believe Russian hackers were involved. Even if these hackers weren’t under the direct command of Putin, we know he could put a stop to such hacking if he chose. It’s like harassment of journalists and diplomats. Putin encourages a culture of thuggery that attacks opposition, without his personal direction, but with his tacit approval.

Your lame attempts to convince us of what we already agree with have irretrievably damaged your message.

Instead of communicating with the American people, you worked through your typical system of propaganda, such as stories in the New York Times quoting unnamed “senior government officials”. We don’t want “unnamed” officials — we want named officials (namely you) who we can pin down and question. When you work through this system of official leaks, we believe you have something to hide, that the evidence won’t stand on its own.

We still don’t believe the CIA’s conclusions because we don’t know, precisely, what those conclusions are. Are they derived purely from companies like FireEye and CrowdStrike based on digital forensics? Or do you have spies in Russian hacker communities that give better information? This is such an important issue that it’s worth degrading sources of information in order to tell us, the American public, the truth.

You had the DHS and US-CERT issue the “GRIZZLY-STEPPE” [*] report “attributing those compromises to Russian malicious cyber activity”. It does nothing of the sort. It’s full of garbage. It contains signatures of viruses that are publicly available, used by hackers around the world, not just Russia. It contains a long list of IP addresses from perfectly normal services, like Tor, Google, Dropbox, Yahoo, and so forth.

Yes, hackers use Yahoo for phishing and malvertising. It doesn’t mean every access of Yahoo is an “Indicator of Compromise”.

For example, I checked my web browser [chrome://net-internals/#dns] and found that last year, on November 20th, it accessed two IP addresses that are on the Grizzly Steppe list.

No, this doesn’t mean I’ve been hacked. It means I just had a normal interaction with Yahoo. It means the Grizzly Steppe IoCs are garbage.
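
Reproducing that check is trivial, which is part of the problem. A Python sketch, with hypothetical filenames (the report’s IPs ship in CSV/STIX form):

with open("grizzly_steppe_ips.txt") as f:
    iocs = {line.strip() for line in f if line.strip()}
with open("observed_ips.txt") as f:   # e.g. dumped from the browser's DNS cache
    observed = {line.strip() for line in f if line.strip()}
# Any overlap "matches" the IoC list, even ordinary Yahoo/Tor/Google traffic.
print(observed & iocs)
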

If your intent was to show technical information to experts to confirm Russia’s involvement, you’ve done the precise opposite. Grizzly Steppe proves such enormous incompetence that we doubt all the technical details you might have. I mean, it’s possible that you classified the important details and de-classified the junk, but even then, that junk isn’t worth publishing. There’s no excuse for those Yahoo addresses to be in there, or the numerous other problems.

Among the consequences is that Washington Post story claiming Russians hacked into the Vermont power grid. What really happened is that somebody just checked their Yahoo email, thereby accessing one of the same IP addresses I did. How they get from the facts (one person accessed Yahoo email) to the story (Russians hacked power grid) is your responsibility. This misinformation is your fault.

You announced sanctions for the Russian hacking [*]. At the same time, you announced sanctions for Russian harassment of diplomatic staff. These two events are confused in the press, with most stories reporting you expelled 35 diplomats for hacking, when that appears not to be the case.

Your list of individuals/organizations is confusing. It makes sense to name the GRU, FSB, and their officers. But why name “ZorSecurity” but not sole proprietor “Alisa Esage Shevchenko”? It seems a minor target, and you give no information why it was selected. Conversely, you ignore the APT28/APT29 Dukes/CozyBear groups that feature so prominently in your official leaks. You also throw in a couple extra hackers, for finance hacks rather than election hacks. Again, this causes confusion in the press about exactly who you are sanctioning and why. It seems as slipshod as the DHS/US-CERT report.

Mr President, you’ve got two weeks left in office. Russia’s involvement is a huge issue, especially given President-Elect Trump’s pro-Russia stance. If you’ve got better information than this, I beg you to release it. As it stands now, all you’ve done is support Trump’s narrative, making this look like propaganda — and bad propaganda at that. Give us, the infosec/cybersec community, technical details we can look at, analyze, and confirm.

Regards,
Infosec

Data Storage Disasters SMBs Should Avoid

Post Syndicated from Peter Cohen original https://www.backblaze.com/blog/data-storage-disasters-smbs-avoid/

Avoid Data Disasters

No one wants to get caught off guard when disaster strikes. And disasters are kind of inevitable, typically when you least expect them. Forewarned is forearmed. Here are five data storage disasters just waiting to happen to small to medium-sized businesses. We also offer some practical advice for how to avoid them.

Not Knowing Where Your Data Is

Data scatter is a big problem even in small organizations. Some data may be stored in the cloud, some may be on local machines, some may be on servers. Two-thirds of all corporate data exists outside the traditional data center. Make sure you know where your data is and how to protect it.

Conduct a data assessment to find out where your data lives. That includes customer records, financial and compliance data, application and server software, anything else necessary to keep your doors open. Know how data is used. Identify high-priority and high-value data to your organization.

Also understand that not everything is necessary to keep on-hand. Having redundancy and systems in place to retrieve every single bit of data is costly. Be wary of implementation issues that can create headaches, like time to restore. Separate out what’s absolutely necessary from that which would be nice to have, and that which is redundant and rebuildable.

Not Protecting Against Malware

Data breaches caused by malware infestations – especially ransomware – are on the rise. Ransomware encrypts an infected computer’s hard drive, locking you out. Unless you pay up using a cryptocurrency like Bitcoin, you’re locked out of your data with no way to restore it (without a backup).

Some organizations have paid hackers tens of thousands of dollars to unlock systems that have been taken down by ransomware. Even we at Backblaze have been affected by ransomware (having a recent backup got us out of that pickle). Even plain old malware which hijacks web browser search fields or injects advertisements causes problems that cost you time and money to fix.

Sure, you can disinfect individually affected machines, but when it happens to an entire organization it can be crippling. What’s more, any way you slice it, it wastes employee productivity, time and resources.

Use a multi-point strategy to combat malware that combines user education with best security practices. Help users discriminate between legitimate inbound emails and phishing attempts, for example. Make them wary of connecting Wi-Fi enabled devices on unsecured networks (or disable that capability altogether). Force periodic password changes. Use Mobile Device Management (MDM) tools to update remote machines and disable them if they’re stolen or lost.

Installing good anti-malware software is crucial, but endpoint security on user computers shouldn’t be the only proactive defense. If you take care of more than a handful of computers, save time and resources by using apps that centralize anti-malware software updates and malware definition file distribution.

Besides users, servers also need to be protected from malware. Also, keep network gear current with firmware updates to help maintain security. Make sure that passwords on those devices are changed periodically, as well.

Not Having A Disaster Recovery Plan

As we said at the outset, forewarned is forearmed. Create a written disaster recovery plan (stored safely where you can retrieve it when needed) that covers all possible contingencies. Think through the threats your business faces: human error, malfeasance, natural disasters, theft, fire, and device or component failure may be some of the things you should be thinking about.

Once you’ve assessed the threats, try to evaluate the actual risks. Being attacked by an angry grizzly bear is certainly a threat, but unless you’re in the Kodiak wilderness, it’s not a plausible risk. Conversely, if your business is located on a floodplain, it might be good to have a contingency in place for the next time the river nearby crests its banks.

Is your IT disaster recovery plan focused just specifically on one part of your business operations, like your server room or data center? What’s your plan for the laptop and desktop computers, handheld devices and other gear used by your employees? Do you have system images in place to quickly restore computers? Can you run some systems as virtual machines in a pinch?

Once you have plans in place, the important thing is to test them periodically. It’ll help you work out implementation problems beforehand, so when disaster strikes, your organization can still move like a well-oiled machine.

Not Using Encryption

Data theft is such a pernicious problem these days that you need to use every safeguard you can to protect the integrity and safety of your data.

Someone could hack into your systems and steal information, or a careless employee can leave an unguarded laptop on the table at Starbucks. Any time your data is exposed or could be exposed to outside threats, there should be some inherent safeguard to protect it. Encryption can help.

macOS, Windows, and modern Linux distributions support full-disk encryption. It’s FileVault on the Mac, and BitLocker in Windows. Traveling executives, salespeople with laptops, field technicians or anyone else who takes sensitive data offsite are good encryption candidates. Anyone in-house who handles customer records or sensitive business intelligence should also use encryption wherever practical. Make sure that you keep a (secure) record of the encryption keys needed to decrypt any protected systems to avoid data recovery problems down the road.

Encrypting endpoint data is important, but so is encrypting data in transit. If you’re regularly backing up to the cloud or using online file sync services, make sure they support encryption to protect your data (all Backblaze backup products support encryption).

Not Having A Recent Backup

Having a good backup strategy in place is crucial to being able to keep your business running. Develop a backup strategy that protects all of your critical data, and automates it as much as possible to run on a schedule.

The 3-2-1 Backup Strategy is a good place to start: Three copies of data – live, backup and offsite. User systems with important data should be backed up, as should servers and any other computers needed to run the business. One backup should be stored locally for easy recovery, and one copy of the backup should be stored offsite. This is where a cloud service (like Backblaze for Business, or for server and NAS systems, B2 Cloud Storage) can come in really handy. Just make sure to observe safe data handling procedures (like encryption, as mentioned above) to keep everything in your control.

This is a good starting point for a discussion within your organization about how to protect yourselves from data loss. If you have questions or comments, please let us know!


"From Putin with Love" – a novel by the New York Times

Post Syndicated from Robert Graham original http://blog.erratasec.com/2016/12/from-putin-with-love-novel-by-new-york.html

In recent weeks, the New York Times has written many stories on Russia’s hacking of the Trump election. This front page piece [*] alone takes up 9,000 words. Combined, the NYTimes coverage on this topic exceeds the length of a novel. Yet, for all this text, the number of verifiable facts also equals that of a novel, namely zero. There’s no evidence this was anything other than an undirected, Anonymous-style op based on a phishing campaign.

The question that drives us

It’s not that Russia isn’t involved, it’s that the exact nature of their involvement is complicated. Just because the hackers live in Russia doesn’t automatically mean their attacks are directed by the government.

It’s like the recent Islamic terrorist attacks in Europe and America. Despite ISIS claiming credit, and the perpetrators crediting ISIS, we are loath to actually blame the attacks directly on ISIS. Overwhelmingly, it’s individuals who finance and plan their attacks, with no ISIS organizational involvement other than inspiration.

The same goes for Russian hacks. The Russian hacker community is complicated. There are lots of actors with various affiliations with the government. They are almost always nationalistic, almost always pro-Putin. There are many individuals and groups who act to the benefit of Putin/Russia with no direct affiliation with the government. Others do have ties with the government, but these are often informal relationships, sustained by patronage and corruption.

Evidence tying Russian attacks to the Russian government is thus the most important question of all — and it’s one that the New York Times is failing to answer. The fewer facts they have, the more they fill the void with vast amounts of verbiage.

Sustaining the narrative

Here’s a trick when reading New York Times articles: when they switch to passive voice, they are covering up a lie. An example is this paragraph from the above story [*]:

The Russians were also quicker to turn their attacks to political purposes. A 2007 cyberattack on Estonia, a former Soviet republic that had joined NATO, sent a message that Russia could paralyze the country without invading it. The next year cyberattacks were used during Russia’s war with Georgia.

Normally, editors would switch this to the active voice, or:

The next year, Russia used cyberattacks in their war against Georgia.

But that would be factually wrong. Yes, cyberattacks happened during the conflicts with Estonia and Georgia, but the evidence in both cases points to targets and tools going viral on social media and web forums. It was the people who conducted the attacks, not the government. Whether it was the government who encouraged the people is the big question — to which we have no answer. Since the NYTimes has no evidence pointing to the Russian government, they switch to the passive voice, hoping you’ll assume they meant the government was to blame.

It’s a clear demonstration that the NYTimes is pushing a narrative, rather than reporting just the facts allowing you to decide for yourself.

Tropes and cliches

The NYTimes story is dominated by cliches or “tropes”.

One such trope is how hackers are always “sophisticated”, which leads to the conclusion they must be state-sponsored, not simple like the Anonymous collective. Amusingly, the New York Times tries to give two conflicting “sophisticated” narratives at once. Their article [*] has a section titled “Honing Stealthy Tactics”, which ends with describing the attacks as “brazen”, full of “boldness”. In other words, sophisticated Russian hackers are marked by “brazen stealthiness”, a contradiction in terms. In reality, the DNC/DCCC/Podesta attacks were no more sophisticated than any other Anonymous attack, such as the one against Stratfor.

A related trope is the sophistication of defense. For example, the NYTimes describes [*] how the DNC is a non-profit that could not afford “the most advanced systems in place” to stop phishing emails. After the hacks, they installed the “robust set of monitoring tools”. This trope imagines there’s a magic pill that victims can use to defend themselves against hackers. Experts know this isn’t how cybersecurity works — the amount of money spent, or the advancement of technology, has little impact on an organization’s ability to defend itself.

Another trope is the word “target”, which imagines that every effect from a hacker was the original intention. In other words, it’s the trope that tornados target trailer parks. Part of the NYTimes “narrative” is this story that “House candidates were also targets of Russian hacking” [*]. This is post-factual fake-news. Guccifer2.0 targeted the DCCC, not individual House candidates. Sure, at the request of some bloggers, Guccifer2.0 released part of their treasure trove for some specific races, but the key here is the information withheld, not the information released. Guccifer2.0 made bloggers beg for it, dribbling out bits at a time, keeping themselves in the news, wrapped in an aura of mysteriousness. If their aim was to influence House races, they’d’ve dumped info on all the races.

In other words, the behavior is that of an Anonymous-style hacker which the NYTimes twists into behavior of Russian intelligence.

The word “trope” is normally applied to fiction. When the NYTimes devolves into hacking tropes, like the “targets” of “sophisticated” hackers, you know their news story is fiction, too.

Anonymous government officials

In the end, the foundation of the NYTimes narrative relies upon leaked secret government documents and quotes by anonymous government officials [*]. This is otherwise known as “propaganda”.

The senior government officials are probably the Democrat senators who were briefed by the CIA. These senators leak their version of the CIA briefing, cherry picking the bits that support their story, removing the nuanced claims that were undoubtedly part of the original document.

It’s what the Society of Professional Journalists call the “Washington Game”. Everyone knows how this game is played. That’s why Marcy Wheeler (@emptywheel) [*] and Glenn Greenwald (@ggreenwald) [*] dissected that NYTimes piece. They are both as anti-Trump/anti-Russia as they come, so it’s not their political biases that lead them to challenge that piece. Instead, it’s their knowledge of what bad journalism looks like that motivated their criticisms.

If the above leaks weren’t authorized by Obama, the administration would be announcing an investigation into who is leaking major secrets. Thus, we know the leaks were “authorized”. Obama’s willingness to release the information unofficially, but not officially, means there are holes in it somewhere. There’s something he’s hiding, covering up. Otherwise, he’d have a press conference and field questions from reporters on the topic.

Conclusion

The issue of Russia’s involvement in the election is so important that we should demand real facts, real statements from the government that we can question and challenge. It’s too important to leave up to propaganda. If Putin is involved, we deserve to understand it, and not simply get the “made for TV” version given us by the NYTimes.

Propaganda is what we have here. The NYTimes has written a novel that delivers the message while protecting the government from being questioned. Facts are replaced with distorted narrative, worn tropes, and quotes from anonymous government officials.

What we actually see is an attack no more sophisticated than those conducted by LulzSec and Anonymous. We see an attack that is disorganized and opportunistic, exactly what we’d expect from an Anonymous-style attack. Putin’s regime may be involved, and they may have a plan, but the current evidence looks like casual hackers, not professional hackers working for an intelligence service.

This artsy stock photo of FSB headquarters is not evidence.

Note: many ideas in this piece come from a discussion with a friend who doesn’t care to be credited.

That "Commission on Enhancing Cybersecurity" is absurd

Post Syndicated from Robert Graham original http://blog.erratasec.com/2016/12/that-commission-on-enhancing.html

An Obama commission has published a report on how to “Enhance Cybersecurity”. It’s promoted as having been written by neutral, bipartisan, technical experts. Instead, it’s almost entirely dominated by special interests and the Democrat politics of the outgoing administration.

In this post, I go through a random list of some of the 53 “action items” proposed by the document. I show how they are policy issues, not technical issues. Indeed, much of the time the technical details are warped to conform to special interests.

IoT passwords

The recommendations include such things as Action Item 2.1.4:

Initial best practices should include requirements to mandate that IoT devices be rendered unusable until users first change default usernames and passwords. 

This recommendation for changing default passwords is repeated many times. It comes from the way the Mirai worm exploits devices by using hardcoded/default passwords.

But this is a misunderstanding of how these devices work. Take, for example, the infamous Xiongmai camera. It has user accounts on the web server to control the camera. If the user forgets the password, the camera can be reset to factory defaults by pressing a button on the outside of the camera.

But here’s the deal with security cameras. They are placed at remote sites miles away, up on the second story where people can’t mess with them. In order to reset them, you need to put a ladder in your truck and drive 30 minutes out to the site, then climb the ladder (an inherently dangerous activity). Therefore, Xiongmai provides a RESET.EXE utility for remotely resetting them. That utility happens to connect via Telnet using a hardcoded password.

The above report misunderstands what’s going on here. It sees Telnet and a hardcoded password, and makes assumptions. Some people assume that this is the normal user account — it’s not, it’s unrelated to the user accounts on the web server portion of the device. Requiring the user to change the password on the web service would have no effect on the Telnet service. Other people assume the Telnet service is accidental, that good security hygiene would remove it. Instead, it’s an intended feature of the product, to remotely reset the device. Fixing the “password” issue as described in the above recommendations would simply mean the manufacturer would create a different, custom backdoor that hackers would eventually reverse engineer, creating a MiraiV2 botnet. Instead of security guides banning backdoors, they need to come up with a standard for remote reset.
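If the industry did standardize remote reset, a minimal sketch might look like the following. Everything here (the per-device secret, the token format, the function names) is hypothetical, not anything Xiongmai ships: each device carries its own secret, recorded at install time, and only honors a reset command carrying a fresh HMAC computed from that secret.

```python
import hmac
import hashlib
import secrets

def make_reset_token(device_secret: bytes, nonce: bytes) -> bytes:
    # The owner's reset tool computes this from the per-device secret
    # printed on the camera's label (hypothetical scheme).
    return hmac.new(device_secret, b"RESET" + nonce, hashlib.sha256).digest()

def device_handle_reset(device_secret: bytes, nonce: bytes, token: bytes) -> bool:
    # Device side: the nonce is one the device just issued (blocks replay),
    # and the token must match before the factory reset proceeds.
    expected = make_reset_token(device_secret, nonce)
    return hmac.compare_digest(expected, token)

# The installer resets a camera a 30-minute drive away -- no ladder needed.
device_secret = b"per-device secret recorded at install"  # unique per unit
nonce = secrets.token_bytes(16)                           # issued by the device
token = make_reset_token(device_secret, nonce)
assert device_handle_reset(device_secret, nonce, token)
```

Unlike a global hardcoded password, reverse engineering one unit of this design gains an attacker nothing against the rest of the fleet.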

That characterization of Mirai as an IoT botnet is wrong. Mirai is a botnet of security cameras. Security cameras are fundamentally different from IoT devices like toasters and fridges because they are often exposed to the public Internet. To stream video on your phone from your security camera, you need a port open on the Internet. Non-camera IoT devices, however, are overwhelmingly protected by a firewall, with no exposure to the public Internet. While you can create a botnet of Internet cameras, you cannot create a botnet of Internet toasters.

The point I’m trying to demonstrate here is that the above report was written by policy folks with little grasp of the technical details of what’s going on. They use Mirai to justify several of their “Action Items”, none of which actually apply to the technical details of Mirai. It has little to do with IoT, passwords, or hygiene.

Public-private partnerships

Action Item 1.2.1: The President should create, through executive order, the National Cybersecurity Private–Public Program (NCP3) as a forum for addressing cybersecurity issues through a high-level, joint public–private collaboration.

We’ve had public-private partnerships to secure cyberspace for over 20 years, such as the FBI InfraGard partnership. President Clinton had a plan in 1998 to create a public-private partnership to address cyber vulnerabilities. President Bush declared public-private partnerships the “cornerstone” of his 2003 plan to secure cyberspace.

Here we are 20 years later, and this document is full of new, naive proposals for public-private partnerships. There’s no analysis of why they have failed in the past, nor a discussion of which ones have succeeded.

The many calls for public-private programs reflect the left-wing nature of this supposedly “bipartisan” document, which sees government as a paternalistic entity that can help. The right-wing doesn’t believe the government provides any value in these partnerships. In my 20 years of experience with government public-private partnerships in cybersecurity, I’ve found them to be a time-waster at best and, at worst, a way to coerce “voluntary measures” out of companies that hurt the public’s interest.

Build a wall and make China pay for it

Action Item 1.3.1: The next Administration should require that all Internet-based federal government services provided directly to citizens require the use of appropriately strong authentication.

This would cost at least $100 per person, for 300 million people, or $30 billion. In other words, it’ll cost more than Trump’s wall with Mexico.

Hardware tokens are cheap. Blizzard (a popular gaming company) must deal with widespread account hacking from “gold sellers”, and provides second-factor authentication tokens to its gamers for $6 each. But that ignores the enormous support costs involved. How does a person prove their identity to the government in order to get such a token? To replace a lost token? When old tokens break? What happens if somebody’s token is stolen?
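For reference, a token like that implements a one-time-password scheme. Here is a minimal TOTP-style sketch (simplified from the RFC 6238 flavor; the seed and parameters are invented for illustration, not Blizzard’s actual implementation):

```python
import hmac
import hashlib
import struct
import time

def totp(secret: bytes, t: float, step: int = 30, digits: int = 6) -> str:
    # HOTP over the current 30-second time window (simplified RFC 6238).
    counter = struct.pack(">Q", int(t // step))
    mac = hmac.new(secret, counter, hashlib.sha1).digest()
    offset = mac[-1] & 0x0F  # dynamic truncation picks 4 bytes of the MAC
    code = (struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF)
    return str(code % 10**digits).zfill(digits)

secret = b"shared-seed-in-the-token"  # burned into the $6 token (hypothetical)
print(totp(secret, time.time()))      # what the user types as a second factor
```

The cryptography is the cheap part; the identity-proofing and token-replacement logistics above are where the real costs live.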

And that’s the best case scenario. Other options, like using cellphones as a second factor, are non-starters.

This is actually not a bad recommendation as far as government services are concerned, but it ignores the costs and difficulties involved.

But then the recommendations go on to suggest this for the private sector as well:

Specifically, private-sector organizations, including top online retailers, large health insurers, social media companies, and major financial institutions, should use strong authentication solutions as the default for major online applications.

No, no, no. There is no reason for a “top online retailer” to know your identity. I lie about my identity. Amazon.com thinks my name is “Edward Williams”, for example.

They get worse with:

Action Item 1.3.3: The government should serve as a source to validate identity attributes to address online identity challenges.

In other words, they are advocating a cyber-dystopic police-state wet-dream where the government controls everyone’s identity. We already see how this fails with Facebook’s “real name” policy, where everyone from political activists in other countries to LGBTQ in this country get harassed for revealing their real names.

Anonymity and pseudonymity are precious rights on the Internet that we now enjoy — rights endangered by the radical policies in this document. This document frequently claims to promote security “while protecting privacy”. But the government doesn’t protect privacy — much of what we want from cybersecurity is to protect our privacy from government intrusion. This is nothing new, you’ve heard this privacy debate before. What I’m trying to show here is that the one-sided view of privacy in this document demonstrates how it’s dominated by special interests.

Cybersecurity Framework

Action Item 1.4.2: All federal agencies should be required to use the Cybersecurity Framework. 

The “Cybersecurity Framework” is a bunch of nonsense that would require another long blogpost to debunk. It requires months of training and years of experience to understand. It contains things like “DE.CM-4: Malicious code is detected”, as if that’s a thing organizations are able to do.

All the while it ignores the most common cyber attacks (SQL/web injections, phishing, password reuse, DDoS). It’s a typical example where organizations spend enormous amounts of money following process while getting no closer to solving what the processes are attempting to solve. Federal agencies using the Cybersecurity Framework are no safer from my pentests than those who don’t use it.

It gets even crazier:

Action Item 1.5.1: The National Institute of Standards and Technology (NIST) should expand its support of SMBs in using the Cybersecurity Framework and should assess its cost-effectiveness specifically for SMBs.

Small businesses can’t even afford to read the “Cybersecurity Framework”. Simply reading the doc and trying to understand it would exceed their entire IT/computer budget for the year. It would take a high-priced consultant earning $500/hour to tell them that “DE.CM-4: Malicious code is detected” means “buy antivirus and keep it up to date”.

Software liability is a hoax invented by the Chinese to make our IoT less competitive

Action Item 2.1.3: The Department of Justice should lead an interagency study with the Departments of Commerce and Homeland Security and work with the Federal Trade Commission, the Consumer Product Safety Commission, and interested private sector parties to assess the current state of the law with regard to liability for harm caused by faulty IoT devices and provide recommendations within 180 days. 

For over a decade, leftists in the cybersecurity industry have been pushing the concept of “software liability”. Every time there is a major new development in hacking, such as the worms around 2003, they come out with documents explaining why there’s a “market failure” and that we need liability to punish companies to fix the problem. Then the problem is fixed, without software liability, and the leftists wait for some new development to push the theory yet again.

It’s especially absurd for the IoT marketspace. The harm, as they imagine, is DDoS. But the majority of devices in Mirai were sold by non-US companies to non-US customers. There’s no way US regulations can stop that.

What US regulations will stop is IoT innovation in the United States. Regulations are so burdensome, and liability lawsuits so punishing, that they will kill all innovation within the United States. If you want to get rich with a clever IoT Kickstarter project, forget about it: your entire development budget will go to cybersecurity. The only companies that will be able to afford to ship IoT products in the United States will be large industrial concerns like GE that can afford the overhead of regulation/liability.

Liability is a left-wing policy issue, not one supported by technical analysis. Software liability has proven to be immaterial in any past problem and current proponents are distorting the IoT market to promote it now.

Cybersecurity workforce

Action Item 4.1.1: The next President should initiate a national cybersecurity workforce program to train 100,000 new cybersecurity practitioners by 2020. 

The problem in our industry isn’t the lack of “cybersecurity practitioners”, but the overabundance of “insecurity practitioners”.

Take “SQL injection” as an example. It’s been the most common way hackers break into websites for 15 years. It happens because programmers, those building web-apps, blindly paste input into SQL queries. They do that because they’ve been trained to do it that way. All the textbooks on how to build webapps teach them this. All the examples show them this.
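As a concrete sketch of the pattern (using Python’s stdlib sqlite3 and a toy table, not lifted from any particular textbook), here is the taught-everywhere version next to the parameterized version that avoids the problem:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, password TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'hunter2')")

name = "' OR '1'='1"  # attacker-controlled input

# The textbook pattern: paste the input straight into the query string.
# The attacker's quote characters rewrite the query's logic.
rows = conn.execute(f"SELECT * FROM users WHERE name = '{name}'").fetchall()
print(rows)  # every row comes back; the injection succeeded

# The fix: parameterized queries. The driver treats input purely as data,
# so quote characters can't change the query's structure.
rows = conn.execute("SELECT * FROM users WHERE name = ?", (name,)).fetchall()
print(rows)  # [] -- nobody is literally named "' OR '1'='1"
```

The parameterized version is no harder to write; the problem is that the first version is what gets taught.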

So you have government programs on one hand pushing tech education, teaching kids to build web-apps with SQL injection. Then you propose to train a second group of people to fix the broken stuff the first group produced.

The solution to SQL/website injections is not more practitioners, but stopping programmers from creating the problems in the first place. The solution to phishing is to use the tools already built into Windows and networks that sysadmins use, not adding new products/practitioners. These are the two most common problems, and they happen not because of a lack of cybersecurity practitioners, but because the lack of cybersecurity as part of normal IT/computers.

I point this out to demonstrate yet again that the document was written by policy people with little or no technical understanding of the problem.

Nutritional label

Action Item 3.1.1: To improve consumers’ purchasing decisions, an independent organization should develop the equivalent of a cybersecurity “nutritional label” for technology products and services—ideally linked to a rating system of understandable, impartial, third-party assessment that consumers will intuitively trust and understand. 

This can’t be done. Grab some IoT devices, like my thermostat, my car, or a Xiongmai security camera used in the Mirai botnet. These devices are so complex that no “nutritional label” can be made for them.

One of the things you’d like to know is all the software dependencies, so that if there’s a bug in OpenSSL, for example, then you know your device is vulnerable. Unfortunately, that requires a nutritional label with 10,000 items on it.
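To make the problem concrete, here is a hedged sketch of what consuming such a label might look like: a hypothetical dependency manifest checked against a known-vulnerable OpenSSL range. The manifest format, version numbers, and advisory are all invented for illustration:

```python
# Hypothetical "nutritional label": the device's software dependencies.
# A real label would run to thousands of entries, which is the problem.
manifest = {
    "linux-kernel": "3.4.1",
    "busybox": "1.19.0",
    "openssl": "1.0.1f",
    # ...and roughly 10,000 more transitive dependencies...
}

# Invented advisory: OpenSSL 1.0.1 through 1.0.1f is affected.
vulnerable = {"openssl": ("1.0.1", "1.0.1f")}

for pkg, version in manifest.items():
    if pkg in vulnerable:
        low, high = vulnerable[pkg]
        if low <= version <= high:  # naive string compare, fine for this toy
            print(f"{pkg} {version} is in the vulnerable range {low}-{high}")
```

The check itself is trivial; producing and maintaining an honest 10,000-entry label for every firmware revision is what no vendor will do.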

Or, one thing you’d want to know is that the device has no backdoor passwords. But that would miss the Xiongmai devices. The web service has no backdoor passwords. If you caught the Telnet backdoor password and removed it, then you’d miss the special secret backdoor that hackers would later reverse engineer.

This is a policy position chasing a non-existent technical issue, pushed by Peiter Zatko, who has gotten hundreds of thousands of dollars in government grants to push the issue. It’s his way of getting rich and has nothing to do with sound policy.

Cyberczars and ambassadors

Various recommendations call for the appointment of various CISOs, Assistant to the President for Cybersecurity, and an Ambassador for Cybersecurity. But nowhere does it mention these should be technical posts. This is like appointing a Surgeon General who is not a doctor.

The government’s problems with cybersecurity stem from the way technical knowledge is so disrespected. The current cyberczar prides himself on his lack of technical knowledge, because that helps him see the bigger picture.

Ironically, many of the other Action Items are about training cybersecurity practitioners, employees, and managers. None of this can happen as long as leadership is clueless. Technical details matter, as I show above with the Mirai botnet. Subtlety and nuance in technical details can call for opposite policy responses.

Conclusion

This document is promoted as being written by technical experts. However, nothing in the document is neutral technical expertise. Instead, it’s almost entirely a policy document dominated by special interests and left-wing politics. In many places it makes recommendations to the incoming Republican president. His response should be to round-file it immediately.

I only chose a few items, as this blogpost is long enough as it is. I could pick almost any of the 53 Action Items to demonstrate how they are policy- and special-interest-driven rather than reflecting technical expertise.

Chrome and Firefox Brand The Pirate Bay As a “Phishing” Site…..Again

Post Syndicated from Ernesto original https://torrentfreak.com/chrome-and-firefox-brand-the-pirate-bay-as-a-phishing-site-again-161006/

Millions of Pirate Bay users are currently unable to access the torrent detail pages on the site without receiving a stark warning.

Over the past few hours Chrome and Firefox have started to block access to ThePirateBay.org due to reported security issues.

The homepage and various categories can be reached without problems, but when visitors navigate to a download page they are presented with an ominous red warning banner.

“Deceptive site ahead: Attackers on Thepiratebay.org may trick you into doing something dangerous like installing software or revealing your personal information,” it reads.

“Google Safe Browsing recently detected phishing on thepiratebay.org. Phishing sites pretend to be other websites to trick you,” the Chrome warning adds.

Chrome’s latest Pirate Bay warning

Firefox is showing a similar error message, as do all applications and services that use Google’s safe browsing database, which currently lists TPB as “partially dangerous.”

According to Google, the notorious torrent site is linked to a phishing effort, where malicious actors try to steal the personal information of visitors.

It’s likely that the security error is caused by a malicious third-party advertisement. The TPB team informs TorrentFreak that they are aware of the issue, which they hope will be resolved soon.

This is not the first time that The Pirate Bay has been flagged by Google’s safe browsing filter. The same happened just a month ago, when the site was accused of spreading “harmful programs.” That warning eventually disappeared after a few days.

By now, most Chrome and Firefox users should be familiar with these intermittent warning notices. Those who are in a gutsy mood can simply “ignore the warning” or take steps (Chrome, FF) to bypass the blocks permanently.


Is WhatsApp Hacked?

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2016/10/is_whatsapp_hac.html

Forbes is reporting that the Israeli cyberweapons arms manufacturer Wintego has a man-in-the-middle exploit against WhatsApp.

It’s a weird story. I’m not sure how they do it, but something doesn’t sound right.

Another possibility is that CatchApp is malware thrust onto a device over Wi-Fi that specifically targets WhatsApp. But it’s almost certain the product cannot crack the latest standard of WhatsApp cryptography, said Matthew Green, a cryptography expert and assistant professor at the Johns Hopkins Information Security Institute. Green, who has been impressed by the quality of the Signal code, added: “They would have to defeat both the encryption to and from the server and the end-to-end Signal encryption. That does not seem feasible at all, even with a Wi-Fi access point.

“I would bet mundanely the password stuff is just plain phishing. You go to some site, it asks for your Google account, you type it in without looking closely at the address bar.

“But the WhatsApp stuff manifestly should not be vulnerable like that. Interesting.”

Neither WhatsApp nor the crypto whizz behind Signal, Moxie Marlinspike, were happy to comment unless more specific details were revealed about the tool’s capability. Either Wintego is embellishing what its real capability is, or it has a set of exploits that the rest of the world doesn’t yet know about.

Security Design: Stop Trying to Fix the User

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2016/10/security_design.html

Every few years, a researcher replicates a security study by littering USB sticks around an organization’s grounds and waiting to see how many people pick them up and plug them in, causing the autorun function to install innocuous malware on their computers. These studies are great for making security professionals feel superior. The researchers get to demonstrate their security expertise and use the results as “teachable moments” for others. “If only everyone was more security aware and had more security training,” they say, “the Internet would be a much safer place.”

Enough of that. The problem isn’t the users: it’s that we’ve designed our computer systems’ security so badly that we demand the user do all of these counterintuitive things. Why can’t users choose easy-to-remember passwords? Why can’t they click on links in emails with wild abandon? Why can’t they plug a USB stick into a computer without facing a myriad of viruses? Why are we trying to fix the user instead of solving the underlying security problem?

Traditionally, we’ve thought about security and usability as a trade-off: a more secure system is less functional and more annoying, and a more capable, flexible, and powerful system is less secure. This “either/or” thinking results in systems that are neither usable nor secure.

Our industry is littered with examples. First: security warnings. Despite researchers’ good intentions, these warnings just inure people to them. I’ve read dozens of studies about how to get people to pay attention to security warnings. We can tweak their wording, highlight them in red, and jiggle them on the screen, but nothing works because users know the warnings are invariably meaningless. They don’t see “the certificate has expired; are you sure you want to go to this webpage?” They see, “I’m an annoying message preventing you from reading a webpage. Click here to get rid of me.”

Next: passwords. It makes no sense to force users to generate passwords for websites they only log in to once or twice a year. Users realize this: they store those passwords in their browsers, or they never even bother trying to remember them, using the “I forgot my password” link as a way to bypass the system completely — ­effectively falling back on the security of their e-mail account.

And finally: phishing links. Users are free to click around the Web until they encounter a link to a phishing website. Then everyone wants to know how to train the user not to click on suspicious links. But you can’t train users not to click on links when you’ve spent the past two decades teaching them that links are there to be clicked.

We must stop trying to fix the user to achieve security. We’ll never get there, and research toward those goals just obscures the real problems. Usable security does not mean “getting people to do what we want.” It means creating security that works, given (or despite) what people do. It means security solutions that deliver on users’ security goals without­ — as the 19th-century Dutch cryptographer Auguste Kerckhoffs aptly put it­ — “stress of mind, or knowledge of a long series of rules.”

I’ve been saying this for years. Security usability guru (and one of the guest editors of this issue) M. Angela Sasse has been saying it even longer. People — ­and developers — ­are finally starting to listen. Many security updates happen automatically so users don’t have to remember to manually update their systems. Opening a Word or Excel document inside Google Docs isolates it from the user’s system so they don’t have to worry about embedded malware. And programs can run in sandboxes that don’t compromise the entire computer. We’ve come a long way, but we have a lot further to go.

“Blame the victim” thinking is older than the Internet, of course. But that doesn’t make it right. We owe it to our users to make the Information Age a safe place for everyone — ­not just those with “security awareness.”

This essay previously appeared in the Sep/Oct issue of IEEE Security & Privacy.

The Cost of Cyberattacks Is Less than You Might Think

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2016/09/the_cost_of_cyb.html

Interesting research from Sasha Romanosky at RAND:

Abstract: In 2013, the US President signed an executive order designed to help secure the nation’s critical infrastructure from cyberattacks. As part of that order, he directed the National Institute for Standards and Technology (NIST) to develop a framework that would become an authoritative source for information security best practices. Because adoption of the framework is voluntary, it faces the challenge of incentivizing firms to follow along. Will frameworks such as that proposed by NIST really induce firms to adopt better security controls? And if not, why? This research seeks to examine the composition and costs of cyber events, and attempts to address whether or not there exist incentives for firms to improve their security practices and reduce the risk of attack. Specifically, we examine a sample of over 12 000 cyber events that include data breaches, security incidents, privacy violations, and phishing crimes. First, we analyze the characteristics of these breaches (such as causes and types of information compromised). We then examine the breach and litigation rate, by industry, and identify the industries that incur the greatest costs from cyber events. We then compare these costs to bad debts and fraud within other industries. The findings suggest that public concerns regarding the increasing rates of breaches and legal actions may be excessive compared to the relatively modest financial impact to firms that suffer these events. Specifically, we find that the cost of a typical cyber incident in our sample is less than $200 000 (about the same as the firm’s annual IT security budget), and that this represents only 0.4% of their estimated annual revenues.

The result is that it often makes business sense to underspend on cybersecurity and just pay the costs of breaches:

Romanosky analyzed 12,000 incident reports and found that typically they only account for 0.4 per cent of a company’s annual revenues. That compares to billing fraud, which averages at 5 per cent, or retail shrinkage (ie, shoplifting and insider theft), which accounts for 1.3 per cent of revenues.

As for reputational damage, Romanosky found that it was almost impossible to quantify. He spoke to many executives and none of them could give a reliable metric for how to measure the PR cost of a public failure of IT security systems.

He also noted that the effects of a data incident typically don’t have many ramifications on the stock price of a company in the long term. Under the circumstances, it doesn’t make a lot of sense to invest too much in cyber security.

What’s being left out of these costs are the externalities. Yes, the costs to a company of a cyberattack are low to them, but there are often substantial additional costs borne by other people. The way to look at this is not to conclude that cybersecurity isn’t really a problem, but instead that there is a significant market failure that governments need to address.

Notes on that StJude/MuddyWatters/MedSec thing

Post Syndicated from Robert Graham original http://blog.erratasec.com/2016/08/notes-on-that-stjudemuddywattersmedsec.html

I thought I’d write up some notes on the StJude/MedSec/MuddyWaters affair. Some references: [1] [2] [3] [4].

The story so far

tl;dr: hackers drop 0day on medical device company hoping to profit by shorting their stock

St Jude Medical (STJ) is one of the largest providers of pacemakers (a.k.a. cardiac devices) in the country, with around $2.5 billion in revenue from them, which accounts for about half their business. They provide “smart” pacemakers with an on-board computer that talks via radio waves to a nearby monitor that records the functioning of the device (and health data). That monitor, “Merlin@Home”, then talks back up to St Jude (via phone lines, 3G cell phone, or wifi). Pretty much all pacemakers work that way (my father’s does, although his is from a different vendor).

MedSec is a bunch of cybersecurity researchers (white-hat hackers) who have been investigating medical devices. In theory, their primary business is to sell their services to medical device companies, to help companies secure their devices. Their CEO is Justine Bone, a long-time white-hat hacker. Despite Muddy Waters garbling the research, there’s no reason to doubt that there’s quality research underlying all this.

Muddy Waters is an investment company known for investigating companies, finding problems like accounting fraud, and profiting by shorting the stock of misbehaving companies.

Apparently, MedSec did a survey of many pacemaker manufacturers, chose the one with the most cybersecurity problems, and went to Muddy Waters with their findings, asking for a share of the profits Muddy Waters got from shorting the stock.

Muddy Waters published their findings in [1] above. St Jude published their response in [2] above. They are both highly dishonest. I point that out because people want to discuss the ethics of using 0day to short stock when we should talk about the ethics of lying.

“Why you should sell the stock” [finance issues]

In this section, I try to briefly summarize Muddy Waters’ argument for why St Jude’s stock will drop. I’m not an expert in this area (though I do a bunch of investment), but their arguments do seem flimsy to me.

Muddy Waters’ argument is that these pacemakers are half of St Jude’s business, and that fixing them will first require recalling them all, then take another two years to fix, during which time they can’t be selling pacemakers. Much of the Muddy Waters paper is taken up explaining this, citing similar medical cases, and so on.

If at all true, and if the cybersecurity claims hold up, then yes, this would be good reason to short the stock. However, I suspect the claims aren’t true — and that Muddy Waters is simply trying to scare people about long-term consequences, allowing it to profit in the short term.

@selenakyle on Twitter suggests this interesting document [4] about market solutions to vuln disclosure, if you are interested in this angle of things.

Update from @lippard: Abbott Labs agreed in April to buy St Jude at $85 a share (when St Jude’s stock was $60/share). Presumably, for this Muddy Waters attack on St Jude’s stock price to profit from anything more than a really short-term stock drop (like dumping their short position today), Muddy Waters would have to believe this effort will cause Abbott Labs to walk away from the deal. Normally, there are penalties for doing so, but material things like massive vulnerabilities in a product should allow Abbott Labs to walk away without penalties.

The 0day being dropped

Well, they didn’t actually drop 0day as such, just claims that 0day exists — that it’s been “demonstrated”. Reading through their document a few times, I’ve created a list of the 0day they found, to the granularity one would expect from CVE numbers (CVE is a group within the Department of Homeland Security that assigns standard reference numbers to discovered vulnerabilities).

The first two, which can kill somebody, are the salient ones. The others are more normal cybersecurity issues, and may be of concern because they can leak HIPAA-protected info.

CVE-2016-xxxx: Pacemaker can be crashed, leading to death
Within a reasonable distance (under 50 feet), pounding the pacemaker with malformed packets over several hours (either from an SDR or a hacked version of the Merlin@Home monitor) can crash it. Sometimes such crashes will brick the device; other times they put it into a state that may kill the patient by zapping the heart too quickly.

CVE-2016-xxxx: Pacemaker power can be drained, leading to death
Within a reasonable distance (under 50 feet) over several days, the pacemaker’s power can slowly be drained at the rate of 3% per hour. While the user will receive a warning from their Merlin@Home monitoring device that the battery is getting low, it’s possible the battery may be fully depleted before they can get to a doctor for a replacement. A non-functioning pacemaker may lead to death.

CVE-2016-xxxx: Pacemaker uses unauthenticated/unencrypted RF protocol
The above two items are possible because there is no encryption nor authentication in the wireless protocol, allowing any evildoer access to the pacemaker device or the monitoring device.
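For contrast, here is a minimal sketch of the kind of per-message authentication the protocol lacks. This is illustrative only: the key size, packet layout, and names are invented, and a real implant would also need pairing, key management, and a power budget for the crypto.

```python
import hmac
import hashlib
import os

KEY = os.urandom(32)  # pairing key shared by pacemaker and monitor (hypothetical)

def send_packet(payload: bytes, counter: int) -> bytes:
    # The MAC covers a monotonically increasing counter plus the payload,
    # so an eavesdropper can neither forge packets nor replay stale ones.
    msg = counter.to_bytes(8, "big") + payload
    return msg + hmac.new(KEY, msg, hashlib.sha256).digest()

def receive_packet(packet: bytes, last_counter: int):
    msg, mac = packet[:-32], packet[-32:]
    if not hmac.compare_digest(mac, hmac.new(KEY, msg, hashlib.sha256).digest()):
        raise ValueError("forged packet")  # evildoer without the key
    counter = int.from_bytes(msg[:8], "big")
    if counter <= last_counter:
        raise ValueError("replayed packet")
    return msg[8:], counter

payload, counter = receive_packet(send_packet(b"telemetry", 1), last_counter=0)
```

With something like this in place, the crash and battery-drain attacks above would require the pairing key, not just proximity and an SDR.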

CVE-2016-xxxx: Merlin@Home contained hard-coded credentials and SSH keys
The password to connect to the St Jude network is the same for all devices, and thus easily reverse engineered.

CVE-2016-xxxx: local proximity wand not required
It’s unclear in the report, but it seems that most other products require a wand in local proximity (inches) in order to enable communication with the pacemaker. This seems like a requirement — otherwise, even with authentication, remote RF would be able to drain the device in the person’s chest.

So these are, as far as I can tell, the explicit bugs they outline. Unfortunately, none are described in detail. I don’t see enough detail for any of these to actually be assigned a CVE number. I’m being generous here, trying to describe them as such and giving them the benefit of the doubt; there’s enough weasel language in there to make me doubt all of them. Though, if the first two prove not to be reproducible, then there will be a great defamation case, so I presume those two are true.

The movie/TV plot scenarios

So if you wanted to use this as a realistic TV/movie plot, here are two of them.
#1 You (the executive of the acquiring company) are meeting with the CEO and executives of a smaller company you want to buy. It’s a family concern, and the CEO really doesn’t want to sell. But you know his/her children want to sell. Therefore, during the meeting, you pull out your notebook and an SDR device and put it on the conference room table. You start running the exploit to crash that CEO’s pacemaker. It crashes; the CEO grabs his/her chest and gets carted off to the hospital. The children continue negotiations, selling off their company.
#2 You are a hacker in Russia going after a target. After many phishing attempts, you finally break into the home desktop computer. From that computer, you branch out and connect to the Merlin@Home devices through the hard-coded password. You then run an exploit from the device, using that device’s own radio, to slowly drain the battery from the pacemaker, day after day, while the target sleeps. You patch the software so it no longer warns the user that the battery is getting low. The battery dies, and a few days later, while the victim is digging a ditch, s/he falls over dead from heart failure.

The Muddy Waters document is crap

There are many ethical issues, but the first should be dishonesty and spin of the Muddy Waters research report.

The report is clearly designed to scare other investors to drop St Jude stock price in the short term so that Muddy Waters can profit. It’s not designed to withstand long term scrutiny. It’s full of misleading details and outright lies.

For example, it keeps stressing how shockingly bad the security vulnerabilities are, such as saying:

We find STJ Cardiac Devices’ vulnerabilities orders of magnitude more worrying than the medical device hacks that have been publicly discussed in the past. 

This is factually untrue. St Jude’s problems are no worse than the 2013 issue where doctors disabled the RF capabilities of Dick Cheney’s pacemaker in response to disclosures. They are no worse than that insulin pump hack. Bad cybersecurity is the norm for medical devices. St Jude may be among the worst, but not by an order of magnitude.

The term “orders of magnitude” is math, by the way, and means “at least 100 times worse”. As an expert, I claim these problems are not even one order of magnitude (10 times worse). I challenge MedSec’s experts to stand behind the claim that these vulnerabilities are at least 100 times worse than other public medical device hacks.

In many places, the language is wishy-washy. Consider this quote:

Despite having no background in cybersecurity, Muddy Waters has been able to replicate in-house key exploits that help to enable these attacks

The semantic content of this is nil. It says they weren’t able to replicate the attacks themselves. They don’t have sufficient background in cybersecurity to understand what they replicated.

Such language is pervasive throughout the document, things that aren’t technically lies, but which aren’t true, either.

Also pervasive throughout the document, interjected for no reason in the middle of the text, are statements like this, stressing why you should sell the stock:

Regardless, we have little doubt that STJ is about to enter a period of protracted litigation over these products. Should these trials reach verdicts, we expect the courts will hold that STJ has been grossly negligent in its product design. (We estimate awards could total $6.4 billion.15)

I point this out because Muddy Waters obviously doesn’t feel the content of the document stands on its own, so that you can make this conclusion yourself. It instead feels the need to repeat this message over and over on every page.

Muddy Waters’ violation of Kerckhoffs’ Principle

One of the most important principles of cybersecurity is Kerckhoffs’ Principle: more openness is better. Or, phrased another way, trying to achieve security through obscurity is bad.

The Muddy Waters document attempts to violate this principle. Besides the individual vulnerabilities, it makes the claim that St Jude’s cybersecurity is inherently bad because it’s open. It uses off-the-shelf chips, standard software (like Linux), and standard protocols. St Jude does nothing to hide or obfuscate these things.

Everyone in cybersecurity would agree this is good. Muddy Waters claims this is bad.

For example, some of their quotes:

One competitor went as far as developing a highly proprietary embedded OS, which is quite costly and rarely seen

In contrast, the other manufacturers have proprietary RF chips developed specifically for their protocols

Again, I challenge MedSec, as the cybersecurity experts in this case, to publicly defend Muddy Waters on these claims.

Medical device manufacturers should do the opposite of what Muddy Waters claims. I’ll explain why.

Either your system is secure or it isn’t. If it’s secure, then making the details public won’t hurt you. If it’s insecure, then making the details obscure won’t help you: hackers are far more adept at reverse engineering than you can possibly understand. Making things obscure, though, does stop helpful hackers (i.e. cybersecurity consultants you hire) from making your system secure, since it’s hard figuring out the details.

Said another way: your adversaries (such as me) hate seeing open systems that are obviously secure. We love seeing obscure systems, because we know you couldn’t possibly have validated their security.

The point is this: Muddy Waters is trying to profit from the public’s misconception about cybersecurity, namely that obscurity is good. The actual principle is that obscurity is bad.

St Jude’s response was no better

In response to the Muddy Waters document, St Jude published this document [2]. It’s equally full of lies — the sort that may deserve a shareholder lawsuit. (I see lawsuits galore over this.) It says the following:

We have examined the allegations made by Capital and MedSec on August 25, 2016 regarding the safety and security of our pacemakers and defibrillators, and while we would have preferred the opportunity to review a detailed account of the information, based on available information, we conclude that the report is false and misleading.

If that’s true, if they can prove this in court, then that will mean they could win millions in a defamation lawsuit against Muddy Waters, and millions more for stock manipulation.

But it’s almost certainly not true. Without authentication/encryption, the fact that hackers can crash/drain a pacemaker is pretty obvious, especially since (as claimed by Muddy Waters) they’ve successfully done it. Specifically, the picture on page 17 of the 34-page Muddy Waters document is a smoking gun of a pacemaker misbehaving.

The rest of their document contains weasel-word denials that may be technically true, but which have no meaning.

St. Jude Medical stands behind the security and safety of our devices as confirmed by independent third parties and supported through our regulatory submissions. 

Our software has been evaluated and assessed by several independent organizations and researchers including Deloitte and Optiv.

In 2015, we successfully completed an upgrade to the ISO 27001:2013 certification.

These are all myths of the cybersecurity industry. Conformance with security standards, such as ISO 27001:2013, has absolutely zero bearing on whether you are secure. Having some consultants/white-hat claim your product is secure doesn’t mean other white-hat hackers won’t find an insecurity.

Indeed, having been assessed by Deloitte is a good indicator that something is wrong. It’s not that they are incompetent (they’ve got some smart people working for them), but ultimately the way the security market works is that you demand such auditors find reasons to believe your product is secure, not that they keep hunting until something insecure is found. That’s why outsiders, like MedSec, are better: they strive to find the reasons your product is insecure. The bigger the enemy, the more resources they’ll put into finding a problem.

It’s like after you get a hair cut, your enemies and your friends will have different opinions on your new look. Enemies are more honest.

The most obvious lie from the St Jude response is the following:

The report claimed that the battery could be depleted at a 50-foot range. This is not possible since once the device is implanted into a patient, wireless communication has an approximate 7-foot range. This brings into question the entire testing methodology that has been used as the basis for the Muddy Waters Capital and MedSec report.

That’s not how wireless works. With directional antennas and amplifiers, 7 feet easily becomes 50 feet or more. Even without that, something designed for reliable operation at 7 feet often works less reliably at 50 feet. There’s no cutoff at 7 feet within which it will work and outside of which it won’t.
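A back-of-the-envelope check supports this. Under idealized free-space propagation (body tissue and clutter will add loss on top, so treat this as the rough shape of the problem, not a precise figure), moving from 7 feet to 50 feet costs about 17 dB of signal, which a modest directional antenna plus amplifier recovers with margin to spare:

```python
import math

def extra_path_loss_db(near_ft: float, far_ft: float) -> float:
    # Free-space path loss grows as 20*log10(distance), so the extra loss
    # from moving farther out depends only on the ratio of the distances.
    return 20 * math.log10(far_ft / near_ft)

loss = extra_path_loss_db(7, 50)
print(f"{loss:.1f} dB extra loss")  # ~17.1 dB

# Hypothetical attacker budget: a 15 dBi directional antenna plus a
# 10 dB amplifier more than covers the 17 dB gap.
print(15 + 10 >= loss)  # True
```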

That St Jude deliberately lies here brings into question their entire rebuttal. (see what I did there?)

ETHICS ETHICS ETHICS

First let’s discuss the ethics of lying, using weasel words, and being deliberately misleading. Both St Jude and Muddy Waters do this, and it’s ethically wrong. I point this out for readers who are uninterested in this and want to get to that other ethical issue. Clear ethical violations that we all agree on interest nobody — but they ought to. We should be lambasting Muddy Waters for their clear ethical violations, not the unclear one.

So let’s get to the ethical issue everyone wants to discuss:

Is it ethical to profit from shorting stock while dropping 0day?

Let’s discuss some of the issues.

There’s no insider trading. Some people wonder if there are insider trading issues. There aren’t. While it’s true that Muddy Waters knew some secrets that nobody else knew, as long as they weren’t insider secrets, it’s not insider trading. In other words, only insiders know about a key customer contract won or lost recently. But vulnerabilities researched by outsiders are still outsider knowledge.

Watching a CEO walk into the building of a competitor is still outsider knowledge — you can trade on the likely merger, even though insider employees cannot.

Dropping 0day might kill/harm people. That may be true, but it’s never, by itself, an ethical reason not to drop it. That’s because it’s not this one event in isolation. If companies knew ethical researchers would never drop an 0day, then they’d never patch it. It’s like the government’s warrantless surveillance of American citizens: the courts won’t let us challenge it, because we can’t prove it exists, and we can’t prove it exists because the courts allow it to be kept secret, because revealing the surveillance would harm national intelligence. The fact that harm may happen shouldn’t stop the right thing from happening.

In other words, in the long run, dropping this 0day doesn’t necessarily harm people — and thus profiting on it is not an ethical issue. We need incentives to find vulns. This moves the debate from an ethical one to more of a factual debate about the long-term/short-term risk from vuln disclosure.

As MedSec points out, St Jude has already proven itself an untrustworthy consumer of vulnerability disclosures. When that happens, dropping 0day is ethically permissible under “responsible disclosure”. Indeed, that St Jude then lied about it in their response justifies, ex post facto, the dropping of the 0day.

No 0day was actually dropped here. In this case, what was dropped was claims of 0day. This may be good or bad, depending on your arguments. It’s good that the vendor will have some extra time to fix the problems before hackers can start exploiting them. It’s bad because we can’t properly evaluate the true impact of the 0day unless we get more detail — allowing Muddy Waters to exaggerate and mislead people in order to move the stock more than is warranted.

In other words, the lack of actual 0day here is the problem — actual 0day would’ve been better.

This 0day is not necessarily harmful. Okay, it is harmful, but it requires close proximity. It’s not as if the hacker can reach out from across the world and kill everyone (barring my movie-plot section above). If you are within 50 feet of somebody, it’s easier shooting, stabbing, or poisoning them.

Shorting on bad news is common. Before we address the issue whether this is unethical for cybersecurity researchers, we should first address the ethics for anybody doing this. Muddy Waters already does this by investigating companies for fraudulent accounting practice, then shorting the stock while revealing the fraud.

Yes, it’s bad that Muddy Waters profits from the misfortunes of others, but it’s those others who are committing fraud — who deserve it. [Snide capitalism trigger warning] To claim this is unethical means you are a typical socialist who believes the State should defend companies, even those doing illegal things, in order to stop illegitimate/windfall profits. Supporting the ethics of this means you are a capitalist, who believes companies should succeed or fail on their own merits — which means bad companies need to fail, and investors in those companies should lose money.

Yes, this is bad for cybersec research. There is constant tension between cybersecurity researchers doing “responsible” (sic) research and companies lobbying Congress to pass laws against it. We saw this recently in how Detroit lobbied for DMCA (copyright) rules to bar security research, and how the DMCA regulators gave us an exemption. MedSec’s action means all medical device manufacturers will now lobby Congress for rules to stop MedSec — and the rest of us security researchers. The lack of public research means medical devices will continue to be flawed, which is worse for everyone.

Personally, I don’t care about this argument. How others might respond badly to my actions is not an ethical constraint on my actions. It’s like speech: that others may be triggered into lobbying for anti-speech laws is still not constraint on what ethics allow me to say.

There were no lies or betrayal in the research. For me, “ethics” is usually a problem of lying, cheating, theft, and betrayal. As long as these things don’t happen, then it’s ethically okay. If MedSec had been hired by St Jude, had promised to keep things private, and then later disclosed them, then we’d have an ethical problem. Or consider this: frequently clients ask me to lie or omit things in pentest reports. It’s an ethical quagmire. The quick answer, by the way, is “can you make that request in writing?”. The long answer is “no”. It’s ethically permissible to omit minor things or do minor rewording, but not when it impinges on my credibility.

A life is worth about $10 million. Most people agree that “you can’t put a value on a human life”, and that those who do are evil. The opposite is true. Should we spend more on airplane safety, breast cancer research, or the military budget to fight ISIS? Each can be measured in the number of lives saved. Should we spend more on breast cancer research, which affects people in their 30s, or solving heart disease, which affects people in their 70s? All these decisions mean putting a value on human life, and sometimes putting different values on human life. Whether or not you think it’s ethical, it’s the way the world works.

Thus, we can measure this disclosure of 0day in terms of potential value of life lost, vs. potential value of life saved.
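Here is what that measurement looks like as arithmetic — a hedged sketch where every number is invented purely for illustration:

```python
VALUE_OF_LIFE = 10_000_000  # dollars, the rough figure above

# Hypothetical probabilities, invented for illustration only.
p_harm_if_disclosed = 0.0001  # chance disclosure enables a lethal attack
p_harm_if_silent = 0.001      # chance the unfixed bugs eventually kill
patients = 100_000            # hypothetical affected-device population

cost_disclose = p_harm_if_disclosed * patients * VALUE_OF_LIFE
cost_silent = p_harm_if_silent * patients * VALUE_OF_LIFE
print(f"expected cost of disclosing: ${cost_disclose:,.0f}")
print(f"expected cost of staying quiet: ${cost_silent:,.0f}")
# With these made-up numbers, disclosure is the lesser expected harm;
# change the inputs and the conclusion flips. That is the whole debate.
```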

Is this market manipulation? This is more of a legal question than an ethical one, but people are discussing it. If the data is true, then it’s not “manipulation” — only if it’s false. As documented in this post, there’s good reason to doubt the complete truth of what Muddy Waters claims. I suspect it’ll cost Muddy Waters more in legal fees in the long run than they could possibly hope to gain in the short run. I recommend investment companies stick to areas of their own expertise (accounting fraud) instead of branching out into things like cyber where they really don’t grasp things.

This is again bad for security research. Frankly, we aren’t a trusted community, because we claim the “sky is falling” too often, and are proven wrong. If this is proven to be market manipulation, as the stock recovers back to its former level and the scary stories of mass product recalls fail to emerge, we’ll be blamed yet again for being wrong. That hurts our credibility.

On the other hand, if any of the scary things Muddy Waters claims actually come to pass, then maybe people will start heeding our warnings.

Ethics conclusion: I’m a die-hard troll, so therefore I’m going to vigorously defend the idea of shorting stock while dropping 0day. (Most of you appear to think it’s unethical — I therefore must disagree with you).  But I’m also a capitalist. This case creates an incentive to drop harmful 0days — but it creates an even greater incentive for device manufacturers not to have 0days to begin with. Thus, despite being a dishonest troll, I do sincerely support the ethics of this.

Conclusion

The two 0days are about crashing the device (killing the patient sooner) or draining the battery (killing them later). Both attacks require hours (if not days) in close proximity to the target. If you can get into the local network (such as through phishing), you might be able to hack the Merlin@Home monitor, which is in close proximity to the target for hours every night.

Muddy Waters thinks the security problems are severe enough that it’ll destroy St Jude’s $2.5 billion pacemaker business. The argument is flimsy. St Jude’s retort is equally flimsy.

My prediction: a year from now we’ll see little change in St Jude’s pacemaker business earnings, while there may be some one-time costs cleaning some stuff up. This will stop the shenanigans of future 0day+shorting, even when it’s valid, because nobody will believe researchers.