Tag Archives: 0day

Why Linus is right (as usual)

Post Syndicated from Robert Graham original http://blog.erratasec.com/2017/11/why-linus-is-right-as-usual.html

People are debating this email from Linus Torvalds (maintainer of the Linux kernel). It has strong language, like:

Some security people have scoffed at me when I say that security
problems are primarily “just bugs”.
Those security people are f*cking morons.
Because honestly, the kind of security person who doesn’t accept that
security problems are primarily just bugs, I don’t want to work with.

I thought I’d explain why Linus is right.
Linus has an unwritten manifesto of how the Linux kernel should be maintained. It’s not written down in one place; instead, we are supposed to reverse engineer it from his scathing emails, where he calls people morons for not understanding it. This is one such scathing email. The rules he’s expressing here are:
  • Large changes to the kernel should happen in small iterative steps, each one thoroughly debugged.
  • Minor security concerns aren’t major emergencies; they don’t allow bypassing the rules more than any other bug/feature.
Last year, some security “hardening” code was added to the kernel to prevent a class of buffer-overflow/out-of-bounds issues. This code didn’t address any particular 0day vulnerability; instead, it was designed to prevent a whole class of potential future vulnerabilities from being exploited. This is reasonable.
This code had bugs, but that’s no sin. All code has bugs.
The sin, from Linus’s point of view, is that when an overflow/out-of-bounds access was detected, the code would kill the user-mode process or kernel. Linus thinks it should have only generated warnings, and let the offending code continue to run.
Of course, that would in theory make the change of little benefit, because it would no longer prevent 0days from being exploited.
But warnings would only be temporary, a first step. There are likely to be bugs in any large code change, and it would probably uncover bugs in other code as well. While bounds-checking is a security feature, its first deployment always finds existing code with latent bounds bugs. Or it produces “false positives”, triggering on things that aren’t actually the flaws it’s looking for. Killing things made these bugs worse, causing catastrophic failures in the latest kernel that didn’t exist before. Warnings would have highlighted the bugs just as well, without the catastrophic failures. My car runs multiple copies of Linux — such catastrophic failures would risk my life.
Only after a year, when the bugs have been fixed, would the default behavior of the code be changed to kill buggy code, thus preventing exploitation.
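To make the two policies concrete, here is a minimal userspace sketch of the idea, not the actual kernel patch; the helper name and the warn_only flag are hypothetical, purely for illustration. The same detection logic can either warn and continue or kill the process, and flipping one default is all that separates the proposed first-year behavior from the eventual hardened behavior.

    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    /* Hypothetical hardening check illustrating the two policies: the same
     * detection logic can warn-and-continue or kill the process outright. */
    static int warn_only = 1;   /* first year: warn; later, flip to 0 and kill */

    static size_t checked_copy(char *dst, size_t dst_len, const char *src, size_t n)
    {
        if (n > dst_len) {                      /* out-of-bounds copy detected */
            fprintf(stderr, "bounds violation: %zu bytes into a %zu-byte buffer\n",
                    n, dst_len);
            if (!warn_only)
                abort();                        /* the "kill" policy */
            n = dst_len;                        /* the "warn" policy: clamp and go on */
        }
        memcpy(dst, src, n);
        return n;
    }

    int main(void)
    {
        char buf[8];
        /* A latent bug of the kind such checks uncover: the caller asks for more
         * bytes than fit. With warn_only set, the bug gets reported and the
         * program keeps running; without it, this line kills the whole process. */
        checked_copy(buf, sizeof(buf), "hello, world", 13);
        puts("still running");
        return 0;
    }

In kernel terms, the warn path corresponds to logging and carrying on, while the kill path corresponds to taking down the offending process or the whole machine.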
In other words, large changes to the kernel should happen in small, manageable steps. This hardening hasn’t existed for 25 years of the Linux kernel, so there’s no emergency requiring it be added immediately rather than conservatively, no reason to bypass Linus’s development processes. There’s no reason it couldn’t have been warnings for a year while working out problems, followed by killing buggy code later.
Linus was correct here. No vuln has appeared in the past year that this code would’ve stopped, so killing processes/kernels rather than generating warnings bought nothing. Conversely, because it killed things, bugs in the kernel code were costly and required emergency patches.
Despite his unreasonable tone, Linus is a hugely reasonable person. He’s not trying to stop changes to the kernel. He’s not trying to stop security improvements. He’s not even trying to stop processes from getting killed. That’s not why people are moronic. Instead, they are moronic for not understanding that large changes need to be made conservatively, and that security issues are no more important than any other feature/bug.

Update: Also, since most security people aren’t developers, they are a bit clueless about how things actually work. Bounds-checking, which they define as purely a security feature to stop buffer-overflows, is actually overwhelmingly a debugging feature. When you turn on bounds-checking for the first time, it’ll trigger on a lot of latent bugs in the code — things that never caused a problem in the past (like reading past the ends of buffers) but cause trouble now. Developers know this; security “experts” tend not to. These kernel changes were made by security people who failed to understand this, who failed to realize that their changes would uncover lots of bugs in existing code, and that killing buggy code was hugely inappropriate.

Update: Another flaw developers are intimately familiar with is how “hardening” code can cause false positives, triggering on non-buggy code. A good example is where BIND9 crashed on an improper assert(). That hardening code, designed to prevent exploitation, made things worse by triggering on valid input/code.
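A hypothetical illustration of that failure mode (this is not the actual BIND9 code, just a sketch of the pattern): an assert() added as hardening encodes an assumption about what valid input looks like, so a legal but unusual input aborts the whole process instead of being handled.

    #include <assert.h>
    #include <stdio.h>
    #include <string.h>

    /* Hypothetical "hardened" parser: the author assumed no label would ever be
     * longer than 32 bytes and turned that assumption into an assert(). */
    static void process_label(const char *label)
    {
        assert(strlen(label) <= 32);   /* false positive: valid input can exceed this */
        printf("processing: %s\n", label);
    }

    int main(void)
    {
        /* DNS labels may legally be up to 63 bytes, so this valid 41-byte label
         * trips the assert and aborts the daemon instead of being processed. */
        process_label("a-perfectly-legal-but-unusually-long-name");
        return 0;
    }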

Update: No, it’s probably not okay to call people “morons” as Linus does. They may be wrong, but they usually are reasonable people. On the other hand, security people tend to be sanctimonious bastards with rigid thinking, so after he has dealt with that minority, I can see why Linus treats all security people that way.

"Responsible encryption" fallacies

Post Syndicated from Robert Graham original http://blog.erratasec.com/2017/10/responsible-encryption-fallacies.html

Deputy Attorney General Rod Rosenstein gave a speech recently calling for “Responsible Encryption” (aka. “Crypto Backdoors”). It’s full of dangerous ideas that need to be debunked.

The importance of law enforcement

The first third of the speech talks about the importance of law enforcement, as if it’s the only thing standing between us and chaos. It cites the 2016 Mirai attacks as an example of the chaos that will only get worse without stricter law enforcement.

But the Mirai case demonstrated the opposite: how law enforcement is not needed. They made no arrests in the case. A year later, they still haven’t a clue who did it.

Conversely, we technologists have fixed the major infrastructure issues. Specifically, those affected by the DNS outage have moved to multiple DNS providers, including high-capacity providers like Google and Amazon that can handle such large attacks easily.

In other words, we the people fixed the major Mirai problem, and law-enforcement didn’t.

Moreover, instead of being a solution to cyber threats, law enforcement has become a threat itself. The DNC likely didn’t have the FBI investigate the attacks from Russia because they didn’t want the FBI reading all their files and finding wrongdoing by the DNC. It’s not that they did anything actually wrong; it’s more like that famous quote attributed to Richelieu: “Give me six lines written by the most honest of men and I’ll find something in them to hang him by.” Give all your internal emails over to the FBI and I’m certain they’ll find something to hang you by, if they want.
Or consider the case of Andrew Auernheimer. He found that AT&T’s website exposed the account details of early iPad owners, so he copied some down and posted them to a news site. AT&T had denied the problem, so making it public was the only way to force them to fix it. Such access to the website was legal, because AT&T had made the data public. However, prosecutors disagreed. In order to protect the powerful, they twisted and perverted the law to put Auernheimer in jail.

It’s not that law enforcement is bad, it’s that it’s not the unalloyed good Rosenstein imagines. When law enforcement becomes the thing Rosenstein describes, it means we live in a police state.

Where law enforcement can’t go

Rosenstein repeats the frequent claim in the encryption debate:

Our society has never had a system where evidence of criminal wrongdoing was totally impervious to detection

Of course our society has places “impervious to detection”, protected by both legal and natural barriers.

An example of a legal barrier is how spouses can’t be forced to testify against each other. This barrier is impervious.

A better example, though, is how so much of government, intelligence, the military, and law enforcement itself is impervious. If prosecutors could gather evidence everywhere, then why isn’t Rosenstein prosecuting those guilty of CIA torture?

Oh, you say, government is a special exception. If that were the case, then why did Rosenstein dedicate a precious third of his speech to discussing the “rule of law” and how it applies to everyone, “protecting people from abuse by the government”? It obviously doesn’t: there’s one rule for the government and a different rule for the people, and the rule for the government means there are lots of places law enforcement can’t go to gather evidence.

Likewise, the crypto backdoor Rosenstein is demanding for citizens doesn’t apply to the President, Congress, the NSA, the Army, or Rosenstein himself.

Then there are the natural barriers. The police can’t read your mind. They can only get the evidence that is there, like partial fingerprints, which are far less reliable than full fingerprints. They can’t go backwards in time.

I mention this because encryption is a natural barrier. It’s their job to overcome this barrier if they can, to crack crypto and so forth. It’s not our job to do it for them.

It’s like the camera that increasingly comes with TVs for video conferencing, or the microphone on Alexa-style devices that is always recording. This suddenly creates evidence that the police want our help in gathering, such as having the camera turned on all the time, recording to disk, in case the police later get a warrant to peer backward in time at what happened in our living rooms. The “nothing is impervious” argument applies here as well, and it’s equally bogus here. By not helping the police, by not recording our own activities, we aren’t somehow breaking some long-standing tradition.

And this is the scary part. It’s not that we are breaking some ancient tradition that there’s no place the police can’t go (with a warrant). Instead, crypto backdoors break the tradition that never before have I been forced to help the police eavesdrop on me, even before I’m a suspect, even before any crime has been committed. Sure, laws like CALEA force the phone companies to help the police against wrongdoers — but here Rosenstein is insisting I help the police against myself.

Balance between privacy and public safety

Rosenstein repeats the frequent claim that encryption upsets the balance between privacy/safety:

Warrant-proof encryption defeats the constitutional balance by elevating privacy above public safety.

This is laughable, because technology has swung the balance alarmingly in favor of law enforcement. Far from “Going Dark” as his side claims, the problem we are confronted with is “Going Light”, where the police state monitors our every action.

You are surrounded by recording devices. If you walk down the street in town, outdoor surveillance cameras feed police facial recognition systems. If you drive, automated license plate readers can track your route. If you make a phone call or use a credit card, the police get a record of the transaction. If you stay in a hotel, they demand your ID, for law enforcement purposes.

And that’s their stuff, which is nothing compared to your stuff. You are never far from a recording device you own, such as your mobile phone, TV, Alexa/Siri/OkGoogle device, laptop. Modern cars from the last few years increasingly have always-on cell connections and data recorders that record your every action (and location).

Even if you hike out into the country, when you get back, the FBI can subpoena your GPS device to track down your hidden weapons cache, or grab the photos from your camera.

And this is all offline. So much of what we do is now online. Of the photographs you own, fewer than 1% are printed out, the rest are on your computer or backed up to the cloud.

Your phone is also a GPS recorder of your exact position all the time, which, if the government wins the Carpenter case, the police can grab without a warrant. Tagging all citizens with a recording device of their position is not “balance” but the premise for a novel more dystopian than 1984.

If suspected of a crime, which would you rather the police searched? Your person, houses, papers, and physical effects? Or your mobile phone, computer, email, and online/cloud accounts?

The balance of privacy and safety has swung so far in favor of law enforcement that rather than debating whether they should have crypto backdoors, we should be debating how to add more privacy protections.

“But it’s not conclusive”

Rosenstein defends against the “going light” (“Golden Age of Surveillance”) argument by pointing out that such data isn’t always enough for a conviction. Nothing secures a conviction like a person’s own words admitting to the crime, captured by surveillance. The other data, while copious, often fails to convince a jury beyond a reasonable doubt.
This is nonsense. Police got along well enough before the digital age, before such widespread messaging. They solved terrorist and child abduction cases just fine in the 1980s. Sure, somebody’s GPS location isn’t by itself enough — until you go there and find all the buried bodies, which leads to a conviction. “Going dark” imagines that somehow, the evidence they’ve been gathering for centuries is going away. It isn’t. It’s still here, and matches up with even more digital evidence.
Conversely, a person’s own words are not as conclusive as you might think. There’s always missing context. We quickly get back to the Richelieu “six lines” problem, where captured communications are twisted to convict people, with defense lawyers trying to untwist them.

Rosenstein’s claim may be true, that a lot of criminals will go free because the other electronic data isn’t convincing enough. But I’d need to see that claim backed up with hard studies, not thrown out for emotional impact.

Terrorists and child molesters

You can always tell the lack of seriousness of law enforcement when they bring up terrorists and child molesters.
To be fair, sometimes we do need to talk about terrorists. There are things unique to terrorism where we may need to give government explicit powers to address those unique concerns. For example, the NSA buys mobile phone 0day exploits in order to hack terrorist leaders in tribal areas. This is a good thing.
But when terrorists use encryption the same way everyone else does, it’s not a unique reason to sacrifice our freedoms and give the police extra powers. Either it’s a good idea for all crimes or for no crimes — there’s nothing about terrorism that makes it an exceptional crime. Dead people are dead. Any rational view of the problem relegates terrorism to being a minor problem. More citizens have died since 9/11 from their own furniture than from terrorism. According to studies, the hot water from your tap is more of a threat to you than terrorists.
Yes, government should do what it can to protect us from terrorists, but no, the threat is not so severe that it requires the imposition of a military/police state. When people use terrorism to justify their actions, it’s because they are trying to form a military/police state.
A similar argument works with child porn. Here’s the thing: the pervs aren’t exchanging child porn using the services Rosenstein wants to backdoor, like Apple’s FaceTime or Facebook’s WhatsApp. Instead, they are exchanging it using custom services they build themselves.
Again, I’m (mostly) on the side of the FBI. I support their idea of buying 0day exploits in order to hack the web browsers of visitors to the secret “PlayPen” site. This is something that’s narrow to this problem and doesn’t endanger the innocent. On the other hand, their calls for crypto backdoors endanger the innocent while doing effectively nothing to address child porn.
Terrorists and child molesters are a clichéd, non-serious excuse to appeal to our emotions to give up our rights. We should not give in to such emotions.

Definition of “backdoor”

Rosenstein claims that we shouldn’t call backdoors “backdoors”:

No one calls any of those functions [like key recovery] a “back door.”  In fact, those capabilities are marketed and sought out by many users.

He’s partly right in that we rarely refer to PGP’s key escrow feature as a “backdoor”.

But that’s because the term “backdoor” refers less to how it’s done and more to who is doing it. If I set up a recovery password with Apple, I’m the one doing it to myself, so we don’t call it a backdoor. If it’s the police, spies, hackers, or criminals, then we call it a “backdoor” — even if it’s identical technology.
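Here is a minimal sketch of why the distinction is about the key holder rather than the mechanism, using the open-source libsodium library; the keypair and the 32-byte file key are made up for illustration. The wrapping code is byte-for-byte the same whether the escrow keypair belongs to the user (a recovery feature) or to law enforcement (a backdoor).

    #include <sodium.h>
    #include <stdio.h>

    int main(void)
    {
        if (sodium_init() < 0)
            return 1;

        /* Whoever generates and holds this keypair decides what we call the
         * feature: the user (recovery) or the government (backdoor). */
        unsigned char escrow_pk[crypto_box_PUBLICKEYBYTES];
        unsigned char escrow_sk[crypto_box_SECRETKEYBYTES];
        crypto_box_keypair(escrow_pk, escrow_sk);

        /* A made-up 32-byte file/disk key we want to be able to recover later. */
        unsigned char file_key[32];
        randombytes_buf(file_key, sizeof(file_key));

        /* Wrap (escrow) the file key to the escrow public key. */
        unsigned char wrapped[crypto_box_SEALBYTES + sizeof(file_key)];
        crypto_box_seal(wrapped, file_key, sizeof(file_key), escrow_pk);

        /* Later, whoever holds the escrow secret key can recover the file key. */
        unsigned char recovered[sizeof(file_key)];
        if (crypto_box_seal_open(recovered, wrapped, sizeof(wrapped),
                                 escrow_pk, escrow_sk) != 0)
            return 1;

        puts("escrow key holder recovered the file key");
        return 0;
    }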

Wikipedia uses the key escrow feature of the 1990s Clipper Chip as a prime example of what everyone means by “backdoor”. By “no one”, Rosenstein is including Wikipedia, which is obviously incorrect.

Though in truth, it’s not going to be the same technology. The needs of law enforcement are different than my personal key escrow/backup needs. In particular, there are unsolvable problems, such as a backdoor that works for the “legitimate” law enforcement in the United States but not for the “illegitimate” police states like Russia and China.

I feel for Rosenstein, because the term “backdoor” does have a pejorative connotation, which can be considered unfair. But that’s like saying the word “murder” is a pejorative term for killing people, or “torture” is a pejorative term for torture. The bad connotation exists because we don’t like government surveillance. I mean, honestly, calling this a “government surveillance feature” is likewise pejorative, and likewise exactly what we are talking about.

Providers

Rosenstein focuses his arguments on “providers”, like Snapchat or Apple. But this isn’t the question.

The question is whether a “provider” like Telegram, a Russian company beyond US law, provides this feature. Or, by extension, whether individuals should be free to install whatever software they want, regardless of provider.

Telegram is a Russian company that provides end-to-end encryption. Anybody can download their software in order to communicate so that American law enforcement can’t eavesdrop. They aren’t going to put in a backdoor for the U.S. If we succeed in putting backdoors in Apple and WhatsApp, all this means is that criminals are going to install Telegram.

If, for some reason, the US is able to convince all such providers (including Telegram) to install a backdoor, it still doesn’t solve the problem, as users can just build their own end-to-end encryption app that has no provider. It’s like email: some use major providers like GMail, others set up their own email server.
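As a concrete illustration of the “no provider” point, here is a minimal sketch of end-to-end encryption built directly on the open-source libsodium library; the names and the message are placeholders, and a real app would still need key exchange and transport, but no provider is involved anywhere in the cryptography.

    #include <sodium.h>
    #include <stdio.h>
    #include <string.h>

    int main(void)
    {
        if (sodium_init() < 0)
            return 1;

        /* Each party generates its own keypair; no service ever sees secret keys. */
        unsigned char alice_pk[crypto_box_PUBLICKEYBYTES], alice_sk[crypto_box_SECRETKEYBYTES];
        unsigned char bob_pk[crypto_box_PUBLICKEYBYTES],   bob_sk[crypto_box_SECRETKEYBYTES];
        crypto_box_keypair(alice_pk, alice_sk);
        crypto_box_keypair(bob_pk, bob_sk);

        const char *msg = "meet at the usual place";
        unsigned char nonce[crypto_box_NONCEBYTES];
        unsigned char cipher[crypto_box_MACBYTES + 64];
        randombytes_buf(nonce, sizeof(nonce));

        /* Alice encrypts to Bob's public key; only Bob's secret key can open it. */
        crypto_box_easy(cipher, (const unsigned char *)msg, strlen(msg),
                        nonce, bob_pk, alice_sk);

        /* Bob decrypts and authenticates using Alice's public key. */
        unsigned char plain[64];
        if (crypto_box_open_easy(plain, cipher, crypto_box_MACBYTES + strlen(msg),
                                 nonce, alice_pk, bob_sk) != 0)
            return 1;
        plain[strlen(msg)] = '\0';

        printf("Bob decrypted: %s\n", plain);
        return 0;
    }

Any ciphertext that travels over GMail, Telegram, or a homemade server in between is opaque to that carrier, which is why a backdoor mandate aimed at providers cannot reach this code.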

Ultimately, this means that any law mandating “crypto backdoors” is going to target users, not providers. Rosenstein tries to make a comparison with what plain-old telephone companies have to do under old laws like CALEA, but that’s not what’s happening here. Instead, for such rules to have any effect, they have to punish users for what they install, not providers.

This continues the argument I made above. Government backdoors are not something that forces Internet services to eavesdrop on us — they force us to help the government spy on ourselves.
Rosenstein tries to address this by pointing out that it’s still a win if major providers like Apple and Facebook are forced to add backdoors, because they are the most popular, and some terrorists/criminals won’t move to alternate platforms. This is false. People with good intentions, who are unfairly targeted by police states where abuse is rampant, are the ones who will keep using the backdoored products. Those with bad intentions, who know they are guilty, will move to the safe products. Indeed, Telegram is already popular among terrorists because they believe American services are already all backdoored.
Rosenstein is essentially demanding the innocent get backdoored while the guilty don’t. This seems backwards. This is backwards.

Apple is morally weak

The reason I’m writing this post is that Rosenstein makes a few claims that cannot be ignored. One of them is his description of how Apple responded to government insistence on weakening encryption by doing the opposite: strengthening encryption. He reasons this happens because:

Of course they [Apple] do. They are in the business of selling products and making money. 

We [the DoJ] use a different measure of success. We are in the business of preventing crime and saving lives. 

He swells with self-importance. His condescending tone ennobles himself while debasing others. But this isn’t how things work. He’s not some white knight above the peasantry, protecting us. He’s a beat cop, a civil servant, who serves us.

A better phrasing would have been:

They are in the business of giving customers what they want.

We are in the business of giving voters what they want.

Both sides are doing the same thing: giving people what they want. Yes, voters want safety, but they also want privacy. Rosenstein imagines that he’s free to ignore our demands for privacy as long as he’s fulfilling his duty to protect us. He has explicitly rejected what people want: “we use a different measure of success”. He imagines it’s his job to tell us where the balance between privacy and safety lies. That’s not his job; that’s our job. We, the people (and our representatives), make that decision, and his job is to do what he’s told. His measure of success is how well he fulfills our wishes, not how well he satisfies his imagined criteria.

That’s why those of us on this side of the debate doubt the good intentions of people like Rosenstein. He criticizes Apple for wanting to protect our rights/freedoms, and declares that he measures success differently.

They are willing to be vile

Rosenstein makes this argument:

Companies are willing to make accommodations when required by the government. Recent media reports suggest that a major American technology company developed a tool to suppress online posts in certain geographic areas in order to embrace a foreign government’s censorship policies. 

Let me translate this for you:

Companies are willing to acquiesce to vile requests made by police-states. Therefore, they should acquiesce to our vile police-state requests.

What Rosenstein is admitting here is that his requests are those of a police-state.

Constitutional Rights

Rosenstein says:

There is no constitutional right to sell warrant-proof encryption.

Maybe. It’s something the courts will have to decide. There are many 1st, 2nd, 3rd, 4th, and 5th Amendment issues here.
The reason we have the Bill of Rights is because of the abuses of the British Government. For example, they quartered troops in our homes, as a way of punishing us, and as a way of forcing us to help in our own oppression. The troops weren’t there to defend us against the French, but to defend us against ourselves, to shoot us if we got out of line.

And that’s what crypto backdoors do. We are forced to be agents of our own oppression. The principles Rosenstein enumerates apply to a wide range of additional surveillance. With little change, his speech could equally argue that the constant TV surveillance of 1984 should be made law.

Let’s go back and look at Apple. It is not some base company exploiting consumers for profit. Apple doesn’t have guns; it cannot make people buy its products. If Apple doesn’t provide customers what they want, then customers vote with their feet and go buy an Android phone. Apple isn’t providing encryption/security in order to make a profit — it’s giving customers what they want in order to stay in business.
Conversely, if we citizens don’t like what the government does, tough luck: they’ve got the guns to enforce their edicts. We can’t easily vote with our feet and walk to another country. A “democracy” is far less democratic than capitalism. Apple is a minority, selling phones to 45% of the population, and that’s fine; the minority get the phones they want. In a democracy, where citizens vote on the issue, those 45% are screwed, as the 55% impose their will, unwanted, onto the remainder.

That’s why we have the Bill of Rights: to protect the 49% against abuse by the 51%. Regardless of whether the Supreme Court agrees it’s in the current Constitution, it’s the sort of right that ought to exist regardless of what the Constitution says.

Obliged to speak the truth

Here is another part of his speech that I feel cannot be ignored. We have to discuss this:

Those of us who swear to protect the rule of law have a different motivation.  We are obliged to speak the truth.

The truth is that “going dark” threatens to disable law enforcement and enable criminals and terrorists to operate with impunity.

This is not true. Sure, he’s obliged to tell the absolute truth in court. He’s also obliged to be truthful in general about facts in his personal life, such as not lying on his tax return (the sort of thing that can get lawyers disbarred).

But he’s not obliged to tell his spouse his honest opinion whether that new outfit makes them look fat. Likewise, Rosenstein knows his opinion on public policy doesn’t fall into this category. He can say with impunity that either global warming doesn’t exist, or that it’ll cause a biblical deluge within 5 years. Both are factually untrue, but it’s not going to get him fired.

And this particular claim is also exaggerated bunk. While everyone agrees encryption makes law enforcement’s job harder than with backdoors, nobody honestly believes it can “disable” law enforcement. While everyone agrees that encryption helps terrorists, nobody believes it can enable them to act with “impunity”.

I feel bad here. It’s a terrible thing to question your opponent’s character this way. But Rosenstein made it unavoidable when he clearly, with no ambiguity, put his integrity as Deputy Attorney General on the line behind the statement that “going dark threatens to disable law enforcement and enable criminals and terrorists to operate with impunity”. I feel it’s a bald-faced lie, but you don’t need to take my word for it. Read his own words yourself and judge his integrity.

Conclusion

Rosenstein’s speech includes repeated references to ideas like “oath”, “honor”, and “duty”. It reminds me of Col. Jessup’s speech in the movie “A Few Good Men”.

If you’ll recall, it was a rousing speech: “you want me on that wall” and “you use words like honor as a punchline”. Of course, since he was violating his oath and sending two privates to death row in order to avoid being held accountable, it was Jessup himself who was crapping on the concepts of “honor”, “oath”, and “duty”.

And so is Rosenstein. He imagines himself on that wall, doing admittedly terrible things, justified by his duty to protect citizens. He imagines that he is the honorable one, while the rest of us are not, even as he utters bald-faced lies to further his own power and authority.

We activists oppose crypto backdoors not because we lack honor, or because we are criminals, or because we support terrorists and child molesters. It’s because we value privacy and fear government officials who get corrupted by power. It’s not that we fear Trump becoming a dictator; it’s that we fear bureaucrats at Rosenstein’s level becoming drunk on authority — which Rosenstein demonstrably has become. His speech is a long train of corrupt ideas pursuing the same object of despotism — a despotism we oppose.

In other words, we oppose crypto backdoors because they are not a tool of law enforcement, but a tool of despotism.

Is DefCon Wifi safe?

Post Syndicated from Robert Graham original http://blog.erratasec.com/2017/07/is-defcon-wifi-safe.html

DEF CON is the largest U.S. hacker conference that takes place every summer in Las Vegas. It offers WiFi service. Is it safe?

Probably.

The trick is that you need to download the certificate from https://wifireg.defcon.org and import it into your computer. They have instructions for all your various operating systems. For macOS, it was as simple as downloading “dc25.mobileconfig” and importing it.

I haven’t validated that the DefCon team did the right thing for all platforms, but I know that safety is possible. If a hacker could easily hack into arbitrary WiFi, then equipment vendors would fix it. Corporations widely use WiFi — they couldn’t do this if it weren’t safe.

The first step in safety is encryption, obviously. WPA does encryption well, so you are good there.

The second step is authentication — proving that the access-point is who it says it is. Otherwise, somebody could set up their own access-point claiming to be “DefCon”, and you’d happily connect to it. An encrypted connection to an evil access-point doesn’t help you. This is what the certificate you download does: you import it into your system, so that you’ll trust only the “DefCon” access-point that holds the matching private key.

That’s not to say you are completely safe. There’s a known vulnerability in the Broadcom WiFi chip embedded in many devices, including iPhones and Android phones. If you have one of these devices, you should either upgrade your software with a fix or disable WiFi.

There may also be unknown vulnerabilities in WiFi stacks. The Broadcom bug shows that after a couple of decades, we still haven’t solved the problem of simple buffer overflows in WiFi stacks/drivers. Thus, some hacker may have an unknown 0day vulnerability they are using to hack you.

Of course, this applies to any WiFi usage anywhere. Frankly, if I had such an 0day, I wouldn’t use it at DefCon. Along with black-hat hackers, DefCon is full of white-hat researchers monitoring the WiFi — looking for hackers using exploits. They are likely to discover the 0day and report it. Thus, I’d rather use such 0days in international airports, catching business types and getting into their company secrets. Or targeting government types.

So it’s impossible to guarantee any security. But what the DefCon network team has done looks right, the same sort of thing corporations do to secure themselves, so you are probably secure.

On the other hand, don’t use “DefCon-Open” — not only is it insecure, there are explicitly a ton of hackers spying on it at the “Wall of Sheep” to point out the “sheep” who don’t secure their passwords.

[$] User=0day considered harmful in systemd

Post Syndicated from jake original https://lwn.net/Articles/727490/rss

Validating user input is a long-established security best practice, but there can be differences of opinion about what should be done when that validation fails. A recently reported bug in systemd has fostered a discussion on that topic; along the way there has also been discussion about how much validation systemd should actually be doing and how much should be left up to the underlying distribution. The controversy all revolves around usernames that systemd does not accept, but that some distributions (and POSIX) find to be perfectly acceptable.

Some non-lessons from WannaCry

Post Syndicated from Robert Graham original http://blog.erratasec.com/2017/06/some-non-lessons-from-wannacry.html

This piece by Bruce Schneier needs debunking. I thought I’d list the things wrong with it.

The NSA 0day debate

Schneier’s description of the problem is deceptive:

When the US government discovers a vulnerability in a piece of software, however, it decides between two competing equities. It can keep it secret and use it offensively, to gather foreign intelligence, help execute search warrants, or deliver malware. Or it can alert the software vendor and see that the vulnerability is patched, protecting the country — and, for that matter, the world — from similar attacks by foreign governments and cybercriminals. It’s an either-or choice.

The government doesn’t “discover” vulnerabilities accidentally. Instead, when the NSA has a need for something specific, it acquires the 0day, either through internal research or (more often) buying from independent researchers.

The value of something is what you are willing to pay for it. If the NSA comes across a vulnerability accidentally, then the value to them is nearly zero. Obviously such vulns should be disclosed and fixed. Conversely, if the NSA is willing to pay $1 million to acquire a specific vuln for imminent use against a target, the offensive value is much greater than the fix value.

What Schneier is doing is deliberately confusing the two, conflating the policy for accidentally found vulns with the policy for deliberately acquired vulns.

The above paragraph should read instead:

When the government discovers a vulnerability accidentally, it then decides to alert the software vendor to get it patched. When the government decides it needs a vuln for a specific offensive use, it acquires one that meets its needs, uses it, and keeps it secret. After spending so much money acquiring an offensive vuln, it would obviously be stupid to reverse that decision and not use it offensively.

Hoarding vulns

Schneier also says the NSA is “hoarding” vulns. The word has a couple inaccurate connotations.
One connotation is that the NSA is putting them on a heap inside a vault, not using them. The opposite is true: the NSA only acquires vulns for which it has an active need. It uses pretty much all the vulns it acquires. That can be seen in the ShadowBroker dump: all the vulns listed are extremely useful to attackers, especially ETERNALBLUE. Efficiency is important to the NSA. Your efficiency is your basis for promotion. There are other people who make their careers finding waste in the NSA. If you are hoarding vulns and not using them, you’ll quickly get ejected from the NSA.
Another connotation is that the NSA is somehow keeping the vulns away from vendors. That’s like saying I’m hoarding naked selfies of myself. Yes, technically I’m keeping them away from you, but it’s not like they ever belonged to you in the first place. The same is true of the NSA. Had it never acquired the ETERNALBLUE 0day, the bug never would’ve been researched, never found.

The VEP

Schneier describes the “Vulnerability Equities Process” or “VEP”, a process that is supposed to manage the vulnerabilities the government gets.

There’s no evidence the VEP process has ever been used, at least not with 0days acquired by the NSA. The VEP allows exceptions for important vulns, and all the NSA vulns are important, so all are excepted from the process. Since the NSA is in charge of the VEP, of course, this is at the sole discretion of the NSA. Thus, the entire point of the VEP process goes away.

Moreover, it can’t work in many cases. The vulns acquired by the NSA often come with clauses that mean they can’t be shared.

New classes of vulns

One reason sellers forbid 0days from being shared is because they use new classes of vulnerabilities, such that sharing one 0day will effectively ruin a whole set of vulnerabilities. Schneier poo-poos this because he doesn’t see new classes of vulns in the ShadowBroker set.
This is wrong for two reasons. The first is that the ShadowBroker 0days are incomplete. There are no iOS exploits, for example, and we know that iOS is a big target of the NSA.
Secondly, I’m not sure we’ve sufficiently analyzed the ShadowBroker exploits yet to realize there may be a new class of vuln. It’s easy to miss the fact that a single bug we see in the dump may actually be a whole new class of vulnerability. In the past, it’s often been the case that a new class was named only after finding many examples.
In any case, Schneier misses the point by denying that new classes of vulns exist. He should instead use the point to argue the value of disclosure: instead of playing whack-a-mole fixing bugs one at a time, vendors would be able to fix whole classes of bugs at once.

Rediscovery

Schneier cites two studies that looked at how often vulnerabilities get rediscovered. In other words, he’s trying to measure the likelihood that some other government will find the bug and use it against us.
These studies are weak, scarcely better than anecdotal evidence. Schneier’s own study seems almost unrelated to the problem, and the RAND study cannot be replicated, as it relies upon private data. Also, there is little differentiation between important bugs (like SMB/MSRPC exploits and full-chain iOS exploits) and lesser bugs.
Whether from the RAND study or from anecdotes, we have good reason to believe that the longer an 0day exists, the less likely it’ll be rediscovered. Schneier argues that vulns should only be used for 6 months before being disclosed to a vendor. Anecdotes suggest otherwise: if it hasn’t been rediscovered in the first year, it likely won’t ever be.
The RAND study was overwhelmingly clear on the issue that 0days are dramatically more likely to become obsolete than to be rediscovered. The latest update to iOS will break an 0day rather than somebody else rediscovering it. Win10 adoption will break older SMB exploits faster than rediscovery will.
In any case, this post is about ETERNALBLUE specifically. What we learned from this specific bug is that it was used for at least 5 years without anybody else rediscovering it (before it was leaked). Chances are good it never would’ve been rediscovered, just made obsolete by Win10.

Notification is notification

All disclosure has the potential of leading to worms like WannaCry. The Conficker worm of 2008, for example, was written after Microsoft patched the underlying vulnerability.
Thus, had the NSA disclosed the bug in the normal way, chances are good it still would’ve been used for worming ransomware.
Yes, WannaCry had a head-start because ShadowBrokers published a working exploit, but this doesn’t appear to have made a difference. The Blaster worm (the first worm to compromise millions of computers) took roughly the same amount of time to create, and almost no details were made public about the vulnerability, other than the fact it was patched. (I know from personal experience — we used diff to find what changed in the patch in order to reverse engineer the 0day).
In other words, the damage the NSA is responsible for isn’t really the damage that came after it was patched — that was likely to happen anyway, as it does with normal vuln disclosure. Instead, the only damage the NSA can truly be held responsible for is the damage ahead of time, such as the months (years?) the ShadowBrokers possessed the exploits before they were patched.

Disclosed doesn’t mean fixed

One thing we’ve learned from 30 years of disclosure is that vendors ignore bugs.
We’ve gotten to the state where a few big companies like Microsoft and Apple will actually fix bugs, but the vast majority of vendors won’t. Even Microsoft and Apple have been known to sit on tricky bugs for over a year before fixing them.
And the only reason Microsoft and Apple have gotten to this state is because we, the community, bullied them into it. When we disclose bugs to them, we give them a deadline before we make the bug public, whether or not it’s been fixed.
The same goes for the NSA. If they quietly disclose bugs to vendors, in general, they won’t be fixed unless the NSA also makes the bug public within a certain time frame. Either Schneier has to argue that the NSA should do such public full-disclosures, or argue that disclosures won’t always lead to fixes.

Replacement SMB/MSRPC

The ETERNALBLUE vuln is so valuable to the NSA that it’s almost certainly seeking a replacement.
Again, I’m trying to debunk the impression Schneier tries to form that somehow the NSA stumbled upon ETERNALBLUE by accident to begin with. The opposite is true: remote exploits for the SMB (port 445) or MSRPC (port 135) services are some of the most valuable vulns, and the NSA will work hard to acquire them.

That it was leaked

The only issue here is that the 0day leaked. If the NSA can’t keep its weaponized toys secret, then maybe it shouldn’t have them.
Instead of processing this new piece of information, which is important, Schneier takes this opportunity to just re-hash the old inaccurate and deceptive VEP debate.

Conclusion

Except for a tiny number of people working for the NSA, none of us really knows what’s going on with 0days inside government. Schneier’s comments seem more off-base than most. Like all activists, he deliberately uses language to deceive rather than explain (like “discover” instead of “acquire”). Like all activists, he seems obsessed with the VEP, even though, as far as anybody can tell, it’s not used for NSA-acquired vulns. He deliberately ignores things he should be an expert in, such as how patches/disclosures sometimes lead to worms/exploits, and how not all disclosure leads to fixes.

Some confusing language in the 0day debate

Post Syndicated from Robert Graham original http://blog.erratasec.com/2017/03/some-confusing-language-in-0day-debate.html

As revealed in last week’s CIA #Vault7 leaks, the CIA has some 0days. This has ignited the debate about whether organizations like the CIA should be disclosing these 0days so that vendors can fix them, rather than “stockpiling” them. There seems to be some confusion about language.

Stockpile

The word “stockpile” has multiple connotations.

This distorts the debate. Using the word “stockpile” strongly implies “reserve for use” at some time in the future, which prejudices the debate. If the 0day is sitting on a shelf somewhere not being used, then it apparently has little value for offense, and thus should be disclosed/patched for defense.

The truth is that the government does not buy 0days to sit on the shelf. With few exceptions, it buys 0days because it plans to use them in an offensive operation. This was described in the recent RAND report.

It’s the sellers who might keep 0days on the shelf, because the buyers have no immediate need. It’s not the government buyers who are stockpiling.

Words like “stockpiling”, “amassing”, or “hoarding” also bring the connotation that the number is too big. Words like “hoarding” bring the connotation that the government is doing something to keep the 0days away from others, preventing them from finding them, too.

Neutral terms would be more accurate, such as “acquiring” 0days, or having a “collection” of 0days.

Find 0days

People keep describing the government as “finding” 0days. The word has two very different meanings.

We are talking about two different policies here: one where the government finds 0days by chance, and one where it obtains 0days by deliberate effort.

Numerous articles quote Michael Daniel, former cyberczar under Obama, as claiming their default policy was to disclose 0days they find. What he meant was those found by chance. That doesn’t apply to vulnerabilities researched/bought by the CIA/NSA. Obviously, if you’ve got a target (as described above) and you buy an 0day to attack that target, you are going to use it. You aren’t going to immediately disclose it, thereby making it useless for the purpose for which you bought it.

Michael Daniel’s statement is typical government speak: while the official policy was to disclose, the practice was to not disclose.

Using the word “find” prejudices the conversation, like “stockpiling”, making it look like the government has no particular interest in an 0day and is just hoarding it out of spite. What the government actually does is buy 0days from outsiders, or research 0days itself. Either way, it puts a lot of effort into it.

0day

In this context, there are actually two very different types of 0day: those the government uses for offense, and all the rest.

We think of the NSA/CIA as superspies, but really the opposite is true. Their internal processes kill creativity, and what they really want are weaponized/operationalized exploits they can hand to ill-trained cyber-warriors. As that RAND paper also indicates, they have other strange needs, such as how important it is that they don’t get caught. They’d rather forgo hacking a target they know they can hack than use a noisy 0day.

Also, as mentioned above, they have a specific target in mind when they buy a bug. While the NSA/CIA has 0days for mainstream products like iPhone and Android, the bulk is for products you’ve never heard of. For example, if they learn that ISIS is using a specific model of router from Huawei, they’ll go out and buy one, pull the firmware, reverse engineer it, and find an 0day. I pick “Huawei” routers here, because they are rare in the United States, but common in the areas the NSA wants to hack.

The point is this: the “0day” discussion misses what’s really going on with the government’s weaponized/offensive 0days. They are apples-to-oranges 0days.

Conclusion

Recently, there has been a lot of discussion about the government finding and stockpiling 0days. The debate is off-kilter because the words don’t mean what people think they mean.

Some notes on the RAND 0day report

Post Syndicated from Robert Graham original http://blog.erratasec.com/2017/03/some-notes-on-rand-0day-report.html

The RAND Corporation has a research report on the 0day market [ * ]. It’s pretty good. They talked to all the right people. It should be considered the seminal work on the issue. They’ve got the pricing about right ($1 million for full chain iPhone exploit, but closer to $100k for others). They’ve got the stats about right (5% chance somebody else will discover an exploit).

Yet they’ve got some problems, namely phrasing the debate the way activists want rather than presenting a neutral view of it.

The report frequently uses the word “stockpile”. This is a biased term used by activists. According to the dictionary, it means:

a large accumulated stock of goods or materials, especially one held in reserve for use at a time of shortage or other emergency.

Activists paint the picture that the government (NSA, CIA, DoD, FBI) buys 0days to hold in reserve in case it later needs them. If that’s the case, then it seems reasonable that it’s better to disclose/patch the vuln than let it grow moldy in a cyberwarehouse somewhere.

But that’s not how things work. The government buys vulns it has immediate use for (primarily). Almost all vulns it buys are used within 6 months. Most vulns in its “stockpile” have been used in the previous year. These cyberweapons are not in a warehouse, but in active use on the front lines.

This is top secret, of course, so people assume it’s not happening. They hear about no cyber operations (except Stuxnet), so they assume such operations aren’t occurring. Thus, they build up the stockpiling assumption rather than the active use assumption.

If RAND wanted to create an even more useful survey, they should figure out how many thousands of times per day our government (NSA, CIA, DoD, FBI) exploits 0days. They should characterize who gets targeted (e.g. terrorists, child pornographers), the success rate, and how many people have been killed based on 0days. It’s this data, not patching, that is at the root of the policy debate.

That 0days are actively used determines pricing. If the government doesn’t have immediate need for a vuln, it won’t pay much for it, if anything at all. Conversely, if the government has urgent need for a vuln, it’ll pay a lot.

Let’s say you have a remote vuln for Samsung TVs. You go to the NSA and offer it to them. They tell you they aren’t interested, because they see no near-term need for it. Then a year later, spies reveal that ISIS has stolen a truckload of Samsung TVs, put them in all its meeting rooms, and hooked them to the Internet for video conferencing. The NSA then comes back to you and offers $500k for the vuln.

Likewise, the number of sellers affects the price. If you know they desperately need the Samsung TV 0day, but they are only offering $100k, then it likely means that there’s another seller also offering such a vuln.

That’s why iPhone vulns are worth $1 million for a full chain exploit, from browser to persistence. They use it a lot, it’s a major part of ongoing cyber operations. Each time Apple upgrades iOS, the change breaks part of the existing chain, and the government is keen on getting a new exploit to fix it. They’ll pay a lot to the first vuln seller who can give them a new exploit.

Thus, there are three prices the government is willing to pay for an 0day (the value it provides to the government):

  • the price for an 0day they will actively use right now (high)
  • the price for an 0day they’ll stockpile for possible use in the future (low)
  • the price for an 0day they’ll disclose to the vendor to patch (very low)

That these are different prices is important to the policy debate. When activists claim the government should disclose the 0days it acquires, they are ignoring the price those 0days were acquired for. Since the government actively uses the 0days, they are acquired at a high price, with their “use” value far higher than their “patch” value. It’s absurd to argue that the government should then immediately discard that money, paying “use value” prices for “patch” results.

If the policy becomes that the NSA/CIA should disclose/patch the 0day they buy, it doesn’t mean business as usual acquiring vulns. It instead means they’ll stop buying 0day.

In other words, “patching 0day” is not an outcome on either side of the debate. Either the government buys 0day to use, or it stops buying 0day. In neither case does patching happen.

The real argument is whether the government (NSA, CIA, DoD, FBI) should be acquiring, weaponizing, and using 0days in the first place. Demanding disclosure amounts to demanding that we unilaterally disarm our military, intelligence, and law enforcement, preventing them from using 0days against our adversaries while our adversaries continue to use 0days against us.

That’s the gaping hole in both the RAND paper and most news reporting of this controversy. They characterize the debate the way activists want, as if the only question is the value of patching. They avoid talking about unilateral cyber-disarmament, even though that’s the consequence of the policy they are advocating. They avoid comparing the value of 0days to our country for active use (high) with their value for patching (very low).

Conclusion

It’s nice that the RAND paper studied the value of patching and confirmed it’s low, that only around 5% of our cyber-arsenal is likely to be found by others. But it’d be nice if they also looked at the point of view of those actively using 0days on a daily basis, rather than phrasing the debate the way activists want.

Only lobbyists and politicians matter, not techies

Post Syndicated from Robert Graham original http://blog.erratasec.com/2017/03/only-lobbyist-and-politicians-matter.html

The NSA/CIA will only buy an 0day if they can use it. They can’t use it if they disclose the bug.

I point this out, yet again, because of this WaPo article [*] built on the premise that the NSA/CIA spend millions of dollars on 0days they don’t use, unilaterally disarming themselves in the process. Since that premise is false, the entire article is false. It’s the sort of article you get when all you interview are Washington D.C. lobbyists and Washington D.C. politicians — and no outside experts.

It quotes former cyberczar (under Obama) Michael Daniel explaining that the “default assumption” is to disclose 0days that the NSA/CIA get. This is a Sean Spicer-style lie. He’s paid to say this, but it’s not true. The NSA/CIA only buy 0days if they can use them. They won’t buy 0days if the default assumption is disclosure. QED: the default assumption for such 0days is that they won’t be disclosed.

The story quotes Ben Wizner of the ACLU saying that we should patch 0days instead of using them. Patching isn’t an option. If we aren’t using them, then we aren’t buying them, and hence there are no 0days to patch. The two options are to not buy 0days at all (and not patch), or to buy them in order to use them (and not patch). Either way, patching doesn’t happen.

Wizner didn’t actually say “use them”. He said “stockpiling” them, a word that means “hold in reserve for use in the future”. That’s not what the NSA/CIA does. They buy 0days to use, now. They’ve got budgets and efficiency ratings. They don’t buy 0days they can’t use in the near future. In other words, Wizner paints a choice between an 0day that has no particular value to the government and one that would have value if patched.

The opposite picture is true. Almost all the 0days possessed by the NSA/CIA have value, being actively used against our adversaries right now. Conversely, patching an 0day provides little value for defense. Nobody else knew about the 0day anyway (that’s what 0day means), so nobody was in danger, so nobody was made safer by patching it.

Wizner and Snowden are quoted in the article as saying that somehow the NSA/CIA is “maintaining vulnerabilities” and “keeping the holes open”. This phrasing is deliberately misleading. The NSA/CIA didn’t create the holes. They aren’t working to keep them open. If somebody else finds the same 0day hole and tells the vendor (like Apple), the NSA/CIA will do nothing to stop them. They just won’t work to close the holes.

Activists like Wizner and Snowden deliberately mislead on the issue because they can’t possibly win a rational debate. The government is not going to keep spending millions of dollars buying 0days just to close them, because everyone agrees the value proposition is crap: the value of fixing yet another iPhone hole is not worth the $1 million it’ll cost, and it would do little to stop the Russians from finding an unrelated hole. Likewise, while the peaceniks (rightfully, in many respects) hate the militarization of cyberspace, they aren’t going to win the argument that the NSA/CIA should unilaterally disarm. So instead they’ve tried to morph the debate into some crazy argument that makes no sense.

This is the problem with Washington D.C. journalism. It presumes the only people who matter are those in Washington, either the lobbyists of one position or the government defenders of another. At no point did they go out and talk to technical experts, such as somebody who has discovered, weaponized, and used an 0day exploit. So they write articles premised on the notion that the NSA/CIA, out of their offensive weapons budget, will continue to buy 0days that are immediately patched and fixed without ever being useful.

Some comments on the Wikileaks CIA/#vault7 leak

Post Syndicated from Robert Graham original http://blog.erratasec.com/2017/03/some-comments-on-wikileaks-ciavault7.html

I thought I’d write up some notes about the Wikileaks CIA “#vault7” leak. This post will be updated frequently over the next 24 hours.

The CIA didn’t remotely hack a TV. The docs are clear that they can update the software running on the TV using a USB drive. There’s no evidence of them doing so remotely over the Internet. If you aren’t afraid of the CIA breaking in and installing a listening device, then you shouldn’t be afraid of the CIA installing listening software.

The CIA didn’t defeat Signal/WhatsApp encryption. The CIA has some exploits for Android/iPhone. If they can get on your phone, then of course they can record audio and screenshots. Technically, this bypasses/defeats encryption — but such phrases used by Wikileaks are highly misleading, since nothing related to Signal/WhatsApp is happening. What’s happening is the CIA is bypassing/defeating the phone. Sometimes. If they’ve got an exploit for it, or can trick you into installing their software.

There’s no overlap or turf war with the NSA. The NSA does “signals intelligence”, so they hack radios and remotely across the Internet. The CIA does “human intelligence”, so they hack locally, with a human. The sort of thing they do is bribe, blackmail, or bedazzle some human “asset” (like a technician in a nuclear plant) into sticking a USB drive into a slot. All the various military, law enforcement, and intelligence agencies have hacking groups to help them do their own missions.

The CIA isn’t more advanced than the NSA. Most of this dump is child’s play, simply malware/trojans cobbled together from bits found on the Internet. Sometimes they buy more advanced stuff from contractors, or get stuff shared from the NSA. Technologically, they are far behind the NSA in sophistication and technical expertise.

The CIA isn’t hoarding 0days. For one thing, few 0days are mentioned at all. The CIA’s techniques rely upon straightforward hacking, not super-secret 0day hacking. Second of all, they aren’t keeping 0days back in a vault somewhere — if they have 0days, they are using them.

The VEP process is nonsense. Activists keep mentioning the “vulnerability equities process”, in which everyone within the government interested in 0days has a say in what happens to them, with the eventual goal that they be disclosed to vendors. The VEP is nonsense. The activist argument is nonsense. As far as I can tell, the VEP is designed as busywork to keep people away from those who really use 0days, such as the NSA and the CIA. If they spend millions of dollars buying 0days because the 0days have that value in intelligence operations, they aren’t going to destroy that value by disclosing to a vendor. If the VEP forces disclosure, disclosure still won’t happen; the NSA will simply stop buying vulns.

But they'll have to disclose the 0days. Any 0days that were leaked to Wikileaks are, of course, no longer secret. Thus, while this leak isn't an argument for unilateral disarmament in cyberspace, the CIA will have to disclose to vendors the vulns that are now in Russian hands, so that they can be fixed.

There are no false flags. In several places, the CIA talks about making sure that what they do isn't so unique that it can be attributed to them. However, Wikileaks's press release hints that the "UMBRAGE" program is deliberately stealing techniques from Russia to use as a false-flag operation. This is nonsense. For example, the DNC hack attribution was based on live command-and-control servers simultaneously used against different Russian targets — not a few snippets of code. [More here]

This hurts the CIA a lot. Already, one AV researcher has told me that a virus they once suspected came from the Russians or Chinese can now be attributed to the CIA, as it perfectly matches the description of something in the leak. We can develop anti-virus and intrusion-detection signatures based on this information that will defeat much of what we read in these documents. This would put a multi-year delay in the CIA's development efforts. Plus, the CIA will now go on a witch-hunt looking for the leaker, which will erode morale. Update: Three extremely smart and knowledgeable people who I respect disagree, claiming it won't hurt the CIA a lot. I suppose I'm focusing on "hurting the cyber abilities" of the CIA, not the CIA as a whole, which is mostly non-cyber in function.

The CIA is not cutting edge. A few days ago, Hak5 started selling "BashBunny", a USB hacking tool more advanced than the USB tools in the leak. The CIA seems to get most of their USB techniques from open-source projects, such as Travis Goodspeed's "GoodFET" project.

The CIA isn’t spying on us. Snowden revealed how the NSA was surveilling all Americans. Nothing like that appears in the CIA dump. It’s all legitimate spy stuff (assuming you think spying on foreign adversaries is legitimate).

Update #2: How is hacking cars and phones not SIGINT (which is the NSA's turf)? [*] The answer is physical access. For example, they might have a device that plugs into the OBD-II port on the car and quickly updates the firmware of the brakes. Think of it as normal spy activity (e.g. cutting a victim's brakes), but now with cyber.

Update #3: Apple iPhone. My vague sense is that the CIA is more concerned about decrypting iPhones they get physical access to, rather than remotely hacking them and installing malware. The CIA is HUMINT and covert ops, meaning they'll punch somebody in the face, grab their iPhone, and run, then take it back to their lab and decrypt it.


Cliché: Security through obscurity (again)

Post Syndicated from Robert Graham original http://blog.erratasec.com/2016/10/cliche-security-through-obscurity-again.html

This post keeps popping up in my timeline. It's wrong. The phrase "security through/by obscurity" has become such a cliché that it's lost all meaning. When somebody says it, they are almost certainly saying a dumb thing, regardless of whether they support it or are trying to debunk it.

Let's go back to first principles, namely Kerckhoffs's Principle from the 1800s, which states that cryptography should be secure even if everything is known about it except the key. In other words, there exists no double-secret military-grade encryption with secret algorithms. Today's military crypto is public crypto.

Let's apply this to port knocking. This is not a layer of obscurity, as proposed by the above post, but a layer of security. Applying Kerckhoffs's Principle, it should work even if everything is known about the port knocking algorithm except the sequence of ports being knocked.

Kerckhoffs's Principle is based on a few simple observations. Two relevant ones today are:

* things are not nearly as obscure as you think
* obscurity often impacts your friends more than your enemies

I (as an attacker) know that many sites use port knocking. Therefore, if I get no response from an IP address (which I have reason to know exists), then I’ll assume port knocking is hiding it. I know which port knocking techniques are popular. Or, sniffing at the local Starbucks, I might observe outgoing port knocking behavior, and know which sensitive systems I can attack later using the technique. Thus, though port knocking makes it look like a system doesn’t exist, this doesn’t fully hide a system from me. The security of the system should not rest on this obscurity.

Instead of an obscurity layer, port knocking is a security layer. The security it provides is that it drives up the amount of effort an attacker needs to hack the system. Some use the opposite approach, whereby the firewall in front of a subnet responds with a SYN-ACK to every SYN. This likewise increases the costs for those doing port scans (like myself, who masscans the entire Internet), by making it look as if all IP addresses and ports exist, not by hiding systems behind a layer of obscurity.

One plausible way of defeating a port knocking implementation is to simply scan all 64k ports many times. If you are looking for a sequence of TCP ports 1000, 5000, 2000, 4000, then you’ll see this sequence. You’ll see all sequences.

If the code for your implementation is open, then it's easy for others to see this plausible flaw and point it out to you. You could then fix the flaw by forcing the sequence to reset every time the first port is seen, or by also listening for bad ports (ones not part of the sequence) that likewise reset the sequence, as sketched below.
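Here's a minimal sketch of that fix (not anyone's real implementation; the knock sequence is hypothetical, and a real daemon would track state per source IP). Any probe on a port outside the sequence resets progress, so a brute-force sweep of all 64k ports never completes the knock:

KNOCK_SEQUENCE = [1000, 5000, 2000, 4000]   # hypothetical knock sequence

class KnockTracker:
    """Tracks knock progress; any out-of-sequence port resets it."""
    def __init__(self, sequence):
        self.sequence = sequence
        self.progress = 0

    def observe(self, port):
        """Feed the destination port of each SYN seen from one source.
        Returns True once the full sequence is seen strictly in order."""
        if port == self.sequence[self.progress]:
            self.progress += 1
        elif port == self.sequence[0]:
            self.progress = 1        # a repeat of the first knock restarts
        else:
            self.progress = 0        # any other port resets the sequence
        if self.progress == len(self.sequence):
            self.progress = 0
            return True
        return False

tracker = KnockTracker(KNOCK_SEQUENCE)
assert not any(tracker.observe(p) for p in range(1, 65536))   # linear sweep fails
assert [tracker.observe(p) for p in KNOCK_SEQUENCE][-1]       # the real knock works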

If your code is closed, then your friends can't see this problem. But your enemies are still highly motivated. They might find your code, find the compiled implementation, or may just guess ways around your possible implementation. The chances that you, some random defender, are better at this than the combined effort of all your attackers are very small. Opening things up to your friends gives you a greater edge to combat your enemies.

Thus, applying Kerckhoffs's Principle to this problem means you shouldn't rely upon the secrecy of your port knocking algorithm, or upon the fact that you are using port knocking in the first place.

The above post also discusses ssh on alternate ports. It points out that if an 0day is found in ssh, those who run the service on the default port of 22 will get hacked first, while those who run at odd ports, like 7837, will have time to patch their services before getting owned.

But this is just repeating the fallacy. It's focusing only on the increase in difficulty to attackers, but ignoring the increase in difficulty to friends. Let's say some new ssh 0day is announced. Everybody is going to rush to patch their servers. They are going to run tools like my masscan to quickly find everything listening on port 22, or a vuln scanner like Nessus. Everything on port 22 will quickly get patched. SSH servers running on port 7837, however, will not get patched. On the other hand, Internet-wide scans like Shodan or the 2012 Internet Census may have already found that you are running ssh on port 7837. That means the attackers can quickly attack it with the latest 0day even while you, the defender, are slow to patch it.
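That's partly because ssh announces itself on any port: the server sends its "SSH-2.0-..." identification string as soon as you connect. A trivial banner grabber, sketched below (the host is a hypothetical placeholder), is all an Internet-wide scanner needs to index your "hidden" server:

import socket

def grab_banner(host, port, timeout=3.0):
    """Connect and return whatever the service announces first.
    An ssh daemon sends its "SSH-2.0-..." string immediately, whatever port it's on."""
    with socket.create_connection((host, port), timeout=timeout) as s:
        s.settimeout(timeout)
        return s.recv(256).decode(errors="replace").strip()

# Hypothetical host hiding sshd on an odd port; the banner gives it away.
print(grab_banner("192.0.2.10", 7837))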

Running ssh on alternate ports is certainly useful because, as the article points out, it dramatically cuts down on the noise that defenders have to deal with. If somebody is brute forcing passwords on port 7837, then that's a threat worth paying more attention to than somebody doing the same on port 22. But this benefit is a separate discussion from obscurity. Hiding an ssh server on an obscure port may thus be a good idea, but not because there is value to obscurity.

Thus, both port knocking and putting ssh on alternate ports are valid security strategies. However, once you mention the cliché "security by/through obscurity", you add nothing useful to the mix.


Update: Response here.

Some technical notes on the PlayPen case

Post Syndicated from Robert Graham original http://blog.erratasec.com/2016/09/some-technical-notes-on-playpen-case.html

In March of 2015, the FBI took control of a Tor onion childporn website ("PlayPen"), then used an 0day exploit to upload malware to visitors' computers, to identify them. There is some controversy over the warrant they used, and over government mass hacking in general. However, much of the discussion misses some technical details, which I thought I'd discuss here.

IP address
In a post on the case, Orin Kerr claims:

retrieving IP addresses is clearly a search

He is wrong, at least, in the general case. Uploading malware to gather other things (hostname, username, MAC address) is clearly a search. But discovering the IP address is a different thing.
Today's homes contain many devices behind a single router. The home has only one public IP address, that of the router. All the other devices have local IP addresses. The router then does network address translation (NAT) so that all outgoing traffic uses the single public IP address.
The FBI sought the public IP address of the NAT/router, not the local IP address of the perp’s computer. The malware (“NIT”) didn’t search the computer for the IP address. Instead the NIT generated network traffic, destined to the FBI’s computers. The FBI discovered the suspect’s public IP address by looking at their own computers.
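To illustrate that last point, here's a minimal sketch (not the FBI's NIT; the port is arbitrary). Any listener learns the public, post-NAT source address of whoever connects, purely from its own end of the connection:

import socket

# Minimal sketch: a server that records the public (post-NAT) address of
# each client that connects. Nothing on the client is examined; the address
# is simply what the server's end of the TCP connection sees.
srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
srv.bind(("0.0.0.0", 8080))
srv.listen(5)

while True:
    conn, (ip, port) = srv.accept()
    print(f"connection from public IP {ip} (source port {port})")
    conn.close()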
Historically, there have been similar ways of getting this IP address (from a Tor-hidden user) without "hacking". In the past, Tor used to leak DNS lookups, which would often lead to the user's ISP, or to the user's IP address itself. Another technique would be to provide rich content files (like PDFs) or video files that the user would have to download to view, and which would then contact the Internet (the FBI's computers) directly, bypassing Tor.
Since the Fourth Amendment is about where the search happens, and not what is discovered, it’s not a search to find the IP address in packets arriving at FBI servers. How the FBI discovered the IP address may be a search (running malware on the suspect’s computer), but the public IP address itself doesn’t necessarily mean a search happened.

Of course, uploading malware just to transmit packets to an FBI server, then getting the IP address from those packets, is still problematic. It's got to be something that requires a warrant, even though it's not precisely the malware searching the machine for its IP address.

In any event, even setting aside the IP address, PlayPen searches still happened for the hostname, username, and MAC address. Imagine the FBI gets a search warrant, shows up at the suspect's house, and finds no child porn. They then look at the WiFi router, and find that the suspected MAC address is indeed connected. They then use other tools to find that the device with that MAC address is located in the neighbor's house — who has been piggybacking off the WiFi.
It’s a pre-crime warrant (#MinorityReport)
The warrant allows the exploit/malware/search to be used whenever somebody logs in with a username and password.
The key thing here is that the warrant includes people who have not yet created an account on the server at the time the warrant is written. They will connect, create an account, log in, then start accessing the site.
In other words, the warrant includes people who have never committed a crime when the warrant was issued, but who first commit the crime after the warrant. It’s a pre-crime warrant. 
Sure, it’s possible in any warrant to catch pre-crime. For example, a warrant for a drug dealer may also catch a teenager making their first purchase of drugs. But this seems quantitatively different. It’s not targeting the known/suspected criminal — it’s targeting future criminals.
This could easily be solved by limiting the warrant to only accounts that have already been created on the server.
It’s more than an anticipatory warrant

People keep saying it’s an anticipatory warrant, as if this explains everything.

I'm not a lawyer, but even I can see that this explains only that the warrant anticipates future probable cause. "Anticipatory warrant" doesn't explain that the warrant also anticipates a future place to be searched. As far as I can tell, "anticipatory place" warrants don't exist and are a clear violation of the Fourth Amendment. This makes it look like a "general warrant", which the Fourth Amendment was designed to prevent.

Orin’s post includes some “unknown place” examples — but those specify something else in particular. A roving wiretap names a person, and the “place” is whatever phone they use. In contrast, this PlayPen warrant names no person. Orin thinks that the problem may be that more than one person is involved, but he is wrong. A warrant can (presumably) name multiple people, or you can have multiple warrants, one for each person. Instead, the problem here is that no person is named. It’s not “Rob’s computer”, it’s “the computer of whoever logs in”. Even if the warrant were ultimately for a single person, it’d still be problematic because the person is not identified.
Orin cites another case, where the FBI places a beeper into a package in order to track it. The place, in this case, is the package. Again, this is nowhere close to this case, where no specific/particular place is mentioned, only a type of place. 
This could easily have been resolved. Most accounts were created before the warrant was issued. The warrant could simply have listed all the usernames, saying the computers of those using these accounts are the places to search. It’s a long list of usernames (1,500?), but if you can’t include them all in a single warrant, in this day and age of automation, I’d imagine you could easily create 1,500 warrants.
It’s malware

As a techy, I'll point out that the name for what the FBI did is "hacking", and the name for their software is "malware", not "NIT". The definitions don't change depending upon who's doing it and for what purpose. That the FBI uses weasel words to distract from what it's doing seems like a violation of some sort of principle.
Conclusion

I am not a lawyer, I am a revolutionary. I care less about precedent and more about how a Police State might abuse technology. That a warrant can be issued whose condition is essentially "whoever logs into the server" seems like a scary potential for abuse. That a warrant can be designed to catch pre-crime seems even scarier, like science fiction. That a warrant might not be issued for something called "malware", but would be issued for something called "NIT", scares me the most.
This warrant could easily have been narrower. It could have listed all the existing account holders. It could’ve been even narrower, for account holders where the server logs prove they’ve already downloaded child porn.
Even then, we need to be worried about FBI mass hacking. I agree that FBI has good reason to keep the 0day secret, and that it’s not meaningful to the defense. But in general, I think courts should demand an overabundance of transparency — the police could be doing something nefarious, so the courts should demand transparency to prevent that.

Notes on that StJude/MuddyWatters/MedSec thing

Post Syndicated from Robert Graham original http://blog.erratasec.com/2016/08/notes-on-that-stjudemuddywattersmedsec.html

I thought I’d write up some notes on the StJude/MedSec/MuddyWaters affair. Some references: [1] [2] [3] [4].

The story so far

tl;dr: hackers drop 0day on medical device company hoping to profit by shorting their stock

St Jude Medical (STJ) is one of the largest providers of pacemakers (aka cardiac devices) in the country, with around $2.5 billion in revenue from them, which accounts for about half their business. They provide "smart" pacemakers with an on-board computer that talks via radio waves to a nearby monitor that records the functioning of the device (and health data). That monitor, "Merlin@home", then talks back up to St Jude (via phone lines, 3G cell phone, or wifi). Pretty much all pacemakers work that way (my father's does, although his is from a different vendor).

MedSec is a bunch of cybersecurity researchers (white-hat hackers) who have been investigating medical devices. In theory, their primary business is to sell their services to medical device companies, helping those companies secure their devices. Their CEO is Justine Bone, a long-time white-hat hacker. Despite Muddy Waters garbling the research, there's no reason to doubt that there's quality research underlying all this.

Muddy Waters is an investment company known for investigating companies, finding problems like accounting fraud, and profiting by shorting the stock of misbehaving companies.

Apparently, MedSec did a survey of many pacemaker manufacturers, chose the one with the most cybersecurity problems, and went to Muddy Waters with their findings, asking for a share of the profits Muddy Waters got from shorting the stock.

Muddy Waters published their findings in [1] above. St Jude published their response in [2] above. They are both highly dishonest. I point that out because people want to discuss the ethics of using 0day to short stock when we should talk about the ethics of lying.

“Why you should sell the stock” [finance issues]

In this section, I try to briefly summarize Muddy Waters' argument for why St Jude's stock will drop. I'm not an expert in this area (though I do a bunch of investing), but the argument does seem flimsy to me.
Muddy Waters' argument is that these pacemakers are half of St Jude's business, and that fixing them will first require recalling them all, then take another 2 years to fix, during which time they can't be selling pacemakers. Much of the Muddy Waters paper is taken up explaining this, citing similar medical cases, and so on.
If at all true, and if the cybersecurity claims hold up, then yes, this would be a good reason to short the stock. However, I suspect the claims aren't true, and that Muddy Waters is simply trying to scare people about long-term consequences in order to profit in the short term.
@selenakyle on Twitter suggests this interesting document [4] about market solutions to vuln disclosure, if you are interested in this angle of things.
Update from @lippard: Abbott Labs agreed in April to buy St Jude at $85 a share (when St Jude's stock was $60/share). Presumably, for this Muddy Waters attack on St Jude's stock price to profit from anything more than a really short-term stock drop (like dumping their short position today), Muddy Waters would have to believe this effort will cause Abbott Labs to walk away from the deal. Normally, there are penalties for doing so, but material things like massive vulnerabilities in a product should allow Abbott Labs to walk away without penalties.

The 0day being dropped

Well, they didn't actually drop 0day as such, just claims that 0day exists — that it's been "demonstrated". Reading through their document a few times, I've created a list of the 0days they found, to the granularity one would expect from CVE numbers (CVE is a DHS-sponsored program that assigns standard reference numbers to disclosed vulnerabilities).

The first two, which can kill somebody, are the salient ones. The others are more normal cybersecurity issues, and may be of concern because they can leak HIPAA-protected info.

CVE-2016-xxxx: Pacemaker can be crashed, leading to death
Within a reasonable distance (under 50 feet), pounding the pacemaker for several hours with malformed packets (either from an SDR or a hacked version of the Merlin@home monitor) can crash it. Sometimes such crashes will brick the device; other times they put it into a state that may kill the patient by zapping the heart too quickly.

CVE-2016-xxxx: Pacemaker power can be drained, leading to death
Within a reasonable distance (under 50 feet), over several days, the pacemaker's power can slowly be drained at a rate of 3% per hour. While the user will receive a warning from their Merlin@home monitoring device that the battery is getting low, it's possible the battery may be fully depleted before they can get to a doctor for a replacement. A non-functioning pacemaker may lead to death.

CVE-2016-xxxx: Pacemaker uses unauthenticated/unencrypted RF protocol
The above two items are possible because there is neither encryption nor authentication in the wireless protocol, allowing any evildoer access to the pacemaker or the monitoring device.

CVE-2016-xxxx: Merlin@home contained hard-coded credentials and SSH keys
The password to connect to the St Jude network is the same for all devices, and thus easily reverse engineered.

CVE-2016-xxxx: local proximity wand not required
It's unclear in the report, but it seems that most other products require a wand in local proximity (inches) in order to enable communication with the pacemaker. This seems like a necessary requirement — otherwise, even with authentication, remote RF would be able to drain the device in the person's chest.

So these are, as far as I can tell, the explicit bugs they outline. Unfortunately, none are described in detail. I don't see enough detail for any of these to actually be assigned a CVE number. I'm being generous here, trying to describe them as such and giving them the benefit of the doubt; there's enough weasel language in there to make me doubt all of them. Though, if the first two prove not to be reproducible, then there will be a great defamation case, so I presume those two are true.

The movie/TV plot scenarios

So if you wanted to use this as a realistic TV/movie plot, here are two of them.
#1 You (the executive of the acquiring company) are meeting with the CEO and executives of a smaller company you want to buy. It's a family concern, and the CEO really doesn't want to sell. But you know his/her children want to sell. Therefore, during the meeting, you pull out your notebook and an SDR device and put it on the conference room table. You start running the exploit to crash the CEO's pacemaker. It crashes, and the CEO grabs his/her chest and gets carted off to the hospital. The children continue negotiations, selling off their company.
#2 You are a hacker in Russia going after a target. After many phishing attempts, you finally break into the home desktop computer. From that computer, you branch out and connect to the Merlin@home device through the hard-coded password. You then run an exploit from the device, using that device's own radio, to slowly drain the battery from the pacemaker, day after day, while the target sleeps. You patch the software so it no longer warns the user that the battery is getting low. The battery dies, and a few days later, while the victim is digging a ditch, s/he falls over dead from heart failure.

The Muddy Waters document is crap

There are many ethical issues, but the first should be the dishonesty and spin of the Muddy Waters research report.

The report is clearly designed to scare other investors to drop St Jude stock price in the short term so that Muddy Waters can profit. It’s not designed to withstand long term scrutiny. It’s full of misleading details and outright lies.

For example, it keeps stressing how shockingly bad the security vulnerabilities are, such as saying:

We find STJ Cardiac Devices’ vulnerabilities orders of magnitude more worrying than the medical device hacks that have been publicly discussed in the past. 

This is factually untrue. St Jude's problems are no worse than the 2013 revelation that doctors had disabled the RF capabilities of Dick Cheney's pacemaker in response to such disclosures. They are no worse than that insulin pump hack. Bad cybersecurity is the norm for medical devices. St Jude may be among the worst, but not by an order of magnitude.

The term “orders of magnitude” is math, by the way, and means “at least 100 times worse”. As an expert, I claim these problems are not even one order of magnitude (10 times worse). I challenge MedSec’s experts to stand behind the claim that these vulnerabilities are at least 100 times worse than other public medical device hacks.

In many places, the language is wishy-washy. Consider this quote:

Despite having no background in cybersecurity, Muddy Waters has been able to replicate in-house key exploits that help to enable these attacks

The semantic content of this is nil. It says they weren’t able to replicate the attacks themselves. They don’t have sufficient background in cybersecurity to understand what they replicated.

Such language is pervasive throughout the document, things that aren’t technically lies, but which aren’t true, either.

Also pervasive throughout the document, repeatedly interjected for no reason in the middle of text, are statements like this, repeatedly stressing why you should sell the stock:

Regardless, we have little doubt that STJ is about to enter a period of protracted litigation over these products. Should these trials reach verdicts, we expect the courts will hold that STJ has been grossly negligent in its product design. (We estimate awards could total $6.4 billion.15)

I point this out because Muddy Waters obviously doesn’t feel the content of the document stands on its own, so that you can make this conclusion yourself. It instead feels the need to repeat this message over and over on every page.

Muddy Waters' violation of Kerckhoffs's Principle

One of the most important principles of cybersecurity is Kerckhoffs's Principle: that more openness is better. Or, phrased another way, that trying to achieve security through obscurity is bad.

The Muddy Waters document attempts to violate this principle. Besides the individual vulnerabilities, it makes the claim that St Jude's cybersecurity is inherently bad because it's open: it uses off-the-shelf chips, standard software (like Linux), and standard protocols. St Jude does nothing to hide or obfuscate these things.

Everyone in cybersecurity would agree this is good. Muddy Waters claims this is bad.

For example, some of their quotes:

One competitor went as far as developing a highly proprietary embedded OS, which is quite costly and rarely seen

In contrast, the other manufacturers have proprietary RF chips developed specifically for their protocols

Again, I challenge MedSec, as the cybersecurity experts in this case, to publicly defend Muddy Waters on these claims.

Medical device manufacturers should do the opposite of what Muddy Waters claims. I’ll explain why.

Either your system is secure or it isn't. If it's secure, then making the details public won't hurt you. If it's insecure, then making the details obscure won't help you: hackers are far more adept at reverse engineering than you can possibly understand. Making things obscure, though, does stop helpful hackers (i.e. the cybersecurity consultants you hire) from making your system secure, since it's hard for them to figure out the details.

Said another way: your adversaries (such as me) hate seeing open systems that are obviously secure. We love seeing obscure systems, because we know you couldn’t possibly have validated their security.

The point is this: Muddy Waters is trying to profit from the public’s misconception about cybersecurity, namely that obscurity is good. The actual principle is that obscurity is bad.

St Jude’s response was no better

In response to the Muddy Waters document, St Jude published this document [2]. It's equally full of lies — the sort that may deserve a shareholder lawsuit. (I see lawsuits galore over this). It says the following:

We have examined the allegations made by Capital and MedSec on August 25, 2016 regarding the safety and security of our pacemakers and defibrillators, and while we would have preferred the opportunity to review a detailed account of the information, based on available information, we conclude that the report is false and misleading.

If that’s true, if they can prove this in court, then that will mean they could win millions in a defamation lawsuit against Muddy Waters, and millions more for stock manipulation.

But it's almost certainly not true. Without authentication/encryption, the fact that hackers can crash/drain a pacemaker is pretty obvious, especially since (as claimed by Muddy Waters) they've successfully done it. Specifically, the picture on page 17 of the 34-page Muddy Waters document is a smoking gun of a pacemaker misbehaving.

The rest of their document contains weasel-word denials that may be technically true, but which have no meaning.

St. Jude Medical stands behind the security and safety of our devices as confirmed by independent third parties and supported through our regulatory submissions. 

Our software has been evaluated and assessed by several independent organizations and researchers including Deloitte and Optiv.

In 2015, we successfully completed an upgrade to the ISO 27001:2013 certification.

These are all myths of the cybersecurity industry. Conformance with security standards, such as ISO 27001:2013, has absolutely zero bearing on whether you are secure. Having some consultants/white-hat claim your product is secure doesn’t mean other white-hat hackers won’t find an insecurity.

Indeed, having been assessed by Deloitte is a good indicator that something is wrong. It's not that they are incompetent (they've got some smart people working for them), but ultimately the way the security market works is that you demand such auditors find reasons to believe your product is secure, not that they keep hunting until something insecure is found. It's why outsiders, like MedSec, are better: they strive to find why your product is insecure. The bigger the enemy, the more resources they'll put into finding a problem.

It’s like after you get a hair cut, your enemies and your friends will have different opinions on your new look. Enemies are more honest.

The most obvious lie from the St Jude response is the following:

The report claimed that the battery could be depleted at a 50-foot range. This is not possible since once the device is implanted into a patient, wireless communication has an approximate 7-foot range. This brings into question the entire testing methodology that has been used as the basis for the Muddy Waters Capital and MedSec report.

That's not how wireless works. With directional antennas and amplifiers, 7 feet easily becomes 50 feet or more. Even without that, something designed for reliable operation at 7 feet often works less reliably at 50 feet. There's no cutoff at 7 feet within which it will work and outside of which it won't.
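A rough sanity check on the physics (free-space path loss only, ignoring body attenuation and antenna specifics, so treat it as a back-of-the-envelope sketch): going from 7 feet to 50 feet costs only about 17 dB of link budget, the kind of gap a modest directional antenna plus an amplifier can plausibly close.

import math

def extra_path_loss_db(d_near, d_far):
    """Additional free-space path loss (dB) when the range grows
    from d_near to d_far (any consistent distance unit)."""
    return 20 * math.log10(d_far / d_near)

print(round(extra_path_loss_db(7, 50), 1))   # ~17.1 dB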

That St Jude deliberately lies here brings into question their entire rebuttal. (see what I did there?)

ETHICS ETHICS ETHICS

First, let's discuss the ethics of lying, using weasel words, and being deliberately misleading. Both St Jude and Muddy Waters do this, and it's ethically wrong. I point this out for the uninterested readers who just want to get to that other ethical issue: clear violations of ethics that we all agree on interest nobody — but they ought to. We should be lambasting Muddy Waters for their clear ethical violations, not the unclear one.

So let’s get to the ethical issue everyone wants to discuss:

Is it ethical to profit from shorting stock while dropping 0day?

Let’s discuss some of the issues.

There's no insider trading. Some people wonder if there are insider trading issues. There aren't. While it's true that Muddy Waters knew some secrets that nobody else knew, as long as they weren't insider secrets, it's not insider trading. In other words, only insiders know about a key customer contract recently won or lost. But vulnerabilities researched by outsiders are still knowledge from outside the company.

Watching a CEO walk into the building of a competitor is still outsider knowledge — you can trade on the likely merger, even though insider employees cannot.

Dropping 0day might kill/harm people. That may be true, but that's never an ethical reason not to drop it, because it's not this one event in isolation. If companies knew ethical researchers would never drop an 0day, then they'd never patch it. It's like the government's warrantless surveillance of American citizens: the courts won't let us challenge it, because we can't prove it exists, and we can't prove it exists, because the courts allow it to be kept secret, because revealing the surveillance would harm national intelligence. The fact that harm may happen shouldn't stop the right thing from happening.

In other words, in the long run, dropping this 0day doesn’t necessarily harm people — and thus profiting on it is not an ethical issue. We need incentives to find vulns. This moves the debate from an ethical one to more of a factual debate about the long-term/short-term risk from vuln disclosure.

As MedSec points out, St Jude has already proven itself an untrustworthy consumer of vulnerability disclosures. When that happens, dropping 0day becomes ethically permissible under "responsible disclosure". Indeed, that St Jude then lied about it in their response justifies, ex post facto, the dropping of the 0day.

No 0day was actually dropped here. In this case, what was dropped was claims of 0day. This may be good or bad, depending on your arguments. It’s good that the vendor will have some extra time to fix the problems before hackers can start exploiting them. It’s bad because we can’t properly evaluate the true impact of the 0day unless we get more detail — allowing Muddy Waters to exaggerate and mislead people in order to move the stock more than is warranted.

In other words, the lack of actual 0day here is the problem — actual 0day would’ve been better.

This 0day is not necessarily harmful. Okay, it is harmful, but it requires close proximity. It's not as if the hacker can reach out from across the world and kill everyone (barring my movie-plot section above). If you are within 50 feet of somebody, it's easier to shoot, stab, or poison them.

Shorting on bad news is common. Before we address the issue whether this is unethical for cybersecurity researchers, we should first address the ethics for anybody doing this. Muddy Waters already does this by investigating companies for fraudulent accounting practice, then shorting the stock while revealing the fraud.

Yes, it's bad that Muddy Waters profits on the misfortunes of others, but it's others who are committing fraud — who deserve it. [Snide capitalism trigger warning] To claim this is unethical means you are a typical socialist who believes the State should defend companies, even those that do illegal things, in order to stop illegitimate/windfall profits. Supporting the ethics of this means you are a capitalist, who believes companies should succeed or fail on their own merits — which means bad companies need to fail, and investors in those companies should lose money.

Yes, this is bad for cybersec research. There is constant tension between cybersecurity researchers doing "responsible" (sic) research and companies lobbying Congress to pass laws against it. We saw this recently when Detroit lobbied for DMCA (copyright) rules to bar security research, and when the DMCA regulators gave us an exemption. MedSec's action means all medical device manufacturers will now lobby Congress for rules to stop MedSec — and the rest of us security researchers. The lack of public research means medical devices will continue to be flawed, which is worse for everyone.

Personally, I don't care about this argument. How others might respond badly to my actions is not an ethical constraint on my actions. It's like speech: that others may be triggered into lobbying for anti-speech laws is still not a constraint on what ethics allow me to say.

There were no lies or betrayal in the research. For me, “ethics” is usually a problem of lying, cheating, theft, and betrayal. As long as these things don’t happen, then it’s ethically okay. If MedSec had been hired by St Jude, had promised to keep things private, and then later disclosed them, then we’d have an ethical problem. Or consider this: frequently clients ask me to lie or omit things in pentest reports. It’s an ethical quagmire. The quick answer, by the way, is “can you make that request in writing?”. The long answer is “no”. It’s ethically permissible to omit minor things or do minor rewording, but not when it impinges on my credibility.

A life is worth about $10 million. Most people agree that "you can't put a value on a human life", and that those who do are evil. The opposite is true. Should we spend more on airplane safety, breast cancer research, or the military budget to fight ISIS? Each can be measured in the number of lives saved. Should we spend more on breast cancer research, which affects people in their 30s, or on solving heart disease, which affects people in their 70s? All these decisions mean putting a value on human life, and sometimes putting different values on different human lives. Whether you think it's ethical or not, it's the way the world works.

Thus, we can measure this disclosure of 0day in terms of potential value of life lost, vs. potential value of life saved.

Is this market manipulation? This is more of a legal question than an ethical one, but people are discussing it. If the data is true, then it’s not “manipulation” — only if it’s false. As documented in this post, there’s good reason to doubt the complete truth of what Muddy Waters claims. I suspect it’ll cost Muddy Waters more in legal fees in the long run than they could possibly hope to gain in the short run. I recommend investment companies stick to areas of their own expertise (accounting fraud) instead of branching out into things like cyber where they really don’t grasp things.

This is again bad for security research. Frankly, we aren't a trusted community, because we claim the "sky is falling" too often, and are proven wrong. If this proves to be market manipulation, with the stock recovering back to its former level and the scary stories of mass product recalls failing to emerge, we'll be blamed yet again for being wrong. That hurts our credibility.

On the other hand, if any of the scary things Muddy Waters claims actually come to pass, then maybe people will start heeding our warnings.

Ethics conclusion: I’m a die-hard troll, so therefore I’m going to vigorously defend the idea of shorting stock while dropping 0day. (Most of you appear to think it’s unethical — I therefore must disagree with you).  But I’m also a capitalist. This case creates an incentive to drop harmful 0days — but it creates an even greater incentive for device manufacturers not to have 0days to begin with. Thus, despite being a dishonest troll, I do sincerely support the ethics of this.

Conclusion

The two 0days are about crashing the device (killing the patient sooner) or draining the battery (killing them later). Both attacks require hours (if not days) in close proximity to the target. If you can get into the local network (such as through phishing), you might be able to hack the Merlin@home monitor, which is in close proximity to the target for hours every night.

Muddy Waters thinks the security problems are severe enough that it’ll destroy St Jude’s $2.5 billion pacemaker business. The argument is flimsy. St Jude’s retort is equally flimsy.

My prediction: a year from now we'll see little change in St Jude's pacemaker business earnings, though there may be some one-time costs cleaning some stuff up. But this will stop the shenanigans of future 0day+shorting, even when it's valid, because nobody will believe researchers.

Notes on the Apple/NSO Trident 0days

Post Syndicated from Robert Graham original http://blog.erratasec.com/2016/08/notes-on-applenso-trident-0days.html

I thought I'd write up some comments on today's news of the NSO malware using 0days to infect human rights activists' phones. For full reference, you want to read the Citizen Lab report and the Lookout report.

Press: it’s news to you, it’s not news to us

I'm seeing breathless news articles appear. I dread the next time I talk to my mom, when she's going to ask about it (including "were you involved?"). I suppose it is news to those outside the cybersec community, but for those of us insiders, it's not particularly newsworthy. It's just more government malware going after activists. It's just one more set of 0days.

I point this out in case the press wants to contact me for some awesome-sounding quote about how exciting/important this is. I'll have the opposite quote.

Don’t panic: all patches fix 0days

We should pay attention to context: all patches (for iPhone, Windows, etc.) fix 0days that hackers can use to break into devices. Normally these 0days are discovered by the company itself or by outside researchers intending to fix (and not exploit) the problem. What’s different here is that where most 0days are just a theoretical danger, these 0days are an actual danger — currently being exploited by the NSO Group’s products. Thus, there’s maybe a bit more urgency in this patch compared to other patches.

Don’t panic: NSA/Chinese/Russians using secret 0days anyway

It's almost certain the NSA, the Chinese, and the Russians have similar 0days. That means applying this patch makes you safe from the NSO Group (for a while, until they find new 0days), but it's unlikely this patch makes you safe from the others.

Of course it’s multiple 0days

Some people are marveling at how the attack includes three 0days. That's been the norm for browser exploits for a decade now. There are sandboxes and ASLR protections to get through. There's privilege escalation to get into the kernel. And then there's persistence. How far you get in solving one or more of these problems with a single 0day depends upon luck.

It’s actually four 0days

While it wasn't given a CVE number, there was a fourth 0day: the persistence using the JavaScriptCore binary to run a JavaScript text file. The JavaScriptCore program appears to be only a tool for developers and not needed for the functioning of the phone. It appears that the iOS 9.3.5 patch disables it. While technically it's not a coding "bug", it's still a design bug. 0days solving the persistence problem (where the malware/implant runs when the phone is rebooted) are worth over a hundred thousand dollars all on their own.

That about wraps it up for VEP

VEP is the Vulnerability Equities Process, which is supposed to, but doesn't, manage how the government uses the 0days it acquires.

Agitators like the EFF have been fighting against the NSA's acquisition and use of 0days, as if this makes us all less secure. What today's incident shows is that acquisition/use of 0days will be widespread around the world, regardless of what the NSA does. It'd be nice to get more transparency about what the NSA is doing through the VEP process, but the reality is the EFF is never going to get anything close to what it's agitating for.

That about wraps it up for Wassenaar

Wassenaar is an international arms control "treaty". Left-wing agitators convinced the Wassenaar folks to add 0days and malware to the treaty — with horrific results. There is essentially no difference between bad code and good code, only in how it's used, so the Wassenaar extensions have essentially outlawed all good code and security research.

Some agitators are convinced Wassenaar can still be fixed (it can't). Israel, where NSO Group is based, is not a member of Wassenaar, and thus whatever limitations Wassenaar could come up with would not stop the NSO.

Some have pointed out that Israel frequently adopts Wassenaar rules anyway, but in that case NSO would simply transfer the company somewhere else, such as Singapore.

The point is that 0day development is intensely international. There are great 0day researchers throughout the non-Wassenaar countries. It’s not like precision tooling for aluminum cylinders (for nuclear enrichment) that can only be made in an industrialized country. Some of the best 0day researchers come from backwards countries, growing up with only an Internet connection.

BUY THAT MAN AN IPHONE!!!

The victim in this case, Ahmed Mansoor, has apparently been hacked many times, including with HackingTeam's malware and FinFisher malware — notorious commercial products used by evil governments to hack into dissidents' computers.

Obviously, he’ll be hacked again. He’s a gold mine for researchers in this area. The NSA, anti-virus companies, Apple jailbreak companies, and the like should be jumping over themselves offering this guy a phone. One way this would work is giving him a new phone every 6 months in exchange for the previous phone to analyze.

Apple, of course, should head the list of companies doing this, providing "activist phones" to activists with their own secret monitoring tools installed, so that they can regularly check whether some new malware/implant has been installed.

iPhones are still better, suck it Android

Despite the fact that everybody and their mother is buying iPhone 0days to hack phones, it's still the most secure phone. Androids are open to any old hacker — iPhones are open only to nation-state hackers.

Use Signal, use Tor

I didn't see Signal on the list of apps the malware tapped into. There's no particular reason for this, other than that NSO hasn't gotten around to it yet. But I thought I'd point out how, yet again, Signal wins.

SMS vs. MitM

Some have pointed to SMS as the exploit vector, which is what gave Citizen Lab the evidence that the phone had been hacked.

It's a Safari exploit, so getting the user to visit a web page is required. This can be done over SMS, over email, over Twitter, or over any other messaging service the user uses. Presumably, SMS was chosen because users are more paranoid of links in phishing emails than they are of SMS messages.

However, the way it should be done is with man-in-the-middle (MitM) tools in the infrastructure. Such a tool would wait until the victim visited any webpage via Safari, then magically append the exploit to the page. As Snowden showed, this is apparently how the NSA does it, which is probably why they haven't gotten caught yet after exploiting iPhones for years.

The UAE (the government that is almost certainly trying to hack Mansoor's phone) has, in theory, the control over its infrastructure needed to conduct such a hack. We've already caught other governments doing similar things (like Tunisia). My guess is they were just lazy, and wanted to do it the way that was easiest for them.


Bugs don’t come from the Zero-Day Faerie

Post Syndicated from Robert Graham original http://blog.erratasec.com/2016/08/bugs-come-from-zero-day-faerie.html

This WIRED “article” (aka. thinly veiled yellow journalism) demonstrates the essential thing wrong with the 0day debate. Those arguing for NSA disclosure of 0days believe the Zero-Day Faerie brings them, that sometimes when the NSA wakes up in the morning, it finds a new 0day under its pillow.

The article starts with the sentences:

WHEN THE NSA discovers a new method of hacking into a piece of software or hardware, it faces a dilemma. Report the security flaw it exploits to the product’s manufacturer so it gets fixed, or keep that vulnerability secret—what’s known in the security industry as a “zero day”—and use it to hack its targets, gathering valuable intelligence.

But the NSA doesn’t accidentally “discover” 0days — it hunts for them, for the purpose of hacking. The NSA first decides it needs a Cisco 0day to hack terrorists, then spends hundreds of thousands of dollars either researching or buying the 0day. The WIRED article imagines that at this point, late in the decision cycle, that suddenly this dilemma emerges. It doesn’t.

The "dilemma" starts earlier in the decision chain. Is it worth it for the government to spend $100,000 to find and disclose a Cisco 0day? Or is it worth $100,000 for the government to find a Cisco 0day and use it to hack terrorists?

The answers are obviously "no" and "yes". There is little value to the national interest in spending $100,000 to find a Cisco 0day just to disclose it. There are so many more undiscovered vulnerabilities that this will make little dent in the total number of bugs. Sure, in the long run, "vuln disclosure" makes computers more secure, but a large government investment in vuln disclosure (and bug bounties) would only be a small increase on the total vuln disclosure that happens without government involvement.

Conversely, if it allows the NSA to hack into a terrorist network, $100,000 is cheap, and an obvious benefit.

My point is this. There are legitimate policy questions about government hacking and the use of 0days. At the bare minimum, there should be more transparency. But the premises of activists like Andy Greenberg are insane. NSA 0days aren't accidentally "discovered"; they don't come from a magic Zero-Day Faerie. The NSA instead hunts for them, after it has come up with a clearly articulated need for one that exceeds the value of mere disclosure.


Credit: @dinodaizovi, among others, has recently tweeted that "discover" is a flawed term that derails the 0day debate, as those like Greenberg assume it means what he describes in his opening paragraph: that the NSA comes across 0days accidentally. Dino suggested the word "hunt" instead.

EQGRP tools are post-exploitation

Post Syndicated from Robert Graham original http://blog.erratasec.com/2016/08/eqgrp-tools-are-post-exploitation.html

A recent leak exposed hacking tools from the "Equation Group", a group likely related to NSA TAO (the NSA/DoD hacking group). I thought I'd write up some comments.

Despite the existence of 0days, these tools seem to be overwhelmingly post-exploitation. They aren’t the sorts of tools you use to break into a network — but the sorts of tools you use afterwards.

The focus of the tools appears to be hacking into network equipment, installing implants, achieving permanence, and using the equipment to sniff network traffic.

Different pentesters have different ways of doing things once they've gotten inside a network, and this is reflected in their toolkits. Some focus on Windows and getting domain admin control, and have tools like mimikatz. Others focus on webapps, and how to install hostile PHP scripts. In this case, these tools reflect a methodology that goes after network equipment.

It's a good strategy. Finding equipment is easy and undetectable: just do a traceroute. As long as network equipment isn't causing problems, sysadmins ignore it, so your implants are unlikely to be detected. Internal network equipment is rarely patched, so old exploits are still likely to work. Some tools appear to target bugs in equipment that are likely older than the Equation Group itself.

In particular, because network equipment is at the network center instead of the edges, you can reach out and sniff packets through the equipment. Half the time it’s a feature of the network equipment, so no special implant is needed. Conversely, when on the edge of the network, switches often prevent you from sniffing packets, and even if you exploit the switch (e.g. ARP flood), all you get are nearby machines. Getting critical machines from across the network requires remotely hacking network devices.

So you see a group of pentest-type people (TAO hackers) with a consistent methodology, and toolmakers who develop and refine tools for them. Tool development is a rare thing among pentesters — they use tools, they don't develop them. Having programmers on staff dramatically changes the nature of pentesting.

Consider the program xml2pcap. I don't know what it does, but it looks like similar tools I've written in my own pentests. Various network devices will allow you to sniff packets, but produce output in custom formats. Therefore, you need to write a quick-and-dirty tool that converts from that weird format back into the standard pcap format for use with tools like Wireshark. More than once I've had to convert HTML/XML output to pcap. Setting port filters for 21 (FTP) and 23 (Telnet) produces low-bandwidth traffic with a high return (admin passwords) within networks — all you need to exploit this is a script that can convert the packets into a standard format.
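I don't know what the real xml2pcap does either, but a minimal sketch of that kind of converter (assuming, hypothetically, that the device's XML dump carries each frame as a hex string inside a <packet> element, and using scapy to write the pcap) looks something like this:

import xml.etree.ElementTree as ET
from scapy.all import Ether, wrpcap

def xml_to_pcap(xml_path, pcap_path):
    """Convert a (hypothetical) device XML capture dump to standard pcap."""
    frames = []
    for pkt in ET.parse(xml_path).getroot().iter("packet"):
        raw = bytes.fromhex(pkt.text.strip())   # one Ethernet frame per element
        frames.append(Ether(raw))
    wrpcap(pcap_path, frames)

xml_to_pcap("switch-capture.xml", "capture.pcap")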

Also consider the tftpd tool in the dump. Many network devices support that protocol for updating firmware and configuration. That’s pretty much all it’s used for. This points to a defensive security strategy for your organization: log all TFTP traffic.
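Acting on that advice is cheap. Here's a sketch using scapy's sniffer (needs capture privileges): it logs the initial TFTP read/write requests on UDP port 69; the data transfer itself moves to ephemeral ports, but the request alone is enough to flag the activity.

from scapy.all import sniff

def log_tftp(pkt):
    # Each hit is a TFTP request crossing the network; investigate it.
    print(pkt.summary())

# UDP port 69 is the well-known TFTP request port.
sniff(filter="udp port 69", prn=log_tftp, store=False)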

Same applies to SNMP. By the way, SNMP vulnerabilities in network equipment are still low-hanging fruit. SNMP stores thousands of configuration parameters and statistics in a big tree, meaning that it has an enormous attack surface. Any settable, variable-length value (OCTET STRING, OBJECT IDENTIFIER) is something you can play with for buffer overflows and format-string bugs. The Cisco 0day in the toolkit was one example.
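As a sketch of what "playing with" such a value looks like (scapy's built-in SNMP layer; the target, OID, and write community below are hypothetical placeholders, with sysName chosen simply because it's a writable OCTET STRING):

from scapy.all import (IP, UDP, SNMP, SNMPset, SNMPvarbind,
                       ASN1_OID, ASN1_STRING, sr1)

target = "192.0.2.1"                 # hypothetical device
oid = "1.3.6.1.2.1.1.5.0"            # sysName.0, a writable OCTET STRING

# Send an SNMP SET with an oversized string, the class of input that has
# historically triggered overflow/format-string bugs in embedded SNMP stacks.
probe = (IP(dst=target) / UDP(sport=40000, dport=161) /
         SNMP(community="private",   # assumed default write community
              PDU=SNMPset(varbindlist=[
                  SNMPvarbind(oid=ASN1_OID(oid),
                              value=ASN1_STRING(b"A" * 2000))])))

resp = sr1(probe, timeout=2, verbose=False)
print(resp.summary() if resp else "no response")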

Some have pointed out that the code in the tools is crappy, and they make obvious crypto errors (such as using the same initialization vectors). This is nonsense. It’s largely pentesters, not software developers, creating these tools. And they have limited threat models — encryption is to avoid easy detection that they are exfiltrating data, not to prevent somebody from looking at the data.

From that perspective, then, this is fine code, with some effort spent at quality for tools that don’t particularly need it. I’m a professional coder, and my little scripts often suck worse than the code I see here.

Lastly, I don’t think it’s a hack of the NSA themselves. Those people are over-the-top paranoid about opsec. But 95% of the US cyber-industrial-complex is made of up companies, who are much more lax about security than the NSA itself. It’s probably one of those companies that got popped — such as an employee who went to DEFCON and accidentally left his notebook computer open on the hotel WiFi.

Conclusion

Despite the 0days, these appear to be post-exploitation tools. They look like the sort of tools pentesters might develop over years, where each time they pop a target, they do a little development based on the devices they find inside that new network in order to compromise more machines/data.

National interest is exploitation, not disclosure

Post Syndicated from Robert Graham original http://blog.erratasec.com/2016/08/national-interest-is-exploitation-not.html

Most of us agree that more accountability/transparency is needed in how the government/NSA/FBI exploits 0days. However, the EFF’s positions on the topic are often absurd, which prevent our voices from being heard.

One of the EFF’s long time planks is that the government should be disclosing/fixing 0days rather than exploiting them (through the NSA or FBI). As they phrase it in a recent blog post:

as described by White House Cybersecurity Coordinator, Michael Daniel: “[I]n the majority of cases, responsibly disclosing a newly discovered vulnerability is clearly in the national interest.” Other knowledgeable insiders—from former National Security Council Cybersecurity Directors Ari Schwartz and Rob Knake to President Obama’s hand-picked Review Group on Intelligence and Communications Technologies—have also endorsed clear, public rules favoring disclosure.

The EFF isn’t even paying attention to what the government said. The majority of vulnerabilities are useless to the NSA/FBI. Even powerful bugs like Heartbleed or Shellshock are useless, because they can’t easily be weaponized. They can’t easily be put into a point-and-shoot tool and given to cyberwarriors.

Thus, it's a tautology to say "in the majority of cases vulns should be disclosed". It has no bearing on the minority of bugs the NSA is interested in — the cases where we want more transparency and accountability.

This minority of bugs is not discovered accidentally. Accidentally discovered bugs have little value to the NSA, so the NSA spends a considerable amount of money hunting down the particular bugs that would be of use, and in many cases buying useful vulns from 0day sellers. The EFF pretends the political issue is about 0days the NSA happens to come across accidentally — the real political issue is about the ones the NSA spent a lot of money on.

For these bugs, the minority of bugs the NSA sees, we need to ask whether it’s in the national interest to exploit them, or to disclose/fix them. And the answer to this question is clearly in favor of exploitation, not fixing. It’s basic math.

An end-to-end Apple iOS 0day (with sandbox escape and persistence) is worth around $1 million, according to recent bounties from Zerodium and Exodus Intel.

There are two competing national interests with such a bug. The first is whether such a bug should be purchased and used against terrorist iPhones in order to disrupt ISIS. The second is whether such a bug should be purchased and disclosed/fixed, to protect American citizens using iPhones.

Well, for one thing, the threat is asymmetric. As Snowden showed, the NSA has widespread control over network infrastructure, and can therefore insert exploits as part of a man-in-the-middle attack. That makes any browser-bugs, such as the iOS bug above, much more valuable to the NSA. No other intelligence organization, no hacker group, has that level of control over networks, especially within the United States. Non-NSA actors have to instead rely upon the much less reliable “watering hole” and “phishing” methods to hack targets. Thus, this makes the bug of extreme value for exploitation by the NSA, but of little value in fixing to protect Americans.

The NSA buys one bug per version of iOS. It only needs one to hack into terrorist phones. But there are many more bugs. If it were in the national interest to buy iOS 0days, buying just one would have little impact, since many more bugs still lurk, waiting to be found. The government would have to buy many bugs to make a significant dent in the risk.

And why is the government helping Apple at the expense of competitors anyway? Why is it securing iOS with its bug-bounty program and not Android? And not Windows? And not Adobe PDF? And not the million other products people use?

The point is that no sane person can argue that it’s worth it for the government to spend $1 million per iOS 0day in order to disclose/fix. If it were in the national interest, we’d already have federal bug bounties of that order, for all sorts of products. Long before the EFF argues that it’s in the national interest that purchased bugs should be disclosed rather than exploited, the EFF needs to first show that it’s in the national interest to have a federal bug bounty program at all.

Conversely, it’s insane to argue it’s not worth $1 million to hack into terrorist iPhones. Assuming the rumors are true, the NSA has been incredibly effective at disrupting terrorist networks, reducing the collateral damage of drone strikes and such. Seriously, I know lots of people in government, and they have stories. Even if you discount the value of taking out terrorists, 0days have been hugely effective at preventing “collateral damage” — i.e. the deaths of innocents.

The NSA/DoD/FBI buying and using 0days is here to stay. Nothing the EFF does or says will ever change that. Given this constant, the only question is how We The People get more visibility into what's going on, how our representatives get more oversight, and how the courts get clearer and more consistent rules. I'm the first to stand up and express my worry that the NSA might unleash a worm that takes down the Internet, or the FBI secretly hacks into my home devices. Policy makers need to address these issues, not the nonsense issues promoted by the EFF.

Scanning for ClamAV 0day

Post Syndicated from Robert Graham original http://blog.erratasec.com/2016/06/scanning-for-clamav-0day.html

Last week an 0day was released for ClamAV. Well, not really an 0day so much as somebody noticed idiotic features in ClamAV. So I scanned the Internet for the problem.

The feature is that the daemon listens for commands that tell it to do things like scan files. Normally, it listens only locally for such commands, but it can be reconfigured to listen remotely on TCP port 3310. Some packages that bundle ClamAV default to this.
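
For reference, that remote-listening behavior boils down to a couple of clamd.conf directives. A rough sketch follows; the exact directive names and defaults can vary between ClamAV versions and packages:

# clamd.conf (sketch) -- enable the TCP command socket
TCPSocket 3310
# bind to all interfaces instead of localhost; this is what exposes it to the network
TCPAddr 0.0.0.0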

It's a simple protocol that consists of sending a command in clear text, like "PING", "VERSION", "SHUTDOWN", or "SCAN <filename>".
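
You can poke at an exposed daemon by hand with something like netcat; the address below is just a placeholder:

echo VERSION | nc 192.0.2.10 3310

If a ClamAV daemon is listening, it should answer in clear text with its version banner.
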
So I ran masscan with the following command:

masscan 0.0.0.0/0 -p3310 --banners --hello-string[3310] VkVSU0lPTg==

Normally when you scan an address range (/0) and port (3310), you'd just see which ports are open/closed. That's not useful in this case, because it finds 2.7 million machines. Instead, you want to establish a full TCP connection. That's what the --banners option does, giving us only 38 thousand machines that successfully establish a connection. The remaining machines are large ranges on the Internet where firewalls are configured to respond with SYN-ACK, with the express purpose of frustrating port scanners.

But of those 38k machines, most are actually other services, like web servers running on odd ports: 51 machines running VNC, 641 machines running SSH, and so on.

To find ClamAV specifically, I send a command using the --hello-string feature. I send the text "VERSION", which must be encoded with base64 on the command line for masscan (in case you also need to send binary).
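
In case you're wondering where that VkVSU0lPTg== string comes from, it's just the base64 encoding of the ASCII text "VERSION", which you can reproduce with something like:

printf VERSION | base64

That prints VkVSU0lPTg==, the value passed to --hello-string above.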

This finds 5950 machines (roughly 6k) that respond with a ClamAV signature. A typical response contains the ClamAV version string and a date.

At first I thought the date was when they last updated the software (maybe as a patch). Roughly half had dates of either this morning or the day before. But no, it's actually the date when they last updated their signatures.

From this we can conclude that roughly half of ClamAV installations are configured to auto-update their signatures.

Roughly 2400 machines (nearly half) had the version 0.97.5. This was released in June 2012 (four years old). I’m thinking some appliance maker like Barracuda bundled the software — appliances are notorious for not getting updated software. That hints at why this non-default configuration is so common — it’s not users who made this decision, but the software that bundles ClamAV with other things. Scanning other ports gives me no clues — they appear all over the map, with different versions of SSH, different services running, different SSL versions, and so on. I thought maybe “mail server” (since that’d be a common task for ClamAV), but there were only a few servers, and they ran different mail server software. So it’s a mystery why this specific version is so popular.

I manually tested various machines with “SCAN foo”. They all replied “file not found”, which hints that all the units I found are vulnerable to this 0day.
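
That manual test looks something like this (again, a placeholder address):

echo "SCAN foo" | nc 192.0.2.10 3310

A reply complaining that the file can't be found means the daemon accepted a scan command from a random stranger on the Internet, which is exactly the problem.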

As for other things, I came across a bunch of systems claiming to be ChinaDDoS systems.

Conclusion

This sort of stuff shouldn’t exist. The number of ClamAV systems available on the public Internet should be zero.

Even inside a corporate network, the number should be 0. If that stuff is turned on, then it should be firewalled (such as with iptables) so that only specific machines can access it.
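
A minimal sketch of that firewalling with iptables, assuming (hypothetically) that 10.0.0.5 is the one management host that legitimately needs to reach the daemon:

# allow the one management host, then drop everyone else (addresses are hypothetical)
iptables -A INPUT -p tcp --dport 3310 -s 10.0.0.5 -j ACCEPT
iptables -A INPUT -p tcp --dport 3310 -j DROP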

Two important results are that half the systems run really old software (EOLed, no longer supported), and only half have the latest signature updates. There's some overlap: systems with the latest signatures but out-of-date software.

Defining "Gray Hat"

Post Syndicated from Robert Graham original http://blog.erratasec.com/2016/04/defining-gray-hat.html

WIRED has written an article defining “White Hat”, “Black Hat”, and “Grey Hat”. It’s incomplete and partisan.

Black Hats are the bad guys: cybercriminals (like Russian cybercrime gangs), cyberspies (like the Chinese state-sponsored hackers that broke into OPM), or cyberterrorists (ISIS hackers who want to crash the power grid). They may or may not include cybervandals (like some Anonymous activity) who simply deface websites. Black Hats are those who want to cause damage or profit at the expense of others.

White Hats do the same thing as Black Hats, but are the good guys. They break into networks (as pentesters), but only with permission, when a company/organization hires them to break into their own network. They research the security art, such as vulnerabilities, exploits, and viruses. When they find vulnerabilities, they typically work to fix/patch them. (That you frequently have to apply security updates to your computers/devices is primarily due to White Hats.) They develop products and tools for use by good guys (even though these sometimes can be used by the bad guys). The movie "Sneakers" refers to a team of White Hat hackers.

Grey Hat is anything that doesn't fit nicely within these two categories. There are many objective meanings. It can sometimes refer to those who break the law, but who don't have criminal intent. It can sometimes include the cybervandals, whose activities are more of a prank than a serious enterprise. It can refer to "Search Engine Optimizers" who use unsavory methods to trick search engines like Google into ranking certain pages higher in search results, to generate advertising profits.

But, it’s also used subjectively, to simply refer to activities the speaker disagrees with. Our community has many debates over proper behavior. Those on one side of a debate frequently use Gray Hat to refer to those on the other side of the debate.

The biggest recent debate is "0day sales to the NSA", which blew up after Stuxnet, and in particular, after Snowden. This is when experts look for bugs/vulnerabilities, but instead of reporting them to the vendor to be fixed (as White Hats typically do), they sell the bugs to the NSA, so the vulnerabilities (called "0days" in this context) can be used to hack computers in intelligence and military operations. Partisans who don't like the NSA use "Grey Hat" to refer to those who sell 0days to the NSA.
WIRED’s definition is this partisan definition. Kim Zetter has done more to report on Stuxnet than any other journalist, which is why her definition is so narrow.

But Google is your friend. If you search for “Gray Hat” on Google and set the time range to pre-Stuxnet, then you’ll find no use of the term that corresponds to Kim’s definition, despite the term being in widespread use for more than a decade by that point. Instead, you’ll find things like this EFF “Gray Hat Guide”. You’ll also find how L0pht used the term to describe themselves when selling their password cracking tool called “L0phtcrack”, from back in 1998.

Fast forward to today: activists from the EFF and ACLU call 0day sellers "merchants of death". But those on the other side of the debate point out how the 0days in Stuxnet saved thousands of lives. The US government had decided to stop Iran's nuclear program, and 0days gave them a way to do that without bombs, assassinations, or a shooting war. Those who engage in 0day sales do so with the highest professional ethics. If that WaPo article about Gray Hats unlocking the iPhone is true, then it's almost certain it's the FBI side of things who leaked the information, because 0day sellers don't. It's the government who is full of people who forswear their oaths for petty reasons, not those who do 0day research.

The point is, the ethics of 0day sales are a hot debate. Using either White Hat or Gray Hat to refer to 0day sellers prejudices that debate. It reflects your own opinion, not that of the listener, who might choose a different word. The definition by WIRED, and the use of "Gray Hat" in the WaPo article, are obviously biased and partisan.

Comments on the FBI success in hacking Farook’s iPhone

Post Syndicated from Robert Graham original http://blog.erratasec.com/2016/03/comments-on-fbi-success-in-hacking.html

Left-wing groups like the ACLU and the EFF have put out "official" responses to the news that the FBI cracked Farook's phone without help from Apple. I thought I'd give a response from a libertarian/technologist angle.

First, thank you, FBI, for diligently trying to protect us from terrorism. No matter how much I oppose you on the "crypto backdoors" policy question, and the constitutional questions brought up in this court case, I still expect you to keep trying to protect us.

Likewise, thank you, FBI, for continuing to be open to alternative means to crack the phone. I suppose you could've wrangled things to ignore people coming forward with new information, in order to pursue the precedent in the longer-term policy battle. I disagree with the many people in my Twitter timeline who believe this was some sort of FBI plot; I believe it's probably just what the FBI says it is: they first had no other solution, then they did.

Though I do wonder if the FBI's lawyers told them they would likely lose the appeal, thus setting a bad precedent, thus incentivizing the FBI to start looking for an alternative way out of the case. Whether or not this is actually what happened, I do worry that the government has all the power to pursue cases in such a way. It's like playing poker against an opponent who, when they fold, gets all their chips back.

One precedent has been set, though: what it means to "exhaust all other options", thereby justifying the All Writs Act. From one perspective, the FBI was right that no old/existing technique existed to crack the phone. But their claim that only Apple could create a new technique was false. Somebody else obviously could create a new technique. I know a lot of people in the forensics, jailbreak, and 0day communities. They all confirm that the FBI never approached them to see if they could create a new technique. Instead, somebody created a new technique and approached the FBI on their own.

The next time the FBI attempts to conscript labor under the All Writs Act, I expect the judge to demand the FBI prove they haven't tried to hire other engineers. In other words, the judge should ask, "Did you contact VUPEN (an 0day firm) or @i0n1c (a jailbreaker) to see if they could create a solution for you?"

Activists like the EFF are now demanding that the FBI make their technique public. This is nonsense. Whoever created the technique obviously wants to keep it secret so that Apple doesn't patch it in the next iOS release. It's probable that they gave the FBI Terms and Conditions such that they'd only provide a technique if it were kept secret. The only exception is if this were a forensics company like Cellebrite, which would then want to advertise the capability, to maximize revenue in the short period before Apple closes the hole. The point is, it's the coder's rights that are important here. It's the coder who came up with the jailbreak/0day who gets to decide what to do with it.

Is the person/company who approached the FBI with the solution a hero or a demon? On one hand, they've maintained the status quo, where Apple can continue to try to secure their phones, even against the FBI. On the other hand, they've forestalled the courts ruling in our favor, which many would have preferred. I don't know the answer.
Personally, had it been me, I'd've offered the exploit/jailbreak to the FBI, but at an exorbitant price they couldn't afford, because I just don't like the FBI.

Note: I doubt the technique was the NAND-mirroring one many have described, or the well-known "decapping" procedure that has a 30% chance of irretrievably destroying the data. Instead, I think it was an 0day or jailbreak. Those two communities are pretty large, and this is well within their abilities.

Also note: This is just my best guess, as somebody who does a lot of reverse engineering, coding, and hacking. I have little experience with the iPhone in general. I write this blog because people keep asking me, not because I feel this is what everyone else should believe. The only thing I really stand behind here is "coder's rights", which is what the ACLU and EFF oppose.

The FBI needs to disclose this vulnerability to Apple. Right now. It's irresponsible and dangerous not to.
— Amie Stepanovich (@astepanovich) March 29, 2016

DOJ: We're not giving the iOS 0-day to Apple.
Apple: We will continue to help law enforcement in other cases.
Way to play hard ball, Apple.
— Christopher Soghoian (@csoghoian) March 29, 2016