All posts by Robert Graham

About them Zoom vulns…

Post Syndicated from Robert Graham original https://blog.erratasec.com/2020/04/about-them-zoom-vulns.html

Today a couple of vulnerabilities were announced in Zoom, the popular work-from-home conferencing app. Hackers can possibly exploit these to do evil things to you, such as steal your password. Because of COVID-19, these vulns have hit the mainstream media. This means my non-techy friends and relatives have been asking about it. I thought I'd write up a blogpost answering their questions.

The short answer is that you don’t need to worry about it. Unless you do bad things, like using the same password everywhere, it’s unlikely to affect you. You should worry more about wearing pants on your Zoom video conferences in case you forget and stand up.

Now is a good time to remind people to stop using the same password everywhere and to visit https://haveibeenpwned.com to view all the accounts where they've had their password stolen. Using the same password everywhere is the #1 vulnerability the average person is exposed to, and is a possible problem here. For critical accounts (Windows login, bank, email), use a different password for each. (Sure, for accounts you don't care about, use the same password everywhere; I use 'Foobar1234'.) Write these passwords down on paper and put that paper in a secure location. Don't print them, don't store them in a file on your computer. Writing it on a Post-It note taped under your keyboard is adequate security if you trust everyone in your household.

If hackers use this Zoom method to steal your Windows password, then you aren’t in much danger. They can’t log into your computer because it’s almost certainly behind a firewall. And they can’t use the password on your other accounts, because it’s not the same.

Why you shouldn’t worry

The reason you shouldn’t worry about this password stealing problem is because it’s everywhere, not just Zoom. It’s also here in this browser you are using. If you click on file://hackme.robertgraham.com/foo/bar.html, then I can grab your password in exactly the same way as if you clicked on that vulnerable link in Zoom chat. That’s how the Zoom bug works: hackers post these evil links in the chat window during a Zoom conference.

It’s hard to say Zoom has a vulnerability when so many other applications have the same issue.

Many home ISPs, such as Comcast, AT&T, Cox, Verizon Wireless, and others, block such connections to the Internet. If this is the case, when you click on the above link, nothing will happen. Your computer will try to contact hackme.robertgraham.com, and fail. You may be protected from clicking on the above link without doing anything. If your ISP doesn't block such connections, you can configure your home router to do this. Go into the firewall settings and block "TCP port 445 outbound". Alternatively, you can configure Windows to only follow such links internal to your home network, but not to the Internet.
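If you're curious whether outbound port 445 is already blocked for you, a quick connection test will tell you. Below is a minimal Python sketch; the hostname is just the example domain from the link above (any Internet host works), and a timeout is suggestive rather than proof.

import socket

def check_outbound_445(host="hackme.robertgraham.com", timeout=5):
    # Try to open a TCP connection to port 445 (SMB) on an Internet host.
    try:
        with socket.create_connection((host, 445), timeout=timeout):
            return "connected: outbound port 445 is not blocked"
    except ConnectionRefusedError:
        return "refused: traffic reached the host, so nothing in between is blocking it"
    except socket.timeout:
        return "timed out: likely blocked by your ISP, router, or firewall"
    except OSError as err:
        return "error: %s" % err

if __name__ == "__main__":
    print(check_outbound_445())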

If hackers (like me, if you click on the above link) get your password, then they probably can't use it. That's because while your home Internet router allows outbound connections, it (almost always) blocks inbound connections. Thus, if I steal your Windows password, I can't use it to log into your home computer unless I also break physically into your house. But if I can break into your computer physically, I can hack it without knowing your password.

The same arguments apply to corporate desktops. Corporations should block such outbound connections. They can do this at their gateway firewall. They can also push policy to all the Windows desktops, so that desktops can only log into local file servers instead of remote ones. They should block inbound connections to this protocol. They should consider using two-factor authentication. If they follow standard practices, they have little to worry about.

If your Windows password is the same as your email password, then you have a potential problem. While I can’t use it to hack your Windows desktop computer, I can use it to hack your email account. Or your bank account. Or your Amazon.com account.

What you should do to protect yourself

By far the most important thing you should do to protect yourself from Internet threats is to use a different password for all your important accounts, like your home computer, your email, Amazon.com, and your bank. Write these down on paper (not a file on your computer). Store copies of that paper in a safe place. I put them in a copy of the book Catcher in the Rye on my bookshelf.

Secondly, be suspicious of links. If a friend invites you to a Zoom conference and says “hey, click on this link and tell me what you think”, then be suspicious. It may not actually be your friend, and the link may be hostile. This applies to all links you get, in applications other than Zoom, like your email client. There are many ways links are a threat other than this one technique.

This second point isn’t good advice: these technologies are designed for you to click on links. It’s impossible to be constantly vigilant. Even experts get fooled occasionally. You shouldn’t depend upon this protecting you. It’s like social distancing and the novel coronavirus: it cuts down on the threat, but doesn’t come close to eliminating it.

Make sure you block outbound port 445. You can configure Windows to do this, your home router, and of course, your ISP may be doing this for you.

Consider using two-factor authentication (such as SMS messages to your mobile phone) or password managers. Increasingly websites don’t manage username/passwords themselves, but instead use Google, Facebook, or Twitter accounts as the login. Pick those in preference to creating a new password protected account. Of course, this means if somebody tricks you to reveal your Google/Facebook/Twitter password you are in trouble, but you can use two-factor authentication for those accounts to make that less likely.

Why this hack works

You are familiar with web addresses like https://google.com/search?q=foobar. The first part of this address, https:// says that it’s a “secure hypertext protocol” address.

Other addresses are possible. One such address is file:// as in the example above. This tells the computer to use the Microsoft Windows "file server" protocol. This protocol is used within corporate networks, where desktops connect to file servers within the corporate network. When clicking on such a link, your computer will automatically send your username and encrypted password (sic) to log into the file server.

The internal corporate network is just a subset of the entire Internet. Thus, instead of naming servers local to the corporate network, the links can refer to remote Internet servers.

Nobody asks you for your password when you click on such links, either in this webpage, an email, or in Zoom chat. Instead, Windows is supplying the encrypted password you entered when you logged onto your desktop.

The hacker is only stealing the encrypted form of the password, not the original password. Therefore, their next step is to crack the password. This means guessing zillions of possible passwords, encrypting them, and seeing if there's a match. They can do this at rates of billions of guesses per second using specialized hardware and software on their own computers.

That means weak passwords like "Broncos2016" will get cracked in less than a second. But strong passwords like "pUqyQAM6GzWpWEyg" have a trillion times a trillion combinations, so they can't be guessed/cracked in a billion years, even by the NSA. Don't take this to mean that you need a "strong password" everywhere. This becomes very difficult to manage. Instead, people choose to use password managers or two-factor authentication or other techniques.
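To make that cracking loop concrete, here's a toy Python sketch. It's illustrative only: real Windows credentials use NTLM-family hashes rather than plain SHA-256, and real cracking rigs are vastly faster, but the attacker's loop of guess, hash, compare is the same.

import hashlib

def crack(stolen_hash, wordlist):
    # Guess candidate passwords, hash each one, and compare against the stolen hash.
    for guess in wordlist:
        if hashlib.sha256(guess.encode()).hexdigest() == stolen_hash:
            return guess
    return None

# Pretend this hash was captured from someone using a weak password.
stolen = hashlib.sha256(b"Broncos2016").hexdigest()
print(crack(stolen, ["password", "letmein", "Broncos2016", "pUqyQAM6GzWpWEyg"]))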

Note that on Windows, if the prefix is missing, it is assumed to be “file:”, so the links may appear as //hackme.robertgraham.com/foo/bar.html or \\hackme.robertgraham.com\foo\bar.html.

Is this overhyped?

Lots of people are criticizing this story as being overhyped. I'm not sure it is. It's one of those stories that merits publication, yet at the same time doesn't merit the widespread coverage it's getting in the mainstream press. It's spread further than it normally would have because of all the attention on the pandemic and work-from-home.

I don’t know if Zoom will “fix” this bug. It’s a useful feature on corporate conferences, to point to files on corporate servers. It’s up to companies (and individuals) to protect themselves generally against this threat, because it appears in a wide variety of applications, not just Zoom.

What about the other vuln?

Two vulns were announced. The one that’s gathered everyone’s attention is the “stealing passwords” one. The other vuln is even less dangerous. It allows somebody with access to a Mac to use the Zoom app to gain control over the computer. But if somebody had that much control over your Mac, then they can do other bad things to it.

Summary

In response to this news story, the thing you need to worry about is wearing pants, or making sure other household members wear pants. You never know when the Zoom videoconferencing camera accidentally catches somebody in the wrong pose. Unless you are extremely paranoid, I don’t think you need to worry about this issue in particular.

Huawei backdoors explanation, explained

Post Syndicated from Robert Graham original https://blog.erratasec.com/2020/03/huawei-backdoors-explanation-explained.html

Today Huawei published a video explaining the concept of “backdoors” in telco equipment. Many are criticizing the video for being tone deaf. I don’t understand this concept of “tone deafness”. Instead, I want to explore the facts.


This video seems to be in response to last month's Wall Street Journal story about Huawei misusing law enforcement backdoors. All telco equipment has backdoors usable only by law enforcement; the accusation is that Huawei has a backdoor into this backdoor, so that Chinese intelligence can use it.

That story was bogus. Sure, Huawei is probably guilty of providing backdoor access to the Chinese government, but something is deeply flawed with this particular story.

We know something is wrong with the story because the U.S. officials cited are anonymous. We don’t know who they are or what position they have in the government. If everything they said was true, they wouldn’t insist on being anonymous, but would stand up and declare it in a press conference so that every newspaper could report it. When something is not true or spun, then they anonymously “leak” it to a corrupt journalist to report it their way.

This is objectively bad journalism. The Society of Professional Journalists calls this the "Washington Game". They also discuss this on their Code of Ethics page. Yes, it's really common in Washington D.C. reporting, you see it all the time, especially with the NYTimes, Wall Street Journal, and Washington Post. But it happens because what the government says is news, regardless of whether it's false or propaganda, giving government officials the ability to influence journalists. Exclusive access to corrupt journalists is how they influence stories.

We know the reporter is being especially shady because of the one quote in the story that is attributed to a named official:

“We have evidence that Huawei has the capability secretly to access sensitive and personal information in systems it maintains and sells around the world,” said national security adviser Robert O’Brien.

This quote is deceptive because O’Brien doesn’t say any of the things that readers assume he’s saying. He doesn’t actually confirm any of the allegations in the rest of the story.

It doesn't say:

  • That Huawei has used that capability.
  • That Huawei intentionally put that capability there.
  • That this is special to Huawei (rather than everywhere in the industry).

In fact, this quote applies to every telco equipment maker. They all have law enforcement backdoors. These backdoors always have "controls" to prevent them from being misused. But these controls are always flawed, either in design or in how they are used in the real world.

Moreover, all telcos have maintenance/service contracts with the equipment makers. When there are ways around such controls, even unintentional ones, it’s the company’s own support engineers who will best know them.

I absolutely believe Huawei that it has done at least as much as any vendor to prevent backdoor access to its equipment.

At the same time, I also know that Huawei’s maintenance/service abilities have been used for intelligence. Several years ago there was an international incident. My company happened to be doing work with the local mobile company at the time. We watched as a Huawei service engineer logged in using their normal service credentials and queried the VLR databases for all the mobile devices connected to the cell towers nearest the incident in the time in question. After they executed the query, they erased the evidence from the log files.

Maybe this was just a support engineer who was curious. Maybe it was Chinese intelligence. Or, maybe it was the NSA. Seriously, if I were head of the NSA, I’d make it a priority to hack into Huawei’s support departments (or bribe their support engineers) in order to get this sort of backdoor access around the world.

Thus, while I believe Huawei has done as much as any other vendor to close backdoors, I also know of at least one case where they have abused backdoor access.

Now let’s talk about the contents of the video. It classifies “backdoors” in three ways:

  • law-enforcement “front doors”
  • service/maintenance access
  • malicious backdoors

I think their first point is to signal to the FBI that they are on law enforcement's side in the crypto backdoor debate. The FBI takes the same twisted definition, that law-enforcement backdoors aren't backdoors, but front-doors.

It’s still a backdoor, even if it’s for law-enforcement. It’s not in the interests of the caller/callee to be eavesdropped. Thus, from their point of view, the eavesdropping is “malicious”, even if it’s in the interests of society as a whole.

I mention this because it should demonstrate how Huawei's adoption of the law enforcement point of view backfires. What happens when Chinese intelligence comes to Huawei and demands access in a manner that is clearly legal under Chinese law? By accepting that all law-enforcement demands are legitimate, it means all Chinese government demands are legitimate.

Huawei may be no worse than any other company, but China is worse than free democracies. What counts as a legitimate law-enforcement demand in their country is intolerable in free countries. We've had six months of protests in Hong Kong over that issue.

In other words, Huawei is saying they don’t have backdoors because, in fact, they are front-doors for the Chinese government.

In conclusion, I don’t find that Huawei video to be “tone deaf”. Huawei has good reason to believe it’s being unfairly portrayed in “fake news” articles, like the WSJ article I cited above. At the same time, the threat posed by Huawei for Chinese spying is real.

A requirements spec for voting

Post Syndicated from Robert Graham original https://blog.erratasec.com/2020/03/a-requirements-spec-for-voting.html

In software development, we start with a “requirements specification” defining what the software is supposed to do. Voting machine security is often in the news, with suspicion the Russians are trying to subvert our elections. Would blockchain or mobile phone voting work? I don’t know. These things have tradeoffs that may or may not work, depending upon what the requirements are. I haven’t seen the requirements written down anywhere. So I thought I’d write some.

One requirement is that the results of an election must seem legitimate. That’s why responsible candidates have a “concession speech” when they lose. When John McCain lost the election to Barack Obama, he started his speech with:

“My friends, we have come to the end of a long journey. The American people have spoken, and they have spoken clearly. A little while ago, I had the honor of calling Sen. Barack Obama — to congratulate him on being elected the next president of the country that we both love.”

This was important. Many of his supporters were pointing out irregularities in various states, wanting to continue the fight. But there are always irregularities, or things that look like irregularities. In every election, if a candidate really wanted to, they could drag out an election indefinitely investigating these irregularities. Responsible candidates therefore concede with such speeches, telling their supporters to stop fighting.

It’s one of the problematic things in our current election system. Even before his likely loss to Hillary, Trump was already stirring up his voters to continue to the fight after the election. He actually won that election, so the fight never occurred, but it was likely to occur. It’s hard to imagine Trump ever conceding a fairly won election. I hate to single out Trump here (though he deserves criticism on this issue) because it seems these days both sides are convinced now that the other side is cheating.

The goal of adversaries like Putin’s Russia isn’t necessarily to get favored candidates elected, but to delegitimize the candidates who do get elected. As long as the opponents of the winner believe they have been cheated, then Russia wins.

Is the actual requirement of election security that the elections are actually secure? Or is the requirement instead that they appear secure? After all, when two candidates have nearly 50% of the real vote, then it doesn’t really matter which one has mathematical legitimacy. It matters more which has political legitimacy.

Another requirement is that the rules be fixed ahead of time. This was the big problem in the Florida recounts in the 2000 Bush election. Votes had ambiguities, like hanging chads. The legislature came up with rules for how to resolve the ambiguities and how to count the votes after the votes had been cast. Naturally, the party in power that comes up with the rules will choose those that favor itself.

The state of Georgia recently passed a law on election systems. Computer scientists in election security criticized the law because it didn't have their favorite approach, voter-verifiable paper ballots. Instead, the ballot printed a bar code.

But the bigger problem with the law is that it left open what happens if tampering is discovered. If an audit of the paper ballots finds discrepancies, what happens then? The answer is the legislature comes up with more rules. You don’t need to secretly tamper with votes, you can instead do so publicly, so that everyone knows the vote was tampered with. This then throws the problem to the state legislature to decide the victor.

Even the most perfectly secured voting system proposed by academics doesn’t solve the problem. It’ll detect voter tampering, but doesn’t resolve when tampering is detected. What do you do with tampered votes? If you throw them out, it means one candidate wins. If you somehow fix them, it means the other candidate wins. Or, you try to rerun the election, in which case a third candidate wins.

Usability is a requirement. A large part of the population cannot read simple directions. By this I don’t mean “the dumb people”, I mean everyone who has struggled to assemble Ikea furniture or a child’s toy.

That's one of the purposes of voting machines: to help people who would otherwise be confused by paper ballots. It's why there was a massive move to electronic machines after the Bush 2000 Florida election: they were more usable (less confusing).

This has long been a struggle in cybersecurity, as "secure" wars against "usable". A secure solution that confuses 10% of the population is not a solution. A solution that the entire population finds easy, but which has security flaws, is still preferable.

Election security isn’t purely about the ballot on election day. It includes the process months or years before hand, such as registering the voters or devising what goes into the ballot. It includes the process afterwards when counting/tabulating the votes.

A perfectly secure ballot therefore doesn’t mean a secure election.

Much of the suspected Russian hacking actually involved the voter registration rolls. Tampering with those lists, adding supporters (or fake people) to your own side, or removing supporters of the opponent's side, can swing the election.

This leads to one of the biggest problems: voter turnout and disenfranchisement, preventing people from voting. Perfect election security doesn’t solve this.

It’s hard to measure exactly how big the problem is. Both sides are convinced the other side is disenfranchising their own voters. In many cases, it’s a conspiracy theory.

But we do know that voter turnout in the United States is abysmally low versus other countries. In U.S. presidential elections, roughly 50% of eligible voters vote. In other democracies, the percentage is closer to 90%.

This distorts the elections toward extremes. As a candidate, you can choose either the “moderate” position, trying to win some votes from the other side, or you can choose the “extreme” positions, hoping to excite voters to get out and actually vote. Getting 10% more of your voters in the voting booths is better than luring 5% from the other side.

One solution proposed by many is to make election day a national holiday, so that voters don't have to choose between voting and work. Obviously, this would mean voting on Wednesdays: people may be willing to skip work to vote, but if voting day were a Monday, many would turn it into a three-day vacation and skip voting instead.

Voting apps on mobile phones have horrible security problems that make us cybersecurity types run screaming away from the solution. On the other hand, mobile phones have the best chance of solving this participation issue, increasing turnout from 50% to 90%. Are cybersecurity risks acceptable if they bring such dramatic improvements in participation rates? Conversely, can we describe any system as broken that fails to achieve such rates? Is 90% participation one of our "requirements" that we are failing to meet?

By the way, by "90%" I mean "of people 18 or over", not "eligible voters". Obviously, you can improve participation among eligible voters by playing with who is eligible. Many states forbid convicted felons from voting, which sounds like a good idea on its surface, but which is problematic in a democracy that jails 10 times more of its population than other democracies. Whatever legitimate reasons there are for removing eligibility have to fit within that 90% number.

The best way we have to make voting secure is to make it completely transparent, so that everybody knows how you voted. This is obviously not a viable strategy, because of course that then allows people to coerce/bribe you into voting a certain way. So we want anonymity.

But is perfect anonymity necessary? Many voters don’t care if their vote is public, and indeed, want to proclaim very publicly who they voted for.

Imagine that to secure the mobile voting app, it randomly chooses 1% of votes and makes them public. It would be a risk voters would accept when using the app versus some other voting mechanism.

I mean this as a thought experiment. I choose 1% random selection because it prevents obvious coercion and bribery. But this naive implementation still has flaws. More work needs to be done to stop coercion, and you have to secure the system from hackers who only reveal the votes they haven't tampered with. But I work with a lot of cryptographic protocols that are able to preserve things in strange ways, so while a naive protocol may be flawed, I'm not sure all are.

In other words, the requirement of the system is not that votes are anonymous, but that votes cannot be coerced or bribed. This is a common problem in software development: the requirements aren’t the actual requirements, but written in a way prejudicial toward a preferred solution. This excludes viable solutions.

This blogpost is about questions not answers. As a software developer, I know that we start with listing the requirements the system is designed to solve. I want to know what those requirements are for “voting in a democracy”. I’m unsuccessful googling for such a list; what I do find fails to include the above ideas, for example. I know that blockchain is a stupid answer to most any question, but on the other hand, I don’t know exactly what this question is, so how can I explicitly call blockchain a stupid solution? Mobile devices have laughable security for voting, but at the same time, our voting has major problems they would solve, so I can’t rule them out either.


This comment makes a good point, as was demonstrated by the Iowa Democratic caucuses:

Software development normally works in iterative steps, whereas it has to work right during voting. You can’t expect to patch the app halfway through the voting day.

There’s no evidence the Saudis hacked Jeff Bezos’s iPhone

Post Syndicated from Robert Graham original https://blog.erratasec.com/2020/01/theres-no-evidence-saudis-hacked-jeff.html

There’s no evidence the Saudis hacked Jeff Bezos’s iPhone.

This is the conclusion of all the independent experts who have reviewed the public report behind the U.N.'s accusations. That report failed to find evidence proving the theory, but instead simply found unknown things it couldn't explain, which it pretended was evidence.

This is a common flaw in such forensics reports. When there's evidence, it's usually found and reported. When there's no evidence, investigators keep looking. Today's devices are complex, so if you keep looking, you always find anomalies you can't explain. There are only two results from such investigations: proof of bad things, or anomalies that suggest bad things. There's never any proof that no bad things exist (at least, not in my experience).

Bizarre and inexplicable behavior doesn’t mean a hacker attack. Engineers trying to debug problems, and support technicians helping customers, find such behavior all the time. Pretty much every user of technology experiences this. Paranoid users often think there’s a conspiracy against them when electronics behave strangely, but “behaving strangely” is perfectly normal.

When you start with the theory that hackers are involved, then you have an explanation for all that's unexplainable. It's all consistent with the theory, thus proving it. This is called "confirmation bias". It's the same thing that props up conspiracy theories like UFOs: space aliens can do anything, thus, anything unexplainable is proof of space aliens. Alternate explanations, like skunkworks testing a new jet, never seem as plausible.

The investigators were hired to confirm bias. Their job wasn’t to do an unbiased investigation of the phone, but instead, to find evidence confirming the suspicion that the Saudis hacked Bezos.

Remember that the story started in February of 2019 when the National Enquirer tried to extort Jeff Bezos with sexts between him and his paramour Lauren Sanchez. Bezos immediately accused the Saudis of being involved. Even after it was revealed that the sexts came from Michael Sanchez, the paramour's brother, Bezos's team doubled down on their accusations that the Saudis hacked Bezos's phone.

The FTI report tells a story beginning with the Saudi Crown Prince sending Bezos a WhatsApp message containing a video. The story goes:

The downloader that delivered the 4.22MB video was encrypted, delaying or preventing further study of the code delivered along with the video. It should be noted that the encrypted WhatsApp file sent from MBS’ account was slightly larger than the video itself.

This story is invalid. Such messages use end-to-end encryption, which means that while nobody in between can decrypt them (not even WhatsApp), anybody with possession of the ends can. That’s how the technology is supposed to work. If Bezos loses/breaks his phone and needs to restore a backup onto a new phone, the backup needs to have the keys used to decrypt the WhatsApp messages.

Thus, the forensics image taken by the investigators had the necessary keys to decrypt the video; the investigators simply didn't know about them. In a previous blogpost I explain these magical WhatsApp keys and where to find them, so that anybody, even you at home, can do forensics on their own iPhone, retrieve these keys, and decrypt their own videos.

The above story implicates the encrypted file because it's slightly larger than the unencrypted file. One possible explanation is that these extra bytes contain an exploit, virus, or malware.

However, there’s a more accurate explanation: all encrypted WhatsApp videos will be larger than the unencrypted versions by between 10 and 25 bytes, for verification and padding. It’s a standard way how encryption works.

This is a great demonstration of confirmation bias in action, how dragons breed on the edge of maps. When you expect the encrypted and unencrypted versions to be the same size, this anomaly is inexplicable and suggestive of hacker activity. When you know how the encryption works, how there’s always an extra 10 to 25 bytes, then the idea is silly.

It’s important to recognize how much the story hinges on this one fact. They have the unencrypted video and it’s completely innocent. We have the technology to exonerate that video, and it’s exonerated. Thus, if a hack occurred, it must be hidden behind the encryption. But when we unmask the encryption and find only the video we already have, then the entire report will break down. There will no longer be a link between any hack found on the phone and the Saudis.

But even if there isn’t a link to the Saudis, there may still be evidence the phone was hacked. The story from the FTI forensics report continues:

We know from a comprehensive examination of forensics artifacts on Bezos’ phone that within hours of the encrypted downloader being received, a massive and unauthorized exfiltration of data from Bezos’ phone began, continuing and escalating for months thereafter. … The amount of data being transmitted out of Bezos’ phone changed dramatically after receiving the WhatsApp video file and never returned to baseline. Following execution of the encrypted downloader sent from MBS’ account, egress on the device immediately jumped by approximately 29,000 percent.

I've performed the same sort of forensics on my phones and have found that there is no such thing as a normal "baseline" of traffic, as described in this Twitter thread. One reason is that users do unexpected things, like forwarding an email that has a large attachment, or visiting a website that causes unexpectedly high amounts of traffic. Another reason is that the traffic isn't stored in nice hourly or daily buckets as the above story implies. Instead, when you use the app for months, you get just a single record of how much data the app has sent over those months. For example, I see one day where the Uber app exfiltrated 56-megabytes of data from my phone, which seems an inexplicable anomaly. However, that's just the date the record was recorded, reflecting months of activity as Uber has run in the background on my phone.

I can’t explain all the bizarre stuff I see on my phone. I only ever download podcasts, but the records show the app uploaded 150-megabytes. Even when running over months, this is excessive. But lack of explanation doesn’t mean this is evidence of hacker activity trying to hide traffic inside the podcast app. It just means something odd is going on, probably a bug or inefficient design, that a support engineer might want to know about in order to fix.

Conclusion

Further FTI investigation might find more evidence that actually shows a hack or Saudi guilt, but the current report should be considered debunked. It contains no evidence, only things it’s twisted to create the impression of evidence.

Bezos’s phone may have been hacked. The Saudis may be responsible. They certainly have the means, motive, and opportunity to do so. There’s no evidence exonerating the Saudis as a whole.

But there is evidence that will either prove Saudi culpability or exonerate that one video, the video upon which the entire FTI report hinges. And we know that video will likely be exonerated simply because that’s how technology works.

The entire story hinges on that one video. If debunked, the house of cards falls down, at least until new evidence is found.

The mainstream press has done a crappy job. It's a single-sourced story starting with "experts say". But it's not many experts, just the FTI team. And they aren't unbiased experts, but people hired specifically to prove Bezos's accusation against the Saudis. Rather than healthy skepticism looking for other experts to dispute the story, the press has jumped in taking Bezos's side in the dispute.

I am an expert, and as I’ve shown in this blogpost (and linked posts with technical details), I can absolutely confirm the FTI report is complete bunk. It contains no evidence of a hack, just anomalies it pretends are evidence.

How to decrypt WhatsApp end-to-end media files

Post Syndicated from Robert Graham original https://blog.erratasec.com/2020/01/how-to-decrypt-whatsapp-end-to-end.html

At the center of the "Saudis hacked Bezos" story is a mysterious video file investigators couldn't decrypt, sent by Saudi Crown Prince MBS to Bezos via WhatsApp. In this blog post, I show how to decrypt it. Once decrypted, we'll either have a smoking gun proving the Saudis' guilt, or exoneration showing that nothing in the report implicated the Saudis. I show how everyone can replicate this on their own iPhones.

The steps are simple:

  • backup the phone to your computer (macOS or Windows), using one of many freely available tools, such as Apple’s own iTunes app
  • extract the database containing WhatsApp messages from that backup, using one of many freely available tools, or just hunt for the specific file yourself
  • grab the .enc file and decryption key from that database, using one of many freely available SQL tools
  • decrypt the video, using a tool I just created on GitHub

End-to-end encrypted downloader

The FTI report says that within hours of receiving a suspicious video that Bezos’s iPhone began behaving strangely. The report says:

…analysis revealed that the suspect video had been delivered via an encrypted downloader host on WhatsApp’s media server. Due to WhatsApp’s end-to-end encryption, the contents of the downloader cannot be practically determined. 

The phrase “encrypted downloader” is not a technical term but something the investigators invented. It sounds like a term we use in malware/viruses, where a first stage downloads later stages using encryption. But that’s not what happened here.
Instead, the file in question is simply the video itself, encrypted, with a few extra bytes due to encryption overhead (up to 15 bytes of padding, plus a 10-byte authentication code appended at the end).

Now let’s talk about “end-to-end encryption”. This only means that those in middle can’t decrypt the file, not even WhatsApp’s servers. But those on the ends can — and that’s what we have here, one of the ends. Bezos can upgrade his old iPhone X to a new iPhone XS by backing up the old phone and restoring onto the new phone and still decrypt the video. That means the decryption key is somewhere in the backup.

Specifically, the decryption key is in the file named 7c7fba66680ef796b916b067077cc246adacf01d in the backup, in the table named ZWAMEDIAITEM, as the first protobuf field in the field named ZMEDIAKEY. These details are explained below.

WhatsApp end-to-end encryption of video

Let's discuss how videos are transmitted using text messages.

We'll start with SMS, the old messaging system built into the phone system that predates modern apps. It can only send short text messages of a few hundred bytes at a time. These messages are too small to hold a complete video many megabytes in size. They are sent through the phone system itself, not via the Internet.

When you send a video via SMS, what happens is that the video is uploaded to the phone company's servers via HTTP. Then, a text message is sent with a URL link to the video. When the recipient gets the message, their phone downloads the video from the URL. The text messages going through the phone system just contain the URL; an Internet connection is used to transfer the video.

This happens transparently to the user. The user just sees the video and not the URL. They'll only notice a difference when using ancient 2G mobile phones that can get the SMS messages but which can't actually connect to the Internet.

A similar thing happens with WhatsApp, only with encryption added.

The sender first encrypts the video, with a randomly generated key, before uploading via HTTP to WhatsApp’s servers. This means that WhatsApp can’t decrypt the files on their servers.

The sender then sends a message containing the URL and the decryption key to the recipient. This message is encrypted end-to-end, so again, WhatsApp itself cannot decrypt the contents of the message.

The recipient downloads the video from WhatsApp’s server, then decrypts it with the encryption key.

Here's an example: a friend sent me a video via WhatsApp.

All the messages are sent using end-to-end encryption for this session. As described above, the video itself is not sent as a message, only the URL and a key. These are:
mediakey = TKgNZsaEAvtTzNEgfDqd5UAdmnBNUcJtN7mxMKunAPw=
These are the real values from the above exchange. You can click on the URL and download the encrypted file to your own computer. The file is 22,161,850 bytes (22-megabytes) in size. You can then decrypt it using the above key, using the code shown below. I can’t stress this enough: you can replicate everything I’m doing in this blogpost, to do the things the original forensics investigators hired by Bezos could not.

iPhone backups and file extraction

The forensics report in the Bezos story mentions lots of fancy, expensive tools available only to law enforcement, like Cellebrite. However, none of these appear necessary to produce their results. It appears you can get the same results at home using freely available tools.
There are two ways of grabbing all the files from an iPhone. One way is just to do a standard backup of the phone, to iCloud or to a desktop/laptop computer. A better way is to jailbreak the phone and get a complete image of the internal drive. You can do this on an iPhone X (like Bezos's phone) using the 'checkm8' jailbreak. It's a little complicated, but well within the abilities of techies. A backup gets only the essential files needed to restore the phone, but a jailbreak gets everything.
In this case, it appears the investigators only got a backup of the phone. For the purposes of decrypting WhatsApp files, it’s enough. As mentioned above, the backup needs these keys in order to properly restore a phone.

You can do this using Apple’s own iTunes program on Windows or macOS. This copies everything off the iPhone onto your computer. The intended purpose is so that if you break your phone, lose it, or upgrade to the latest model, you can easily restore from this backup. However, we are going to use this backup for forensics instead (we have no intention of restoring a phone from this backup).

So now that you’ve copied all the files to your computer, where are they, what are they, and what can you do with them?
Here's the location of the files. There are two different locations for Windows, depending upon whether you installed iTunes from Apple or Microsoft.
  • macOS: /Users/username/Library/Application Support/MobileSync/Backup
  • Windows: /Users/username/AppData/Roaming/Apple Computer/MobileSync/Backup
  • Windows: /Users/username/Apple/MobileSync/Backup
The backup for a phone is stored under the unique ID of the phone, the UDID.
Inside the backup directory, Apple doesn't use the original filenames on the phone. Instead, it stores them using the SHA1 hash of the original filename. The backup directory has 256 subdirectories named 00, 01, 02, …, ff corresponding to the first byte of the hash, each directory containing the corresponding files.
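You can compute these hashed names yourself. More precisely, the backup names each file as the SHA1 hash of its backup domain plus relative path. Here's a small sketch; the WhatsApp domain string is an assumption that may vary between iOS and WhatsApp versions, so verify against your own backup.

import hashlib

def backup_filename(domain, relative_path):
    # iTunes/Finder backups store each file under SHA1("Domain-relativePath").
    return hashlib.sha1(("%s-%s" % (domain, relative_path)).encode()).hexdigest()

# Assumed domain/path for WhatsApp's chat database.
print(backup_filename("AppDomainGroup-group.net.whatsapp.WhatsApp.shared", "ChatStorage.sqlite"))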

The file we are after is WhatsApp's ChatStorage.sqlite file, whose full pathname on the iPhone hashes to "7c7fba66680ef796b916b067077cc246adacf01d".
On macOS, the Backup directory is protected. You have to go into the Security and Privacy settings to give the Terminal app “Full Disk Access” permissions. Then, copy this file to some other directory (like ~) where other apps can get at it.
Note that I also gave "iPhone Backup Extractor" the same permissions. This program provides a GUI that gives files their original names (like "ChatStorage.sqlite") instead of hashes like 7c7fba666… It also has a bunch of built-in logic for extracting things like photos and text messages.
The point of this section is to show that getting these files is simply a matter of copying off your phone and knowing which file to look for.

Working with WhatsApp chat log

In the previous section, I describe how to backup the iPhone, and then retrieve the file ChatStorage.sqlite from that backup. This file contains all your chat messages sent and received on your iPhone. In this section, I describe how to read that file.
This file is an SQL database in standard "sqlite" format. This is a popular open-source project for embedding SQL databases within apps, and it's used everywhere. This means that you can use hundreds of GUIs, command-line tools, and programming languages to read this file.
I use "sqlitebrowser", which runs as a GUI on Windows, macOS, and Linux. The filename to open is the file we copied in the step above, the hash of the original name. I then click on "Browse Data" and select the table ZWAMEDIAITEM. I see a list of URLs in the column ZMEDIAURL, and the corresponding decryption keys in the column ZMEDIAKEY.
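If you prefer a script to a GUI, Python's built-in sqlite3 module can pull the same columns. A minimal sketch, run against the copied file (still under its hashed backup name):

import sqlite3

db = sqlite3.connect("7c7fba66680ef796b916b067077cc246adacf01d")
rows = db.execute("SELECT ZMEDIAURL, ZMEDIAKEY FROM ZWAMEDIAITEM WHERE ZMEDIAURL IS NOT NULL")
for url, key_blob in rows:
    # key_blob is the raw protobuf blob; decoding it is covered below.
    print(url, key_blob.hex() if key_blob else None)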

The media keys are “blobs” — “binary large objects”. If I click on one of those blobs I see the following as the mediakey:

This binary data is in a format called protobuf. The byte 0x0a means the first field is a variable-length string. The next byte 0x20 means the string is 32 bytes long. The next 32 bytes are our encryption key. The next field (0x12 0x20) is a hash of the file. There are two more fields at the end, but I don't understand what they are.
So in hex, our encryption key is:
4ca80d66c68402fb53ccd1207c3a9de5401d9a704d51c26d37b9b130aba700fc
Or if encoded in BASE64:
TKgNZsaEAvtTzNEgfDqd5UAdmnBNUcJtN7mxMKunAPw=
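Extracting the key from the blob can also be scripted. Given the layout just described (an 0x0a tag, an 0x20 length byte, then the 32 raw key bytes), a minimal sketch:

import base64

def mediakey_from_blob(blob):
    # Field 1 of the protobuf: tag 0x0a, length byte 0x20, then 32 raw key bytes.
    assert blob[0] == 0x0a and blob[1] == 0x20, "unexpected ZMEDIAKEY layout"
    return blob[2:2 + 32]

# Round-trip example: rebuild the first protobuf field from the key above and parse it back.
key = base64.b64decode("TKgNZsaEAvtTzNEgfDqd5UAdmnBNUcJtN7mxMKunAPw=")
blob = b"\x0a\x20" + key   # a real ZMEDIAKEY blob has more fields after this
print(base64.b64encode(mediakey_from_blob(blob)).decode())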
We now have the mediaurl and mediakey mentioned above. All we need to do is download the file and decrypt it.

How to decrypt a WhatsApp media file

Now we come to the meat of this blogpost: given a URL and a key, how do we decrypt it? The answer is "unsurprising crypto". It's one of the most important principles of cryptography that whatever you do should be boring and normal, as is the case here. If the crypto is surprising and interesting, it's probably wrong.

Thus, the only question is which of the many standard ways did WhatsApp choose?

Firstly, they chose AES-256, which is the most popular choice for such things these days. Its key is 256 bits, or 32 bytes. AES is a "block cipher", which means it encrypts a block at a time. The block size is 16 bytes. When the final block of data is less than 16 bytes, it needs to be padded out to the full length.

But that's not complete. In modern times we've come to realize that simple encryption like this is not enough. A good demonstration of this is the famous "ECB penguin" [1] [2] [3]. If two 16-byte blocks in the input have the same cleartext data, they'll have the same encrypted data. This is bad, as it allows much to be deduced/reverse-engineered from the encrypted contents even if those contents can't be decrypted.

Therefore, WhatsApp needs not only an encryption algorithm but also a mode to solve this problem. They chose CBC or "cipher block chaining", which as the name implies, chains all the blocks together. This is also a common solution.

CBC mode solves the ECB penguin problem of two blocks encrypting the same way, but it still has the problem of two files encrypting the same way when the first parts of the files are the same. Everything up to the first difference will encrypt the same, after which they will be completely different.

This is fixed by adding what's called an initialization vector or nonce: some random data that's different for each file. This guarantees that even if you encrypt the same file twice with the same key, the encrypted data will still be completely different, unrelated. In WhatsApp's case, the IV isn't prepended to the file; as we'll see below, it's derived from the media key.

Finally, there is the problem that the encrypted file may be corrupted in transit, accidentally or maliciously. You need to check this with a hash or message authentication code (aka MAC). In the case of WhatsApp, this is a 10-byte MAC appended to the encrypted data, which we'll have to strip off. This MAC is generated using a different key than the AES key. In other words, we need two keys: one to encrypt the file, and a second to verify that the contents haven't been changed.

This explains why there was a 14-byte difference between the encrypted video and the unencrypted video: 10 bytes for the MAC and 4 bytes of padding.

The code

Here is the code that implements all the above stuff:
At the top of the file I’ve hard-coded the values for the mediaurl and mediakey to the ones I found above in my iPhone backup.
The mediakey is only 32-bytes, but we need more. We need 32-bytes for the AES-256 key, another 16-bytes for the initialization vector, and 32-bytes for the message authentication key.
This common problem is solved by using a special pseudo-randomization function to expand a small amount of data into a larger amount of data, in this case from 32 bytes to 112 bytes. The standard WhatsApp chose is the "HMAC Key Derivation Function" (HKDF). This is expressed in my code as the following, where I expand the key into the IV, cipherkey, and mackey:
mediaKeyExpanded=HKDF(base64.b64decode(mediaK),112,salt)
iv=mediaKeyExpanded[:16]
cipherKey= mediaKeyExpanded[16:48]
macKey=mediaKeyExpanded[48:80]
Then, I download the file from the URL. I have to strip the last 10 bytes from the file, which are the message authentication code.
mediaData= urllib2.urlopen(mediaurl).read()
file= mediaData[:-10]
mac= mediaData[-10:]
Then using the cipherkey from the first step, I decrypt the file. I have to strip the padding at the end of the file.
decryptor = AES.new(cipherKey, AES.MODE_CBC, iv)
imgdata=AESUnpad(decryptor.decrypt(file))
To download and decrypt the video, simply run the program.
I’m not going to link to the video myself. If you want to know what it contains, you are going to have to run the program yourself.

Remember that this example is a video a friend sent to me, and not the original video sent by MBS to Bezos. But the same principle applies. Simply look in that file in the backup, extract the URL and mediakey, insert into this program, and you’ll get that file decrypted.
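For reference, here's a self-contained sketch that stitches those fragments together. The HKDF step is standard RFC 5869 HMAC-SHA256; the info string "WhatsApp Video Keys" and the detail that the trailing 10-byte MAC covers the IV plus ciphertext are assumptions based on public write-ups of the format, so verify against your own files. It uses the pycryptodome package for AES.

import base64
import hashlib
import hmac
import urllib.request
from Crypto.Cipher import AES   # pip install pycryptodome

def hkdf_sha256(key, length, info=b"", salt=b"\x00" * 32):
    # RFC 5869 extract-and-expand using HMAC-SHA256.
    prk = hmac.new(salt, key, hashlib.sha256).digest()
    okm, block, counter = b"", b"", 1
    while len(okm) < length:
        block = hmac.new(prk, block + info + bytes([counter]), hashlib.sha256).digest()
        okm += block
        counter += 1
    return okm[:length]

def decrypt_whatsapp_media(media_url, media_key_b64, info=b"WhatsApp Video Keys"):
    expanded = hkdf_sha256(base64.b64decode(media_key_b64), 112, info)
    iv, cipher_key, mac_key = expanded[:16], expanded[16:48], expanded[48:80]

    data = urllib.request.urlopen(media_url).read()
    ciphertext, mac = data[:-10], data[-10:]

    # Assumed: the last 10 bytes are a truncated HMAC-SHA256 over iv + ciphertext.
    expected = hmac.new(mac_key, iv + ciphertext, hashlib.sha256).digest()[:10]
    if mac != expected:
        raise ValueError("MAC mismatch: wrong key, wrong info string, or corrupted download")

    plaintext = AES.new(cipher_key, AES.MODE_CBC, iv).decrypt(ciphertext)
    return plaintext[:-plaintext[-1]]   # strip PKCS#7 padding

if __name__ == "__main__":
    mediaurl = "https://mmg.whatsapp.net/..."   # placeholder: the .enc URL from ZMEDIAURL
    mediakey = "TKgNZsaEAvtTzNEgfDqd5UAdmnBNUcJtN7mxMKunAPw="
    with open("decrypted.mp4", "wb") as out:
        out.write(decrypt_whatsapp_media(mediaurl, mediakey))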

Conclusion

The report from FTI doesn't find evidence. Instead, it finds the unknown. It can't decrypt the .enc file from WhatsApp. It therefore concludes that the file must contain some sort of evil malware hidden behind that encryption, encryption which they can't break.

But this is nonsense. They can easily decrypt the file, and prove conclusively whether it contains malware or exploits.

They are reluctant to do this because then their entire report would fall apart. Their conclusion is based upon Bezos's phone acting strange after receiving that video. If that video is decrypted and shown not to contain a hack of some sort, then the rest of the reasoning is invalid. Even if they find other evidence that Bezos's phone was hacked, there would no longer be anything linking it to the Saudis.


So that tweet was misunderstood

Post Syndicated from Robert Graham original https://blog.erratasec.com/2019/12/when-tweets-are-taken-out-of-context.html

I'm currently experiencing the toxic hell that is a misunderstood tweet going viral. It's a property of social media: the more they can deliberately misunderstand you, the more they can justify the toxicity of their response. Unfortunately, I had to delete it in order to stop all the toxic crud and threats of violence.

The context is how politicians distort everything. It’s like whenever they talk about sea level rise, it’s always about some city like Miami or New Orleans that is sinking into the ocean already, even without global warming’s help. Pointing this out isn’t a denial of global warming, it’s pointing out how we can’t talk about the issue without exaggeration. Mankind’s carbon emissions are indeed causing sea level to rise, but we should be talking about how this affects average cities, not dramatizing the issue with the worst cases.

The same is true of health care. It's a flawed system that needs change. But we don't discuss the people making the best of all bad choices. Instead, we cherry-pick those who made the worst possible choice, and then blame the entire bad outcome on the system.

My tweet is in response to this Elizabeth Warren reference to a story where somebody chose the worst of several bad choices:


My tweet is widely misunderstood as saying “here’s a good alternative”, when I meant “here’s a less bad alternative”. Maybe I was wrong and it’s not “less bad”, but nobody has responded that way. All the toxic spew on Twitter has been based on their interpretation that I was asserting it was “good”.

And the reason I chose this particular response is because I thought it was a Democrat talking point. As Bernie Sanders (a 2020 presidential candidate) puts it:

“The original insulin patent expired 75 years ago. Instead of falling prices, as one might expect after decades of competition, three drugmakers who make different versions of insulin have continuously raised prices on this life-saving medication.”

This is called “evergreening”, as described in articles like this one that claim insulin makers have been making needless small improvements to keep their products patent-protected, so that they don’t have to compete against generics whose patents have expired.

It’s Democrats like Bernie who claim expensive insulin is little different than cheaper insulin, not me. If you disagree, go complain to him, not me.

Bernie is wrong, by the way. The more expensive “insulin analogs” result in dramatically improved blood sugar control for Type 1 diabetics. The results are life changing, especially when combined with glucose monitors and insulin pumps. Drug companies deserve to recoup the billions spent on these advances. My original point is still true that “cheap insulin” is better than “no insulin”, but it’s also true that it’s far worse than modern, more expensive insulin.

Anyway, I wasn’t really focused on that part of the argument but the other part, how list prices are an exaggeration. They are a fiction that nobody needs to pay, even those without insurance. They aren’t the result of price gouging by drug manufacturers, as Elizabeth Warren claims. But politicians like Warren continue to fixate on list prices even when they know they are inaccurate.

The culprit for high list prices isn’t the drug makers, but middlemen in the supply chain known as “pharmacy benefits managers” or “PBMs”. Serious politicians focus on PBMs and getting more transparency in the system, populist presidential candidates blame “Big Pharma”.

PBMs negotiate prices between insurers, pharmacies, and drug makers. Their incentive is to maximize the rebates coming back from drug manufacturers. As prices go up, so do rebates, leaving the actual price people pay, and the actual price drug makers earn, unchanged. You can see this in the drug makers’ SEC profit/loss filings. If drug makes are “price gouging”, it’s not showing up on their bottom line.

It’s PBMs that have the market power. The largest PBMs are bigger than the largest drug manufacturers, as the Wikipedia article explains. They are the ones with the most influence on prices.

PBMs' primary customers are insurance companies, but they'll happily do business with the uninsured. Free drug discount cards are widely available. There are also websites like GoodRX.com that do the same thing. You don't need to pay them money, or even sign up with them. Simply go to the site, search for that expensive insulin you need, and print out a free coupon that gives you 50% to 80% off at your local pharmacy.

The story cited by Elizabeth Warren claims the drug in question cost $275, but according to GoodRX, it can be gotten for $68.

This coupon is good for buying lispro at Walgreens in Georgia, maybe elsewhere

Mentioning PBMs is really weird. People haven’t heard of them, don’t understand them, so when you mention them, people don’t hear you. They continue as if you’ve said nothing at all. Yet, they are the most important part of the debate over high drug prices in America.

The point wasn’t to argue drug policy. That’s the underlying misunderstanding here, that I’m arguing either a Democrat or Republican side of the health debate. Instead, I’m arguing against both Republicans and Democrats. I have little opinion on the issue other than I’d like to emulate well-run countries like Singapore or Switzerland. I’m simply pointing out that whenever I investigate politician’s statements, I find inaccuracies, exaggerations, and deliberate deceptions.

Maybe I’m wrong and Warren’s tweet wasn’t exaggerated, but that still doesn’t justify the toxic spew.

What’s interesting about this is how those who most decry toxic behavior on Twitter were among the most toxic in their response. Toxicity isn’t a property of what you do, but of which side you are on when you do it. Threats of violence are only bad when targeting “good” people, not when targeting bad people like me.

This is finally the year of the ARM server

Post Syndicated from Robert Graham original https://blog.erratasec.com/2019/12/this-is-finally-year-of-arm-server.html

“RISC” was an important architecture from the 1980s when CPUs had fewer than 100,000 transistors. By simplifying the instruction set, they free up transistors for more registers and better pipelining. It meant executing more instructions, but more than making up for this by executing them faster.

But once CPUs exceeded a million transistors, around 1995, they moved to out-of-order (OoO), superscalar architectures. OoO replaces RISC by decoupling the front-end instruction set from the back-end execution. A "reduced instruction set" no longer matters; the back-end architecture differs little between Intel and competing RISC chips like ARM. Yet people have remained fixated on instruction set. The reason is simply politics. Intel has been the dominant instruction set for the computers we use on servers, desktops, and laptops. Many instinctively resist whoever dominates. In addition, colleges indoctrinate students on the superiority of RISC. Much college computer science instruction is decades out of date.

For 10 years, the ignorant press has been championing the cause of ARM’s RISC processors in servers. The refrain has always been that RISC has some inherent power efficiency advantage, and that ARM processors with natural power efficiency from the mobile world will be more power efficient for the data center.

None of this is true. There are plenty of RISC alternatives to Intel, like SPARC, POWER, and MIPS, and none of them ended up having a power efficiency advantage.

Mobile chips aren’t actually power efficient. Yes, they consume less power, but because they are slower. ARM’s mobile chips have roughly the same computations-per-watt as Intel chips. When you scale them up to the same amount of computations as Intel’s server chips, they end up consuming just as much power.

People are essentially innumerate. They can’t do this math. The only factor they know is that ARM chips consume less power. They can’t factor into the equation that they are also doing fewer computations.

There have been three attempts by chip makers to produce server chips to compete against Intel. The first attempt was the "flock of chickens" approach. Instead of one beefy OoO core, you make a chip with a bunch of wimpy traditional RISC cores.

That’s not a bad design for highly-parallel, large-memory workloads. Such workloads spread themselves efficiently across many CPUs, and spend a lot of time halted, waiting for data to be returned from memory.

But such chips didn’t succeed in the market. The basic reason was that interconnecting all the cores introduced so much complexity and power consumption that it wasn’t worth the effort.

The second attempt was multi-threaded chips. Intel’s chips support two threads per core, so that when one thread halts waiting for memory, the other thread can continue processing what’s already stored in cache and in registers. It’s a cheap way for processors to increase effective speed while adding few additional transistors to the chip. But it has decreasing marginal returns, which is why Intel only supports two threads. Vendors created chips with as many as 8 threads per core. Again, they were chasing the highly parallel workloads that waited on memory. Only with multithreaded chips, they could avoid all that interconnect nastiness.

This still didn’t work. The chips were quite good, but it turns out that these workloads are only a small portion of the market.

Finally, chip makers decided to compete head-to-head with Intel by creating server chips optimized for the same workloads as Intel, with fast single-threaded performance. A good example was Qualcomm, who created a server chip that CloudFlare promised to use. They announced this to much fanfare, then abandoned it a few months later as nobody adopted it.

The reason was simply that when you scale to Intel-like performance, you get Intel-like liabilities. Your only customers are the innumerate who can’t do math, who believe like emperors that their clothes are made from the finest of fabrics. Techies who do the math won’t buy the chip, because any advantage is marginal. Moreover, it’s a risk. If they invest heavily in the platform, how do they know that it’ll continue to exist and keep up with Intel a year from now, two years, ten years? Even if for their workloads they can eke out a 10% benefit today, it’s just not worth the trouble when it gets abandoned two years later.

Thus, ARM server processors can be summarized by this: the performance and power efficiencies aren’t there, and without them, there’s no way the market will accept them as competing chips to Intel.

This brings us to chips like Graviton2, and similar efforts at other companies like Apple and Microsoft. I’m pretty sure it is going to succeed.

The reason is the market, rather than the technology.

The old market was this: chip makers (Intel, AMD, etc.) sold to box makers (Dell, HP, etc.) who sold to Internet companies (Amazon, Rackspace, etc.).

However, this market has been obsolete for a while. The leading Internet companies long ago abandoned the box vendors and started making their own boxes, based on Intel chips.

Making their own chips, making the entire computer from the ground up to their specs, is the next logical evolution.

This has been going on for some time, we just didn’t notice. Almost all of the largest tech companies have their own custom CPUs. Apple has a custom ARM chip in their iPhone. Samsung makes custom ARM chips for their phones. IBM has POWER and mainframe chips. Oracle has (or had) SPARC. Qualcomm makes custom ARM chips. And so on.

In the past, having your own CPU meant having your own design, your own instruction set, your own support infrastructure (like compilers), and your own fabs for making such chips. This is no longer true. You get CPU designs from ARM, then have a fab like TSMC manufacture the chip. Since it’s ARM, you get for free all the rest of the support infrastructure.

Amazon’s Graviton1 chip was the same CPU core (ARM Cortex A72) as found in the Raspberry Pi 4. Their second generation Graviton2 chip has the same CPU core (ARM Cortex A76) as found in Microsoft’s latest Windows Surface notebook computer.

Amazon doesn’t care about instruction set, or whether a chip is RISC. It cares about the rest of the features of the chip. For example, their chips support encrypted memory, a feature that you might want in a cloud environment that hosts content from many different customers.

Recently, Sony and Microsoft announced their next-gen consoles. Like their previous generation, these are based on custom AMD designs. Gaming consoles have long been the forerunners of this new market: shipping in high enough volumes that they can get a custom design for their chip. It’s just that Amazon, through its cloud instances, is now of sufficient scale that it can sell as many instances as game consoles.

The upshot is that custom chips are becoming less and less a barrier, just like custom boxes became less of a barrier a decade ago. More and more often, the world’s top tech companies will have their own chip. Sometimes, this will be in partnership with AMD with an x86 chip. Most of the time, it’ll be the latest ARM design, manufactured on TSMC or Samsung fabs. IBM will still have POWER and mainframe chips for their legacy markets. Sometimes you’ll have small microcontroller designs, like Western Digital’s RISC-V chips. Intel’s chips are still very good, so their market isn’t disappearing. However, the market for companies like Dell and HP is clearly a legacy market, to be thought of in the same class as IBM’s still big mainframe market.

CrowdStrike-Ukraine Explained

Post Syndicated from Robert Graham original https://blog.erratasec.com/2019/09/crowdstrike-ukraine-explained.html

Trump’s conversation with the President of Ukraine mentions “CrowdStrike”. I thought I’d explain this.

What was said?

This is the relevant text from the released record of the conversation:

“I would like you to find out what happened with this whole situation with Ukraine, they say Crowdstrike… I guess you have one of your wealthy people… The server, they say Ukraine has it.”

Personally, I occasionally interrupt myself while speaking, so I’m not sure I’d criticize Trump here for his incoherence. But at the same time, we aren’t quite sure what was meant. It’s only meaningful in the greater context. Trump has talked before about CrowdStrike’s investigation being wrong, a rich Ukrainian owning CrowdStrike, and a “server”. He’s talked a lot about these topics before.


Who is CrowdStrike?

They are a cybersecurity firm that, among other things, investigates hacker attacks. If you’ve been hacked by a nation state, then CrowdStrike is the sort of firm you’d hire to come and investigate what happened, and help prevent it from happening again.

Why is CrowdStrike mentioned?

Because they were the lead investigators in the DNC hack who came to the conclusion that Russia was responsible. The pro-Trump crowd believes this conclusion is false. If the conclusion is false, then it must mean CrowdStrike is part of the anti-Trump conspiracy.

Trump always had a thing for CrowdStrike since their first investigation. It’s intensified since the Mueller report, which solidified the ties between Trump-Russia, and Russia-DNC-Hack.

Personally, I’m always suspicious of such investigations. Politics, either grand (on this scale) or small (internal company politics) seem to drive investigations, creating firm conclusions based on flimsy evidence. But CrowdStrike has made public some pretty solid information, such as BitLy accounts used both in the DNC hacks and other (known) targets of state-sponsored Russian hackers. Likewise, the Mueller report had good data on Bitcoin accounts. I’m sure if I looked at all the evidence, I’d have more doubts, but at the same time, of the politicized hacking incidents out there, this seems to have the best (public) support for the conclusion.

What’s the conspiracy?

The basis of the conspiracy is that the DNC hack was actually an inside job. Some former intelligence officials led by Bill Binney claim they looked at some data and found that the files were copied “locally” instead of across the Internet, and that, therefore, it was an insider who did it and not a remote hacker.

I debunk the claim here, but the short explanation is: of course the files were copied “locally”, the hacker was inside the network. From my long experience investigating hacker intrusions, and performing them myself, I know this is how it’s normally done. I mention my own experience because I’m technical and know these things, in contrast with Bill Binney and those other intelligence officials who have no experience with such things. He sounds impressive because he’s formerly of the NSA, but he was a mid-level manager in charge of budgets. Binney has never performed a data breach investigation and has never performed a pentest.

There are other parts to the conspiracy. In the middle of all this, a DNC staffer was murdered on the street, possibly due to a mugging. Naturally this gets included as part of the conspiracy: this guy (“Seth Rich”) must’ve been the “insider” in this attack, and must’ve been murdered to cover it up.

What about this “server”?

Conspiracy theorists have become obsessed with servers. The anti-Trump crowd believes in a conspiracy involving a server in Trump Tower secretly communicating with a bank in Russia (which I’ve debunked multiple times). There’s also Hillary’s email server.

In this case, there’s not really any particular server, but that the servers in general were mishandled. They postulate that one of them must exist that explains the “Truth” of what really happened, and that it’s being covered up.

The pro-Trump conspiracy believes that it’s illegitimate that CrowdStrike investigated the DNC hack and not the FBI — that the FBI only got involved after CrowdStrike, and relied mostly on CrowdStrike’s investigations. This is bogus. CrowdStrike has way more competency here than the FBI, and access to more data. It’s not that the FBI is useless, but if you were a victim of a nation-state hack, you’d want CrowdStrike leading the investigation and not the FBI.

The pro-Trump crowd believes the FBI should’ve physically grabbed the servers. That’s not how such investigations work. If you are a criminal, yes they take your computer. If you are the victim, then no — it just victimizes you twice, once when the criminal steals your data, and a second time when the FBI steals your computer.

Instead, servers are “imaged”: investigators take a copy of what was in memory and on the disk. There’s nothing investigators want more than an image. Indeed, when they take computers from suspected criminals, it’s a subtle form of punishment and abuse (like “civil asset forfeiture”) rather than a specific need.

What’s the Ukraine connection?

Because Ukraine is ground zero in the world’s cyberwar.

Russia officially occupies one part of Ukraine (Crimea) and unofficially occupies the eastern part of the country, which has strong Russian-speaking minorities. By “unofficially” I mean that it’s largely a private occupation, with Russian oligarchs buying weapons for separatists in those areas. There’s a big debate about how much Putin and the Russian government are involved.

Part of this armed conflict is the cyber conflict. Russian hackers are thoroughly hacking Ukraine. The NotPetya virus/worm that caused billions of dollars of damage a couple years ago is just one part of this conflict.

There is occasional reporting of this in the mainstream media, such as NotPetya or when Russian hackers successfully hacked the Ukrainian power grid, but if anything, the whole conflict is underreported. Russia’s cyberwar with Ukraine is the most important thing going on in our field at the moment.

As such, all major cybersecurity firms are involved in working with Ukraine. That includes CrowdStrike. In particular, they came out with a report about Russians hacking an Android app used to control Ukraine artillery.

Like many such reports, it appears to have had errors and to have overstated its case, and CrowdStrike got lots of criticism. This feeds into the conspiracy theories.

In any case, this means that CrowdStrike (like every big company) has ties to Ukraine that’ll get pulled into any conspiracy theory.

Who is this rich Ukrainian, and do they own CrowdStrike?

CrowdStrike is a public company with a long list of American venture capitalists behind it, including Google’s investment arm. Nobody believes there’s a single rich person who owns it.

But of course conspiracy theorists believe in conspiracies, that it’s all a front, and that there’s somebody secretly behind the scenes controlling what’s really going on. I point this out because I’ve read numerous articles trying to debunk this by proving who really does own CrowdStrike. This misses the point: it’s not about who actually does own the company, but who is secretly behind the scenes.

Both CrowdStrike’s cofounder Dmitri Alperovitch and Ukrainian oligarch Victor Pinchuk are involved with a think tank known as the Atlantic Council. As far as I can tell, that’s about as much of a tie as anybody can come up with for the conspiracy.

Who are “they” and “everyone”?

When Trump talks about such things, he frequently cites unknown persons, “they say” or “everyone here is talking about”:

Trump surrounds himself with yes-men, judged by their loyalty rather than their competence. He’s not at the forefront of spouting conspiracy theories of his own, but he certainly rewards others for their conspiracy theories — as long as they are on his side.

I mention this because, for example, Binney’s evidence of the “insider” is wholly and obviously bogus, but there’s no fighting it. It’s a rock solid part of Trump’s narrative and nothing I can say will ever convince conspiracy theorists otherwise.

If Trump gets impeached, or if he loses the 2020 election, it’ll be because illegitimate forces are out to get him. And he knows this because “everyone” around him agrees with him. Because if you disagreed, you wouldn’t be around him.

That outright conspiracy theories go all the way to the top is extremely troublesome.

Conclusion

The tl;dr is that CrowdStrike investigated the DNC hacking incident, Trump disagrees with their conclusion that Russia was responsible, and thus has a thing for CrowdStrike. Everything Trump hates is involved in the Grand Conspiracy against him. It’s really no more complicated than that.

Thread on the OSI model is a lie

Post Syndicated from Robert Graham original https://blog.erratasec.com/2019/08/thread-on-osi-model-is-lie.html

I had a Twitter thread on the OSI model. Below, it’s compiled into one blogpost.

Yea, I’ve got 3 hours to kill here in this airport lounge waiting for the next leg of my flight, so let’s discuss the “OSI Model”. There’s no such thing. What they taught you is a lie, and they knew it was a lie, and they didn’t care, because they are jerks.
You know what REALLY happened when the kid pointed out the king was wearing no clothes? The kid was punished. Nobody cared. And the king went on wearing the same thing, which everyone agreed was made from the finest of cloth.
The OSI Model was created by an international standards organization for an alternative internet that was too complicated to ever work, and which never worked, and which never came to pass.
Sure, when they created the OSI Model, the Internet layered model already existed, so they made sure to include today’s Internet as part of their model. But the focus and intent of the OSI’s efforts was on dumb networking concepts that worked differently from the Internet.
OSI wanted a “connection-oriented network layer”, one that worked like the telephone system, where every switch in between the ends knows about the connection. The Internet is based on a “connectionless network layer”.
Likewise, the big standards bodies wanted a slightly different way of how Ethernet should work, with an LLC layer on top of Ethernet. That never came to pass. Well, an LLC layer exists in WiFi packets, but as a vestigial stub like an appendix.
So layers 1 – 4 are at least a semblance of reality, incorporating Ethernet and TCP/IP, but it’s layers 5 – 6 where it goes off the rails. There’s no Session or Presentation Layer in modern networks.
Sure, the concepts exist, but not as layers, and not with the functionality those layers envisioned.
For example, the Session Layer wanted “synchronization points” to synchronize transactions. Their model never worked, and how synchronization happens on the Internet is vastly more complex, with pretty much everybody designing their own method.
For example, how Google does Paxos synchronization at scale is a big reason for their success. It’s an incredibly tough problem for which it’s impractical to create a standard. In any case, you wouldn’t want it as a “layer”.
Sure, HTTP has “session cookies” and SSL has a “session” concept, but that doesn’t make these “session layer” protocols.
The OSI Presentation Layer (layer 6) is even stupider. It was based on dumb terminals connected to mainframes. It was laughably out-of-date before it was even created. Back then, terminals needed to negotiate control codes and character sets.
It’s not simply “dumb terminals”, it’s the fact most everyone was still stuck on the concept that computer networks were for human-computer communications, rather than computer-computer communications.
The OSI Model they teach is a retconned (retroactive continuity) one that just teaches the TCP/IP model and calls it the OSI Model, and does major handwaving over the non-existent Session and Presentation layers.
Intermission: As a side note to this thread, let me answer a question that came up. A reader asked: “I’ve never understood why the ‘Secure *Socket Layer*’ was renamed to ‘*Transport Layer* Security’ in the new version published in 1999, yet most people still seem to refer to it as ‘SSL’ (including Qualsys!)”

It’s because Netscape invented SSL, and Microsoft hated Netscape, so forced the standards body to change the name to TLS.

It’s the same reason the French insist that “ISO” stands for “International Organization for Standardization”. I don’t put up with that nonsense, because I’m a troll.


So back to our story. I suppose the “OSI Model” could be justified if everyone taught the same thing, if it were all based on the same specification. But it isn’t. Everyone makes up their own version, like where to put SSL. (The correct answer is “Transport Layer”, btw.)
As for the question “in which layer does encryption belong?”, the correct answer is “all the layers”. And then some.
(A reader objected: “Oh, but there was the whole GOSIP stack that implemented it and was taken seriously – ’till the DoD mandated TCP/IP.”)
So this is a myth. The DoD mandated GOSIP, it never mandated TCP/IP. I mean, they did mandate working systems. Since GOSIP never worked, and TCP/IP was the only working alternative, that sorta mandated it.
What happened is that shipping systems came with an OSI stack that sometimes would get communication between two systems if they were the same vendor, but also TCP/IP for when things had to work.
You still see OSI nonsense in industrial control systems (port 102 = OSI Transport Layer on top of TCP). That’s because regulatory bodies are stronger in those areas, able to force bad ideas on people no matter how unworkable.
Morons call for “realpolitik”, that we could solve problems if only government had the will to overcome objections. But a government with enough political power to overcome objections is how we get bad ideas like OSI.
My first time pentesting a powerplant was sniffing traffic, finding TCP/102 …. and within an hour having an ASN.1 buffer overflow in a critical protocol that crossed firewalls.
(A reader noted: “This rant seems incomplete without some mention of The Directory and its surviving vestiges like X.509 and LDAP.”)
So let’s discuss X.509 and LDAP, which both technically descend from the OSI standards bodies. DAP was a typical bloated, unimplementable OSI protocol, so that’s why we have “Lightweight DAP”, or “LDAP”.

X.509 was a typical OSI standard written to serve the interests of big vendors instead of customers, who wanted to charge lots of money for certificates, hindering the adoption of encryption until LetsEncrypt put a stop to that nonsense TWO DECADES later.
You millennials have no concept how freakin’ long two decades is, and how that’s an unreasonable amount of time to not have free certificates for websites.
Here’s what you post-millenials/Gen-Z/whatever need to do. When you are in class and they start teaching the OSI model, stand up and shout “THIS IS BS” and walk out of the room. Better yet, organize your classmates to follow you.
“What is the OSI Model?” It’s the fact that the local network is independent from the Internet, and the Internet is independent of the applications that run on top of it. It’s the fact you can swap WiFi for Ethernet, or IPv6 for IPv4, or Signal for WhatsApp.

(A reader asked: “I did not understand everything so please correct me if I’m wrong – is the OSI model a bunch of poorly separated responsibilities?”)

When we eventually move to IPv7, we won’t need to upgrade Ethernet switches. Ethernet and WiFi have no clue what you are doing on top of them. Ancient alternatives like XNS or Novell or NetBEUI also work fine on the latest 802.11ax/WiFi6 router you just bought.
There are a few more subdivisions. Layer 1 (Physical) gets the raw bits transmitted on the wire (or into air). Layer 2 (Link) gets packets across your local network to the next router. Layer 3 (IPv4/IPv6) gets packets from one end of the Internet to the other.
Layer 4 (TCP/UDP) gets packets from one of many apps running on your machine to one of many apps running on the server. It may also retransmit lost packets. Layer 7 consists of a bunch of different protocols that service those apps.
No, the OSI Model doesn’t have its place. You can teach how layered networking works without teaching the OSI version. The OSI version messes it up rather than clarifying things.

Layers existed before the OSI Model. They didn’t invent the idea. They coopted and changed the idea. When you redefine it back again, you only confuse students. They can pass your test, but not some other test like the CISSP, because the answers won’t match. Because it’s made up.

Thread on network input parsers

Post Syndicated from Robert Graham original https://blog.erratasec.com/2019/08/thread-on-network-input-parsers.html

This blogpost contains a long Twitter thread on input parsers. I thought I’d copy the thread here as a blogpost.


I am spending far too long on this chapter on “parsers”. It’s this huge gaping hole in Computer Science where academics don’t realize it’s a thing. It’s like physics missing one of Newton’s laws, or medicine ignoring broken bones, or chemistry ignoring fluorine.
The problem is that without existing templates of how “parsing” should be taught, it’s really hard coming up with a structure for describing it from scratch.
“Langsec” has the best model, but at the same time, it’s a bit abstract (“input is a language that drives computation”), so I want to ease into it with practical examples for programmers.
Among the needed steps is to stamp out everything you were taught in C/C++ about pointer-arithmetic and overlaying internal packed structures onto external data. Big-endian vs. little-endian isn’t confusing — it’s only made confusing because you were taught it wrongly.
Hmmm. I already see a problem with these tweets. People assume I mean “parsing programming languages”, like in the Dragon book. Instead, I mean parsing all input, such as IP headers, PDF files, X.509 certificates, and so on.
This is why parsing is such a blindspot among academics. Instead of studying how to parse difficult input, they’ve instead designed input that’s easy to parse. It’s like doctors who study diseases that are easy to cure instead of studying the ones hard to cure.
Parsing DNS is a good example. In a DNS packet, a name appears many times. Therefore, a “name” appearing a second time can instead just point to the first instance. This is known as “DNS name compression”. In this packet, “google.com” exists only once.
Any abstract model for “parsing” you come up with doesn’t include this compression step. You avoid the difficult parts to focus on the easy parts. Yet, this DNS compression feature is a common source of parser errors.
For example, you can create an infinite loop in the parser as a name points to another name that points back again. Multiple CVE vulnerabilities are due to this parsing bug.
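To make the hazard concrete, below is a minimal sketch in C (not taken from any real resolver; the function name and the hop limit of 16 are my own choices for illustration) of how a name extractor defends itself: bound every read by the packet length, cap the number of compression-pointer hops so looping pointers can’t spin forever, and cap the output size.

    #include <stddef.h>

    /* Extract a DNS name starting at `offset` within a packet of `length`
     * bytes, following compression pointers (top two bits set) but refusing
     * to follow more than a fixed number of hops. Returns the name length,
     * or -1 on malformed/malicious input. */
    static int dns_extract_name(const unsigned char *pkt, size_t length,
                                size_t offset, char *name, size_t name_max)
    {
        size_t n = 0;   /* bytes written to `name` so far */
        int hops = 0;   /* compression pointers followed so far */

        for (;;) {
            unsigned len;

            if (offset >= length)
                return -1;                      /* ran off the end of the packet */
            len = pkt[offset];

            if ((len & 0xC0) == 0xC0) {         /* compression pointer */
                if (offset + 1 >= length)
                    return -1;
                if (++hops > 16)
                    return -1;                  /* too many hops: assume a loop */
                offset = ((len & 0x3F) << 8) | pkt[offset + 1];
                continue;
            }
            if (len == 0)
                break;                          /* root label: end of name */
            if (len > 63 || offset + 1 + len > length)
                return -1;                      /* bad label length or truncated */
            if (n + len + 2 > name_max)
                return -1;                      /* output buffer too small */

            if (n)
                name[n++] = '.';
            for (unsigned i = 0; i < len; i++)
                name[n++] = (char)pkt[offset + 1 + i];
            offset += 1 + len;
        }
        name[n] = '\0';
        return (int)n;
    }

The hop counter is the whole point: without it, two names pointing at each other loop the parser forever, which is exactly the class of CVE mentioned above.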
As this tweet describes, “regex” is “hope-for-the-best” parsing. This is how most programmers approach the topic. Instead of formally parsing things correctly, they parse things informally and incorrectly, “good enough” for their purposes.

Things designed to solve the parsing problem haven’t, like XML, ASN.1, or JSON. First of all, different implementations parse the raw documents differently. Secondly, it’s only the first step, with lots remaining to be parsed.

This is the enormous flaw of RPC or “Remote Procedure Call”, that promised to formally parse everything so that the programmer didn’t have to do it themselves. They could call remote functions the same as internal local ones with local data.
This led to numerous exploits and worms of SunRPC on Unix systems at the end of the 1990s and exploits/worms of MS-RPC at the start of the 2000s. Because it’s a fallacy: external data is still external data that needs to be parsed.
For example, the “Blaster” bug was because a Windows computer name can’t exceed 16 characters. Since no internal code can generate longer names, internal code doesn’t keep double-checking the length, but trusts that it’ll never be more than 16 characters.
But then two internal components are on either side of RPC — remote from each other, where a hacker can change the bytes to create a longer name. Then you have a buffer-overflow, as an internal component receives what it thinks is an internally generated name, but isn’t.
This is a general problem: programmers don’t know what’s internal and what’s external. That’s the problem with industrial control systems, or cars. They define the local network as trustworthy and “internal” to the system, when it should instead be defined as “external”.
In other words, the Jeep hackers went from the Internet into the car’s entertainment system, and once there, were able to easily hack other components across the CAN bus, because systems treated input from the CAN bus as “internal” to the car, instead of “external” to the system.
Eventually we’ll need to move to “memory safe” languages like Rust, because programmers can never trust a buffer size is what it claims to be. In the meanwhile, all C/C++ code needs to respect the same sort of boundary.
That means we need to pass around the buffer size along with the buffer in API calls. It means that we need to pretend our language is memory safe. That means no more pointer arithmetic, and no more overlaying internal packed structures on top of external data and then byte-swapping.
I know, I know. When you learned about “htons()” and “ntohl()” in your Network Programming class, you had an epiphany about byte-order. But you didn’t. It was wrong. You are wrong. They violate type safety. You need to stop using them.
Some jerk of a programmer working for BBN wrote these macros when adding TCP/IP to BSD in 1982 and we’ve suffered with them ever since. They were wrong then (RISC processors crash on unaligned integer access), and doubly wrong today (violates type safety).
You need to think of the htons()/ntohl() macros as being as obsolete as sprintf(), a function well-known for overflowing buffers because it has no ability to bounds-check, no concept of the length of the buffer it’s writing into.
Pointer arithmetic is also wrong. You were taught that this is idiomatic C, and you should always program in a programming language’s own idioms instead of using another programming language’s idioms (“A Fortran programmer can program in Fortran using any language”).
But pointer arithmetic isn’t idiomatic C, it’s idiotic C. Stop doing it. It leads to spaghetti code that’s really hard for programmers to reason about, and also hard for static analyzers to reason about.
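Here’s a minimal sketch of the alternative, assuming a big-endian 16-bit field such as the IPv4 total-length field; the helper name is mine, not from any particular codebase. Treat external data as an array of bytes, extract integers explicitly, and carry the bounds check with the read, instead of overlaying a packed struct and calling ntohs():

    #include <stddef.h>

    /* Read a big-endian 16-bit integer at `offset`, refusing to read past
     * the end of the buffer. No struct overlay, no alignment trap, no
     * byte-swapping macro: byte order is explicit in the shifts. */
    static int read_be16(const unsigned char *buf, size_t buflen,
                         size_t offset, unsigned *result)
    {
        if (buflen < 2 || offset > buflen - 2)
            return 0;                           /* truncated input */
        *result = ((unsigned)buf[offset] << 8) | buf[offset + 1];
        return 1;
    }

    /* Usage sketch: the IPv4 total-length field lives at offset 2.
     *
     *     unsigned total_length;
     *     if (!read_be16(packet, packet_length, 2, &total_length))
     *         return -1;   // truncated packet
     */

The same two lines of shifting work identically on big-endian and little-endian machines, which is why the byte-order “confusion” evaporates once you stop overlaying structs.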
As architects, we aren’t building new castles. We spend most of our time working with existing castles, trying to add electricity, elevators, and bringing them up to safety code. That’s our blindspot: architects only want to design new buildings.

Let’s talk about C for a moment. The Internet is written in C. That’s a roundabout way of stating that C predates the Internet. The C programming language was designed in an era where “external” data didn’t really exist.
Sure, you had Baudot and ASCII, standard external representations of character sets. However, since computers weren’t networked and “external” wasn’t an issue, most computers had their own internal character set, like IBM’s EBCDIC.
You see a lot of HTTP code that compares the headers against strings like “GET” or “POST” or “Content-Type”. This doesn’t compile correctly on IBM mainframes, because you assumed the language knew the strings were ASCII, when in fact C doesn’t, and they could be EBCDIC.
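A small illustration of the trap (my own example, not from any real codebase): both functions below are meant to test for an HTTP GET request, but the first one depends on the compiler’s execution character set being ASCII, while the second spells out the bytes that actually appear on the wire.

    #include <stddef.h>
    #include <string.h>

    /* Breaks on an EBCDIC machine: the bytes behind "GET " are then
     * no longer 0x47 0x45 0x54 0x20. */
    static int is_get_ascii_assumed(const unsigned char *buf, size_t len)
    {
        return len >= 4 && memcmp(buf, "GET ", 4) == 0;
    }

    /* Works everywhere: compares against the wire bytes explicitly. */
    static int is_get_explicit(const unsigned char *buf, size_t len)
    {
        return len >= 4 &&
               buf[0] == 0x47 && buf[1] == 0x45 &&   /* 'G' 'E' in ASCII */
               buf[2] == 0x54 && buf[3] == 0x20;     /* 'T' ' ' in ASCII */
    }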
Post-Internet languages (e.g. Java) get rid of this nonsense. A string like “GET” in Java means ASCII (actually, Unicode). In other words, the letter “A” in Java is always 0x41, whereas in C it’s only usually 0x41 but could be different if another charset is used.
Pre-internet, it was common to simply dump internal data structures to disk, because “internal” and “external” formats were always the same. The code that read from the disk was always the same as the code the wrote to the disk.
This meant there was no “parsing” involved. These days, files and network headers are generally read by different software than that which wrote them — if only different versions of the software. So “parsing” is now a required step that didn’t exist in pre-Internet times.
I’m reminded that in the early days, the C programming language existed in less than 12k of memory. My partial X.509 parser compiles to 14k. Computers weren’t big enough for modern concepts of parsing.

Many want “provably correct” parsers and “formal verification” of parsers. I’m not sure that’s a thing. The problem we have is that the things we are parsing have no formal definition, and thus there’s no way to prove the parsers are correct.

You think they are formally defined. But then you try to implement them, and then you get bug reports from users who send you input that isn’t being parsed correctly, then you check the formal spec and realize it’s unclear how the thing should be parsed.
And this applies even when it’s YOU WHO WROTE THE SPEC. This is a problem that plagues Microsoft and the file/protocols they defined, realizing that when their code and specs disagree, it means the spec is wrong.
This also applies to Bitcoin. It doesn’t matter what the spec says, it matters only what the Bitcoin core code executes. Changes to how the code parses input means a fork in the blockchain.
Even Satoshi (pbuh) failed at getting parsers right, which is why you can’t have a Bitcoin parser that doesn’t use Bitcoin core code directly.
That issue is “parser differentials”, when two different parsers disagree. It’s a far more important problem than people realize that my chapter on parsers will spend much time on.




This is another concept I’ll have to spend much time on. Parsing was flexible in the early days (1970s/1980s) when getting any interoperability was a big accomplishment. Now parsers need to be brittle and reject ambiguous input.

In the past, programmers didn’t know how to format stuff correctly, so your parser had to handle ambiguous input. But that meant how your popular parser handled input became part of the unwritten spec — parts everyone else struggled with to be compatible.
Modern work, from HTML to XML to DNS now involves carefully deprecating past ambiguous parsing, forcing parsers to be much stricter, defining correct behavior as rejecting any ambiguity.
In other words, in the old thinking, the parser should recognize this as a vote for Franken, because that’s clearly the intent. In the new thinking, the parser should reject this input for not following the rules (two bubbles filled in).
I’m not including this in my chapter on “Parsers” only because it gets it own chapter on “State Machine Parsers”.

Among the benefits of non-backtracking state-machine DFA parsers is that they don’t require reassembly of fragments. Reassembly is the source of enormous scalability and performance problems. You can see examples of how they work in masscan’s source code.
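Here’s a minimal sketch of the idea, assuming the task is to spot the blank line that ends HTTP headers; it’s illustrative only, not code from masscan. The only state carried between TCP segments is a small integer, so the parser never needs to reassemble fragments into one contiguous buffer:

    #include <stddef.h>

    enum { HDR_START, HDR_CR, HDR_CRLF, HDR_CRLFCR, HDR_DONE };

    /* Feed one fragment of the stream; returns the updated state.
     * When the state reaches HDR_DONE, the "\r\n\r\n" terminating the
     * headers has been seen, regardless of how the bytes were split
     * across packets. */
    static int http_end_of_headers(int state, const unsigned char *frag, size_t len)
    {
        for (size_t i = 0; i < len && state != HDR_DONE; i++) {
            unsigned char c = frag[i];
            switch (state) {
            case HDR_START:  state = (c == '\r') ? HDR_CR : HDR_START; break;
            case HDR_CR:     state = (c == '\n') ? HDR_CRLF
                                   : (c == '\r') ? HDR_CR : HDR_START; break;
            case HDR_CRLF:   state = (c == '\r') ? HDR_CRLFCR : HDR_START; break;
            case HDR_CRLFCR: state = (c == '\n') ? HDR_DONE
                                   : (c == '\r') ? HDR_CR : HDR_START; break;
            }
        }
        return state;
    }

    /* Usage sketch:
     *     int state = HDR_START;
     *     state = http_end_of_headers(state, segment1, len1);
     *     state = http_end_of_headers(state, segment2, len2);  // no reassembly
     */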

Which brings me to a discussion of “premature optimizations”, an unhelpful concept. Those who don’t want to do state-machine parsers will cite this concept as a reason why you shouldn’t do them, why you should worry about reassembly performance later.
On the other hand, prescriptions against premature optimization won’t stop you from doing pointer-arithmetic or structure/integer overlays, because you will defend such approaches as improving the performance of a program. They don’t, by the way.
Another unhelpful concept is the prescription against “magic numbers”. Actually, they are appropriate for parsers. “ipver == 4” is often much better than “ipver == IPV4”. If the source of a number is a separate spec, then defining the constant with a string has little benefit.
When I read the code, I have the specification (like the RFC) next to me. The code should match the spec, which often means magic numbers. If I have to hunt down things defined elsewhere, I get very annoyed, even in a good IDE where it’s just a few clicks away.
Don’t get me wrong. An enum {} of all possible values for a field, culled from multiple specifications and even non-standard vendor implementations, is also useful. The point is simply: don’t get uptight about magic numbers.
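A tiny sketch of what this looks like in practice (the function is my own illustration): the literals 4 and 6 below match the text of RFC 791 and RFC 8200 directly, so the code reads like the spec sitting next to it.

    #include <stddef.h>

    /* Return the IP version of a raw packet: 4, 6, or -1 for anything else. */
    static int ip_version(const unsigned char *buf, size_t len)
    {
        if (len < 1)
            return -1;
        switch (buf[0] >> 4) {
        case 4:  return 4;      /* RFC 791: "Version: 4" */
        case 6:  return 6;      /* RFC 8200: "Version: 6" */
        default: return -1;     /* anything else: reject */
        }
    }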
(Sorry for the length of this tweet storm — at this point I’m just writing down notes of everything that needs to go into this chapter, hoping people will discuss things so I get it right).

Hacker Jeopardy, Wrong Answers Only Edition

Post Syndicated from Robert Graham original https://blog.erratasec.com/2019/08/hacker-jeopardy-wrong-answers-only.html

Among the evening entertainments at DEF CON is “Hacker Jeopardy”, like the TV show Jeopardy, but with hacking tech/culture questions. In today’s blog post, we are going to play the “Wrong Answers Only” version, in which I die upon the hill defending the wrong answer.

The problem posed is:

YOU’LL LIKELY SHAKE YOUR HEAD WHEN YOU SEE TELNET AVAILABLE, NORMALLY SEEN ON THIS PORT

Apparently, people gave 21, 22, and 25 as the responses. The correct response, according to RFC assignments of well-known ports, is 23.
But the real correct response is port 21. The problem posed wasn’t about which port was assigned to Telnet (port 23), but what you normally see these days. 

Port 21 is assigned to FTP, the file transfer protocol. A little known fact about FTP is that it uses Telnet for its command channel on port 21. In other words, FTP isn’t a text-based protocol like SMTP, HTTP, POP3, and so on. Instead, it’s layered on top of Telnet. It says so right in RFC 959.
When we look at the popular FTP implementations, we see that they do indeed respond to Telnet control codes on port 21. There are a ton of FTP implementations, of course, so some don’t respond to Telnet (treating the command channel as a straight text protocol). But the vast majority of what’s out there are implementations that implement Telnet as defined.
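You can check this yourself. Below is a rough sketch in C (POSIX sockets; the hostname is a placeholder, and not every server will answer the probe, so treat it as an experiment rather than a guarantee) that connects to an FTP command channel, reads the banner, then sends the Telnet IAC AYT (“are you there”) sequence, bytes 0xFF 0xF6:

    #include <stdio.h>
    #include <string.h>
    #include <unistd.h>
    #include <netdb.h>
    #include <sys/socket.h>

    int main(int argc, char *argv[])
    {
        const char *host = (argc > 1) ? argv[1] : "ftp.example.com"; /* placeholder */
        struct addrinfo hints, *ai;
        char buf[512];
        ssize_t n;
        int fd;

        memset(&hints, 0, sizeof(hints));
        hints.ai_socktype = SOCK_STREAM;
        if (getaddrinfo(host, "21", &hints, &ai) != 0)
            return 1;
        fd = socket(ai->ai_family, ai->ai_socktype, ai->ai_protocol);
        if (fd < 0 || connect(fd, ai->ai_addr, ai->ai_addrlen) != 0)
            return 1;

        n = recv(fd, buf, sizeof(buf) - 1, 0);      /* the "220 ..." banner */
        if (n > 0) { buf[n] = '\0'; printf("banner: %s", buf); }

        send(fd, "\xFF\xF6", 2, 0);                 /* Telnet IAC AYT */

        n = recv(fd, buf, sizeof(buf) - 1, 0);      /* many servers answer */
        if (n > 0) { buf[n] = '\0'; printf("reply:  %s", buf); }

        close(fd);
        freeaddrinfo(ai);
        return 0;
    }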
Consider network intrusion detection systems. When they decode FTP, they do so with their Telnet protocol parsers. You can see this in the Snort source code, for example.
The question is “normally seen”. Well, Telnet on port 23 has largely been replaced by SSH on port 22, so you don’t normally see it on port 23. However, FTP is still popular. While I don’t have a hard study to point to, in my experience, the amount of traffic seen on port 21 is vastly higher than that seen on port 23. QED: the port where Telnet is normally seen is port 21.
But the original problem wasn’t so much “traffic” seen, but “available”. That’s a problem we can study with port scanners — especially mass port scans of the entire Internet. Rapid7 has their yearly Internet Exposure Report. According to that report, port 21 is three times as available on the public Internet as port 23.
So the correct response to the posed problem is port 21! Whoever answered that at Hacker Jeopardy needs to have their score updated to reflect that they gave the right response.
Prove me wrong. 

Securing devices for DEFCON

Post Syndicated from Robert Graham original https://blog.erratasec.com/2019/08/securing-devices-for-defcon.html

There’s been much debate whether you should get burner devices for hacking conventions like DEF CON (phones or laptops). A better discussion would be to list those things you should do to secure yourself before going, just in case.

These are the things I worry about:
  • backup before you go
  • update before you go
  • correctly locking your devices with full disk encryption
  • correctly configuring WiFi
  • Bluetooth devices
  • Mobile phone vs. Stingrays
  • USB
Backup

Traveling means a higher chance of losing your device. In my review of crime statistics, theft in Vegas seems less of a threat than in whatever city you are coming from. My guess is that while thieves may want to target tourists, the police want even more to target the gangs of thieves, to protect the cash cow that is the tourist industry. But you are still more likely to accidentally leave a phone in a taxi or have your laptop crushed in the overhead bin. If you haven’t recently backed up your device, now would be an extra useful time to do this.
Anything I want backed up on my laptop is already in Microsoft’s OneDrive, so I don’t pay attention to this. However, I have a lot of pictures on my iPhone that I don’t have in iCloud, so I copy those off before I go.
Update

Like most of you, I put off updates unless they are really important, updating every few months rather than every month. Now is a great time to make sure you have the latest updates.
Backup before you update, but then, I already mentioned that above.

Full disk encryption

This is enabled by default on phones, but not the default for laptops. It means that if you lose your device, adversaries can’t read any data from it.
You are at risk if you have a simple unlock code, like a predictable pattern or a 4-digit code. The longer and less predictable your unlock code, the more secure you are.
I use iPhone’s “face id” on my phone so that people looking over my shoulder can’t figure out my passcode when I need to unlock the phone. However, because this enables the police to easily unlock my phone, by putting it in front of my face, I also remember how to quickly disable face id (by holding the buttons on both sides for 2 seconds).
As for laptops, it’s usually easy to enable full disk encryption. However there are some gotchas. Microsoft requires a TPM for its BitLocker full disk encryption, which your laptop might not support. I don’t know why all laptops don’t just have TPMs, but they don’t. You may be able to use some tricks to get around this. There are also third party full disk encryption products that use simple passwords.
If you don’t have a TPM, then hackers can brute-force crack your password, trying billions per second. This applies to my MacBook Air, which is the 2017 model before Apple started adding their “T2” chip to all their laptops. Therefore, I need a strong login password.
I deal with this on my MacBook by having two accounts. When I power on the device, I log into an account using a long/complicated password. I then switch to an account with a simpler password for going in/out of sleep mode. This second account can’t be used to decrypt the drive.
On Linux, my password to decrypt the drive is similarly long, while the user account password is pretty short.
I ignore the “evil maid” threat, because my devices are always with me rather than in the hotel room.
Configuring WiFi

Now would be a good time to clear out your saved WiFi lists, on both your laptop and phone. You should do this regularly anyway. Anything that doesn’t include a certificate should be removed. Your device will try to connect to known access-points, and hackers will setup access points with those names trying to entrap you.
If you want to use the official DEF CON WiFi, they provide a certificate which you can grab and install on your device. It wasn’t available when I first wrote this, but it’s available now. The certificate authenticates the network, so that you won’t be tricked into connecting to fake/evil-twin access points.
You shouldn’t connect via WiFi to anything for which you don’t have a certificate while in Vegas. There will be fake access points all over the place. I monitor the WiFi spectrum every DEF CON and there’s always shenanigans going on. I’m not sure exactly what attacks they are attempting, I just know there’s a lot of nonsense going on.
I also reset the WiFi MAC address in my laptop. When you connect to WiFi, your MAC address is exposed. This can reveal your identity to anybody tracking you, so it’s good to change it. Doing so on notebooks is easy, though I don’t know how to do this on phones (so I don’t bother).
Bluetooth trackers

Like with WiFi MAC addresses, people can track you with your Bluetooth devices. The problem is chronic with devices like headphones, fitness trackers, and those “Tile” devices that are designed to be easily tracked.
Your phone itself probably randomizes its MAC address to avoid easy tracking, so that’s less of a concern. According to my measurements, though, my MacBook exposes its MAC address pretty readily via Bluetooth.
Instead of merely tracking you, hackers may hack into the devices. While phones and laptops are pretty secure against this threat (with the latest updates applied), all the other Bluetooth devices I play with seem to have gaping holes just waiting to be hacked. Your fitness tracker is likely safe walking around your neighborhood, but people at DEFCON may be playing tricks on it.
Personally, I’m bringing my fitness tracker on the hope that somebody will hack it. The biggest threat is loss of the device, or being tracked. It’s not that they’ll be able to hack into my bank account or something.
Mobile phone vs. Stingrays

In much the same way the DEF CON WiFi is protected against impersonation, the mobile network isn’t. Anybody can setup evil twin cell towers and intercept your phone traffic. The insecurity of the mobile phone network is pretty astonishing, you can’t protect yourself against it.
But at least there’s no reason to believe you are under any worse threat at DEF CON. Any attempt to setup interception devices by attendees will quickly bring down the Feds (unless, of course, they do it in the 900 MHz range).
I install apps on my phone designed to track these things. I’m not diligent at it, but I’ve never seen such devices (“Stingrays” or “IMSI catchers”) at DEF CON, operated either by attendees or the Feds.


USB

Mousejacking is still a threat, where wireless mouse/keyboard dongles can be hijacked. So don’t bring those.
Malicious USB devices that people connect to your computer are a threat. A good example is the “USB Rubber Ducky” device. Some people disable USB entirely. Others use software to “whitelist” which devices can be plugged in. I largely ignore this threat.

Note that a quick google of “disable USB” leads you down the wrong path. Those results are focused on controlling thumbdrives. That’s not really the threat. Instead, the threat is things like network adapters that will redirect network traffic to/from the device, and enable attacks that you think you are immune to because you aren’t connected to a network.

Summary

I’ve probably forgotten things on this list. Maybe I’ll update this later when people point out the things I missed.
If you pay attention to WiFi, Bluetooth, and full disk encryption, you are likely fine.
You are still in danger from other minor shenanigans, like people tracking you.
There are still some chronic problems, like mobile network or USB security, but at the same time, they aren’t big enough threats for me to worry about.

Why we fight for crypto

Post Syndicated from Robert Graham original https://blog.erratasec.com/2019/07/why-we-fight-for-crypto.html

This last week, the Attorney General William Barr called for crypto backdoors. His speech is a fair summary of law-enforcement’s side of the argument. In this post, I’m going to address many of his arguments.

The tl;dr version of this blog post is this:

  • Their claims of mounting crime are unsubstantiated, based on emotional anecdotes rather than statistics. We live in a Golden Age of Surveillance where, if any balancing is to be done in the privacy vs. security tradeoff, it should be in favor of more privacy.
  • But we aren’t talking about a tradeoff with privacy, but with other rights. In particular, it’s every bit as important to protect the rights of political dissidents to keep some communications private (encryption) as it is to allow them to make other communications public (free speech). In addition, there is no solution to their “going dark” problem that doesn’t restrict the freedom to run arbitrary software of the user’s choice on their computers/phones.
  • Thirdly, there is the problem of technical feasibility. We don’t know how to make backdoors available for law enforcement access that don’t enormously reduce security for users.


Balance

The crux of his argument is balancing civil rights vs. safety, also described as privacy vs. security. This balance is expressed in the constitution by the Fourth Amendment. The Fourth doesn’t express an absolute right to privacy, but allows police to invade your privacy if they can show an independent judge that they have “probable cause”. By making communications “warrant proof”, encryption creates a “law free zone” enabling crime to be conducted without the ability of the police to investigate.

It’s a reasonable argument. If your child gets kidnapped by sex traffickers, you’ll be demanding the police do something, anything to get your child back safe. If a phone is found at the scene, you’ll definitely want them to have the ability to decrypt the phone, as long as a judge gives them a search warrant to balance civil liberty concerns.

However, this argument is wrong, as I’ll discuss below.

Law free zones

Barr claims encryption creates a new “law free zone … giving criminals the means to operate free of lawful scrutiny”. He pretends that such zones never existed before.

Of course they’ve existed before. Attorney-client privilege is one example, which is definitely abused to further crime. Barr’s own boss has committed obstruction of justice, hiding behind the law-free zone of Article II of the constitution. We are surrounded by legal loopholes that criminals exploit in order to commit crimes, where the cost of closing the loophole is greater than the benefit.

The biggest “law free zone” that exists is just the fact that we don’t live in a universal surveillance state. I think impure thoughts without the police being able to read my mind. I can whisper quietly in your ear at a bar without the government overhearing. I can invite you over to my house to plot nefarious deeds in my living room.

Technology didn’t create these zones. However, technological advances are allowing police to defeat them.

Businesses have security cameras everywhere. Neighborhood associations are installing license plate readers. We are putting Echo/OkGoogle/Cortana/Siri devices in our homes, listening to us. Our phones and computers have microphones and cameras. Our TVs increasingly have cameras and mics, too, in case we want to use them for video conferencing, or give them voice commands.

Every argument Barr makes about crypto backdoors applies to backdoor access to microphones; every argument applies to forcing TVs to have a backdoor allowing police armed with a warrant to turn on the camera in your living room. These are all law-free zones that can be fixed with backdoors. As long as the police get a warrant issued upon probable cause, every such invasion of privacy is justified in their logic.

I mention your TV specifically because this is what George Orwell portrays in his book 1984. The book’s opening is about Winston Smith using the “law free zone” of the small alcove in his living room that’s just outside the TV camera’s pickup, allowing his seditious writing in a diary. This was supposed to be fanciful fiction of something that would never happen in the future, but it’s exactly what’s happening now.

Law free zones already exist because we don’t live in a surveillance state. Yes we want police to stop crime, but not so much that we want to wear a collar around our neck recording everything we say, tracking our every movement with GPS. Barr’s description of the problem is a pretense that technology created such zones, when the reality is that technology created a way to invade such zones. He’s not asking to restore a balance, but is instead asking for unbalanced universal surveillance. Every one of his arguments for crypto backdoors apply to these other backdoors as well.

The phone company

Barr makes the point that we regularly mandate companies to change their products in the public interest, and that’s all he’s asking for here. But that’s not what he’s asking.

Historically, telecommunications (the plain old telephone system) was managed by the government as a utility in the public interest. The government would frequently regulate the balance of competing interests. From this point of view, the above legal argument makes a lot of sense — all that law enforcement is asking for is this sort of balance.

However, the Internet is not that sort of public utility. What makes the Internet different than the older phone system is the “end-to-end principle”, first expressed in the 1970s. In the old days, the phone company was responsible for the apps you ran on your devices. With the Internet, the phone company no longer does apps, but only transmits bits. End-to-end encryption is integrated with the apps, not with the phone service.

Scene from 2001: A Space Odyssey

Consider pre-Internet sci-fi. They frequently showed people making video phone calls and being charged an absurdly (for the time) low price of only $1.70 by the phone company.

But that’s not how things have turned out. The phone company has no video phones. AT&T does not charge you for making a video phone call on their network. Moreover, $1.70 is an absurdly high price. I frequently make 1080p hi-def video calls to Japan and it costs nothing.

Barr’s speech talks about a Mexican drug cartel using WhatsApp’s end-to-end encryption to defeat wiretaps when planning the murders of politicians. That’s an app by Facebook, one of the top 5 corporations in the world, and something easy for governments to regulate. However, WhatsApp’s end-to-end technology is based on Signal, which is free software controlled by nobody. If Barr succeeds in backdooring WhatsApp then all that means is drug cartels will switch to Signal.

At this point, no amount of regulating corporations will fix the problem. Signal is what’s known as “open-source”. Anybody can download it for free, either that specific version, or their own version with any features removed.

To regulate this, government will have to instead regulate individuals not corporations or public utilities. They would have to ban unlicensed software that people create themselves. App stores, like that from Apple, would include government review of what’s legal or not. Jailbreaking or installing software outside an app store would be illegal.

In other words, we aren’t talking about a slight rebalancing by regulating Facebook, we are talking about an enormously unbalanced cyber dystopia taking away a fundamental right of the people to run software on their computers that they write themselves. Signal is no harder to use than WhatsApp. It’s absurd thinking Mexican drug cartels wouldn’t just switch to Signal if WhatsApp were backdoored.

Barr pretends the balance is expressed in the Fourth Amendment, but from this perspective, it’s the Third Amendment that’s important. That’s the one forbidding quartering troops in our homes. Barr describes CALEA requiring telephone switches to allow wiretaps. But that’s regulating a public utility, which in colonial times, would be akin to the streets, sewers, or water supply. What backdoors demand doesn’t affect the utilities, but the phones in our hands, owned by us and not the utility. Barr demands that we, the consumers, can no longer choose what software we run on the device. We must instead “quarter” government software on our personal devices.

I’m glad Barr brings up Mexican drug cartels using WhatsApp to evade wiretaps to murder and pillage. It sounds like a convincing argument for his side, because it means only small regulation of Facebook to achieve the goal. But since the cartels would obviously switch to Signal in response, we are confronted with what crypto backdoors really mean: a massive overhaul of human rights.

The world is end-to-end. That’s the design of the Internet protocol from the 1970s that makes it different from the phone company. It’s the design of crypto today. There is no way for Barr to achieve “balance” without destruction of this basic principle.

Two tier crypto

Barr claims that consumers don’t need strong crypto. After all, consumers are just protecting messages to friends, not nuclear launch codes.

This is a fallacy well known to cryptographers: the belief in two tiers of encryption, a “consumer level” and a “military grade”, where one is weaker than the other. This is a cliche people learn from watching too much TV. Such tiers don’t exist.

20 years ago our government tried to weaken crypto by limiting keys to 40-bits for export to the rest of the world, while allowing 128-bits for U.S. citizens. That was their way then for retaining their ability to spy on Mexican drug cartels while protecting citizens. It’s an excellent analogy for explaining why there’s no such thing as two tiers of crypto.

People’s intuition is to treat breaking encryption as linear, that it’s just a matter of trying a little bit harder to break it. You see this in TVs and movies where the hacker just types twice as hard on the keyboard and bypass the encryptions.

But breaking crypto is in fact exponential. Twice as much effort is insignificant.

Take those export controlled 40-bit keys mentioned above. People imagine that 80-bit keys are twice as secure. That’s not true, they are a trillion times more secure. A key that’s twice as secure is 41-bits — each additional bit on a key doubles the number of possible combinations an adversary would have to try in order to crack it. 10 extra bits is a thousand times, 20 bits a million times, 40 bits a million million (trillion) times.

Let’s do some math. A popular hobbyist computer right now is the $35 Raspberry Pi. Let’s compare that to the power of a full $1000 desktop computer, and to the NSA buying a million desktop computers with a billion dollars. What size keys can each crack? You’d think that a billion dollars somehow grants near infinite powers vs. the RPi, but it doesn’t. A factor of 10 million means adding 23 bits to the length of the key that can be cracked.
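If you want to check the arithmetic yourself, here’s a small sketch (the numbers are illustrative only, no benchmarks implied): a speedup factor buys an attacker only log2(factor) extra bits of crackable key length.

    #include <stdio.h>
    #include <math.h>

    int main(void)
    {
        /* How many extra bits of key length does a given speedup buy? */
        double factors[] = { 2, 1000, 1e6, 1e7, 1e9, 1e12 };

        for (size_t i = 0; i < sizeof(factors)/sizeof(factors[0]); i++)
            printf("a %14.0fx speedup buys about %4.1f extra bits\n",
                   factors[i], log2(factors[i]));
        return 0;
    }

A factor of 10 million comes out to about 23 bits, which is the gap between the $35 hobbyist board and a billion dollars of desktops described above; meanwhile every single extra bit of key length doubles the attacker’s work.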

This is shown in the graph below. The y-axis is the number of nanoseconds it takes to crack a key, the x-axis is key length. As you see, this isn’t a linear graph where difficulty slowly rises as keys get longer. Instead, it’s an exponential graph, where as keys get longer, the time it takes to crack them goes from nearly zero to nearly infinite. In other words, because of exponential growth, keys are largely either easily cracked or impossible to crack, with only a fine line between the two extremes.

An RPi can crack any encryption key smaller than 45-bits almost instantly. The NSA, with a billion dollars worth of computers, still can’t crack 70-bit keys. Even if you were to try to create a key somewhere in the middle, such as 64-bits, it still wouldn’t work, because a hacker could still buy a day’s worth of cloud computing, temporarily creating an NSA-level computer, to crack that one key.

The government’s “export grade” crypto is thus nonsense, as 40-bit encryption means essentially no encryption. Conversely, 128-bit encryption means perfect encryption. The TV cliche of showing a hacker working harder to bypass the encryptions is not reality. If the encryption works, it works against all adversaries. If it doesn’t work, then it doesn’t work against any adversary. Crypto is either broken by your neighbor’s teenager who bought a computer from their babysitting money, or is perfect defense against the NSA’s billions.

In fact, military grade means worse encryption. Military equipment takes years of negotiating purchasing contracts and then must last in the field for decades. It's woefully out-of-date. In contrast, your iPhone contains the latest developments in crypto. The picture of your pet you just texted your friend uses crypto vastly better than what's protecting our launch codes. The idea that the military needs better crypto is a fallacy.

This picture is from a nuclear missile silo, where they still use floppy disks.

This picture is of Fritzi, sent by my sister via Apple’s iMessage, which uses the latest advances in end-to-end encryption.

Barr repeats this fallacy in another way, talking about "customized encryption used by large business enterprises to protect their operations". Again, that's not a thing. Customized encryption is always worse. The best encryption is the standard, non-customized encryption that consumers use. When you customize it, you start making mistakes.

The government isn’t calling for 40-bit export crypto anymore, but is calling for other weaknesses. Therefore, this discussion of math is only an analogy.

But the underlying concept still applies. Cryptographers don't know how to make crypto that's only slightly weak, 99% secure instead of 100% secure, because any small weakness inevitably gets hacked into an enormous gaping hole.

Barr derides our concerns as being only "theory", but it's theory backed up by a lot of experience. It's like asking your doctor to prove that losing weight and exercising will improve your health. Our experience from cryptography is that there is no such thing as a little bit weak. We know of no way to implement the government's backdoor in such a way that won't have grave impacts. I might not be able to immediately point out the holes in whatever scheme you have concocted, but that doesn't mean I believe your backdoor scheme doesn't have weaknesses. My decades of experience tell me it's only a matter of time before those weaknesses explode into gaping holes that hackers exploit.

Barr doesn’t care about whether backdoors are technically feasible. His argument is ultimately premised on the idea that citizen’s don’t have a fundamental right to protect themselves, but that instead they should rely upon the government to protect them. They should not take the law into their own hands. That backdoors weaken their ability to secure themselves is therefore not a problem. This is bad: we should have the right to protect ourselves, and crypto backdoors hugely impacts that right.

Mounting crime

Barr claims the costs aren’t abstract, but measured in real mounting victims. This is an excellent argument to pay attention to. Because the real numbers don’t support him.

The fact is that the number of crimes perpetrated is not going up. The rate of solving crimes and prosecuting perpetrators is not going down. If there were a wave of unsolved crime, then even I admit we'd have to seriously start talking about this issue. I might not support backdoors, because (as described above) they aren't technically feasible without violating human rights. But I'd be much more motivated to look for alternatives.

But there is no wave of unsolved crime. All that's mounting is the number of locked phones in police evidence rooms. Police are still solving crimes because they have all the same old crime-fighting abilities available to them that don't require unlocking phones.

Crime rates have been falling as strong crypto has increased.

The “clearance rate” rate is not changing, due to strong crypto or any other reason:

By Barr’s own arguments, then, crypto backdoors aren’t justified.

So if the issue isn’t “crime”, what it is? My guess is that the answer is “power”. They have evidence rooms full of phones without the power to decrypt them. That makes them unhappy.

Before we accept the government’s call for more dystopic police power, we should demand that they prove that encryption is actually leading to more crime. This should be based on statistics, not anecdotes like Mexican drug cartels or kidnapped girls, arguments designed to appeal to emotion not logic.

China, Russia, and Jefferson

The argument of balance is often described by “right to privacy” balanced with “right to safety/security”. I don’t think this is correct. I don’t think people care that much about privacy. After all, they readily give up privacy to the Facebook, the Google, and the other companies. It seems reasonable that they should be just as ready to give up privacy in exchange for additional security provided by law enforcement to protect us against criminals.

Instead, the balance people care about is the abuse of power by the government. The balance is between security provided by the government vs. security threats coming from the government. It’s a balance of security vs. security.

Another way of looking at it is that privacy isn't monolithic; there are different kinds of privacy. I don't care (much) if government employees spy on me while I'm naked in the shower, as there's not much they can do to abuse this information. I care a lot about them being able to secretly turn on the microphone in my living room and eavesdrop on my private conversations, because I know that's exactly the sort of power governments are known to abuse.

A quote often attributed to Edward Snowden is:

“Saying you don’t care about privacy because you have nothing to hide is like saying you don’t care about free speech because you have nothing to say.”

Is this comparison really valid? Or is it a false equivalence?

China and Russia show us the answer to this question. Both have cracked down on encrypted communications. China mandates devices have a backdoor whereby the government can access anything on a phone, encrypted or not. Russia has cracked down on Telegram, an encrypted messaging app popular in Russia. Both cases have been motivated by their desire to crack down on dissidents.

We therefore see that these are equivalent in Snowden's quote. For dissidents in China and Russia, it's every bit as important to keep some communications private (i.e. encrypted) as it is to allow other communications to be public (i.e. free speech).

Thus, the debate isn’t whether the U.S. government should have this power, but whether governments in general should have this power. If it were only the U.S., we might trust them with backdoors, because the U.S. is a free country and not a totalitarian state. But that’s the same as saying that we trust our current government to regulate speech because they’d never restrict political speech the way they do in China and Russia.

We have a free society because government is subject to these restrictions. If you remove the restrictions because you trust the government, that'll only lead to a government that abuses its power.

It’s obvious the U.S. government abuses any power we give it. Take “civil asset forfeiture” as an example. Sure, it seems reasonable that if convicted of a crime, you should have to forfeit the proceeds of the crime. But it’s gotten out of control. If the police pull you over, and see that you have $5000, they can just seize it, without convicting you of a crime, without even charging you with a crime. The Supreme Court allows it under weird legal fictions, pretending they aren’t depriving you of property without due process of law. It’s not you who is charged with a crime, but the object they are confiscating, which is why you see odd court cases with names like “United States v. $124,700 in U.S. Currency“.

Or take the “border search exception”. Obviously, if you are going to control the borders, it’s reasonable to search people’s belongings for smuggled goods, searches that would be unreasonable when not done on the border. However, this power is now abused for things like searching your phone. This is unreasonable, because there’s nothing you’d smuggle on the phone when crossing the border that you couldn’t more easily transfer over the Internet. Yet, the Supreme Court allows it under various legal fictions.

Or take the Snowden revelations about the government grabbing all phone call records of the past 7 years without a warrant. Or the fact they grab all your financial records the same way, including every credit card purchase. Or how they used to grab all your cell phone GPS records until finally the courts added a warrant requirement in the recent Carpenter decision (though only in limited cases).

Or take the drug war in general. Barr mentions drug traffickers numerous times to justify himself. But the war on drugs has been an enormous abuse by our government. Because of the drug war, our incarceration rate has exploded. At 655 per 100,000, we are the highest in the world. In European countries, that number is around 100. In Japan, it’s 41. I’m glad Barr focuses on drug traffickers — the war on drugs has resulted in obvious government abuse of power. Drug crimes aren’t a reason to give government more power, but a reason to give them less. That’s why our country is legalizing marijuana right now, because it’s less harmful than alcohol or tobacco and therefore stupid to keep illegal, while at the same time fostering abusive government power.

Barr cites several Supreme Court cases to justify his legal position. But "legal" doesn't mean "right". Our entire country is based upon legal actions by the English government that the colonists decided were nonetheless illegitimate. Just because the Supreme Court allows something as "constitutional" doesn't mean it's not one of those abuses and usurpations designed to reduce citizens under absolute despotism. The fact that the Supreme Court seems unable or unwilling to curb current abuses of our government, especially with regard to technological change, isn't an argument that crypto backdoors are justified (as Barr argues), but an argument why we need to vigorously oppose them.

The Ninth Amendment says that "the enumeration of certain rights shall not be construed to deny or disparage others retained by the people". But that's exactly what Barr did in his speech, talking about the "Supreme Court taking steps to ensure that advances in technology do not unduly tip the scales against public safety". The Supreme Court can do no such thing. The right to encrypted communications, or the right to run whatever software you want on your computer, is not enumerated in the Constitution, and thus the Supreme Court can never consider these rights. It can never balance these rights against public safety. But they are important rights nonetheless.

Yes, it’s reasonable to balance privacy and security — but we aren’t talking about privacy in general. The changes in technology has demonstrated that encrypted communications is it’s own thing, separate from our other privacy concerns.

Balance revisited

So now let’s go back and revisit what sounds like a reasonable argument that the Fourth Amendment balances privacy and security.

There is no evidence of an imbalance. Crime rates aren't increasing, clearance rates (of solving crimes) aren't decreasing. Far from "going dark", we live in a Golden Age of Surveillance, where police are able to grab our GPS records, credit card receipts, phone metadata, and other records, often without a warrant. It's impractical to travel anonymously in the United States, as the government gets a copy of plane and train records, and is increasingly blanketing the country with license plate readers to track our cars. If a rebalancing of the "privacy vs. security" equation is needed, it's in favor of privacy.

But we aren’t talking about that balance. We are instead balancing “security vs. security”. It has become obvious that privacy of security communications is a wholly separate concern from other privacy issues. Even though we rely upon government to provide for public safety, we are in danger from governments that abuse their power to repress citizens. It is every much as important for political dissidents that we protect private communications (with encryption) as we protect their right to public communications (free speech).

Thirdly, we have purely technical problems. Cryptographers tell us, convincingly, that there’s no such thing in cryptography as a difference between consumer-grade security and military-grade security. Any backdoor in security for law enforcement compromises the ability of citizens to protect themselves. Similarly, it’s not about regulating the products/services big corporations like AT&T or Facebook put in our hands. Instead, it’s about regulating the software we ourselves choose to install on our devices. There is no solution to Barr’s scenarios that doesn’t involve outlawing such software, removing the right of citizens to install their own software.

Censorship vs. the memes

Post Syndicated from Robert Graham original https://blog.erratasec.com/2019/06/censorship-vs-memes.html

The most annoying thing in any conversation is when people drop a meme bomb, some simple concept they’ve heard elsewhere in a nice package that they really haven’t thought through, which takes time and nuance to rebut. These memes are often bankrupt of any meaning.

When discussing censorship, which is wildly popular these days, people keep repeating these same memes to justify it:
  • you can’t yell fire in a crowded movie theater
  • but this speech is harmful
  • Karl Popper’s Paradox of Tolerance
  • censorship/free-speech don’t apply to private organizations
  • Twitter blocks and free speech
This post takes some time to discuss these memes, so I can refer back to it later, instead of repeating the argument every time some new person repeats the same old meme.


You can’t yell fire in a crowded movie theater


This phrase was first used in the Supreme Court decision Schenck v. United States to justify outlawing protests against the draft. Unless you also believe the government can jail you for protesting the draft, then the phrase is bankrupt of all meaning.

In other words, how can it be used to justify the thing you are trying to censor and yet be an invalid justification for censoring those things (like draft protests) you don’t want censored?

The way this phrase actually gets used is: because it's okay to suppress one type of speech, it's okay to censor any speech you want. Which means all censorship is valid. If that's what you believe, just come out and say "all censorship is valid".

But this speech is harmful or invalid

That’s what everyone says. In the history of censorship, nobody has ever wanted to censor good speech, only speech they claimed was objectively bad, invalid, unreasonable, malicious, or otherwise harmful

It’s just that everybody has different definitions of what, actually is bad, harmful, or invalid. It’s like the movie theater quote. For example, China’s constitution proclaims freedom of speech, yet the government blocks all mention of the Tienanmen Square massacre because it’s harmful. It’s “Great Firewall of China” is famous for blocking most of the content of the Internet that the government claims harms its citizens.

At least in case of movie theaters, the harm of shouting “fire” is immediate and direct. In all these other cases, the harm is many steps removed. Many want to censor anti-vaxxers, because their speech kills children. But the speech doesn’t, the virus does. By extension, those not getting vaccinations may harm people by getting infected and passing the disease on. But the speech itself is many steps removed from this, and there’s plenty of opportunity to counter this bad speech with good speech.

Thus, this argument becomes that all speech can be censored, because some harm can always be argued to come from it.

Karl Popper’s Paradox of Tolerance

This is just a logical fallacy, using different definitions of “tolerance”. The word means “putting up with those who disagree with you”. The “paradox” comes from allowing people free-speech who want to restrict free-speech.

But people are shifting the definition of “tolerance” to refer to white-supremacists, homophobes, and misogynists. That’s also intolerance, of people different than you, but it’s not the same intolerance Popper is talking about. It’s not a paradox allowing the free-speech of homophobes, because they aren’t trying to restrict anybody else’s free-speech.

Today’s white-supremacists in the United States don’t oppose free-speech, quite the opposite. They champion free-speech, and complain the most about restrictions on their speech. Popper’s Paradox doesn’t apply to them. Sure, the old Nazi’s in Germany also restricted free-speech, but that’s distinct from their racism, and not what modern neo-Nazi’s are championing.

Ironically, the intolerant people Popper refers to in his Paradox are precisely the ones quoting it with the goal of restricting speech. Sure, you may be tolerant in every other respect (foreigners, other races, other religions, gays, etc.), but if you want to censor free-speech, you are intolerant of people who disagree with you. Popper wasn’t an advocate of censorship, his paradox wasn’t an excuse to censor people. He believed that “diversity of opinions must never be interfered with”.

Censorship doesn’t apply to private organizations

Free speech rights, as enumerated by the First Amendment, only apply to government. Therefore, it’s wrong to claim the First Amendment protects your Twitter or Facebook post, because those are private organizations. The First Amendment doesn’t apply to private organizations. Indeed, the First Amendment means that government can’t force Twitter or Facebook to stop censoring you.

But “free speech” doesn’t always mean “First Amendment rights”. Censorship by private organizations is still objectionable on “free speech” grounds. Private censorship by social media isn’t suddenly acceptable simply because government isn’t involved.

Our rights derive from underlying values of tolerance and pluralism. We value the fact that even those who disagree with us can speak freely. The word “censorship” applies both to government and private organizations, because both can impact those values, both can restrict our ability to speak.

Private organizations can moderate content without it being “censorship”. On the same page where Wikipedia states that it won’t censor even “exceedingly objectionable/offensive” content, it also says:

Wikipedia is free and open, but restricts both freedom and openness where they interfere with creating an encyclopedia. 

In other words, it will delete content that doesn’t fit its goals of creating an encyclopedia, but won’t delete good encyclopedic content just because it’s objectionable. The first isn’t censorship, the second is. It’s not “censorship” when the private organization is trying to meet its goals, whatever they are. It’s “censorship” when outsiders pressure/coerce the organization into removing content they object to that otherwise meets the organization’s goals.

Another way of describing the difference is the recent demonetization of Steven Crowder’s channel by YouTube. People claim YouTube should’ve acted earlier, but didn’t because they are greedy. This argument demonstrates their intolerance. They aren’t arguing that YouTube should remove content in order to achieve its goals of making money. They are arguing that YouTube should remove content they object to, despite hurting the goal of making money. The first wouldn’t be censorship, the second most definitely is.

So let’s say you are a podcaster. Certainly, don’t invite somebody like Crowder on your show, for whatever reason you want. That’s not censorship. Let’s say you do invite him on your show, and then people complain. That’s also not censorship, because people should speak out against things they don’t like. But now let’s say that people pressure/coerce you into removing Crowder, who aren’t listeners to your show anyway, just because they don’t want anybody to hear what Crowder has to say. That’s censorship.

That’s what happened recently with Mark Hurd, a congressman from Texas who has sponsored cybersecurity legislation, who was invited to speak at Black Hat, a cybersecurity conference. Many people who disliked his non-cybersecurity politics objected and pressured Black Hat into dis-inviting him. That’s censorship. It’s one side who refuse to tolerate a politician of the opposing side.

All these arguments about public vs. private censorship are repeats of those made for decades. You can see them here in this TV show (WKRP in Cincinnati) about Christian groups trying to censor obscene song lyrics, which was a big thing in the 1980s.

This section has so far been about social media, but the same applies to private individuals. When terrorists (private individuals) killed half the staff at Charlie Hebdo for making cartoons featuring Muhammad, everyone agreed this was a freedom of speech issue. When South Park was censored due to threats from Islamic terrorists, people likewise claimed it was a free speech issue.

In Russia, the police rarely arrest journalists. Instead, youth groups and thugs beat them up. Russia has one of the worst track records on freedom of speech, but it's mostly private individuals who are responsible, not their government.

These days in America, people justify Antifa's militancy, which tries to restrict the free speech of those they label as "fascists", because the restriction isn't coming from government. It's just private individuals attacking other private individuals. It's no more justified than any of these other violent attacks on speech.

Twitter blocks and free speech

The previous parts are old memes. There’s a new meme, that somehow Twitter “blocks” are related to free-speech.

That’s nonsense. If I block you on Twitter, then the only speech I’m preventing you from seeing is my own. It also prevents me from seeing some (but not all) stuff you post, but again, the only one affected by this block is me. It doesn’t stop others from seeing your content. Censorship is about stopping others from hearing speech that I object to. If there’s no others involved, it’s not censorship. In particular, while you are free to speak anything you want, I’m likewise free to ignore you.

Sure, there are separate concerns when the President simultaneously uses his Twitter account for official business and also blocks people. That’s a can of worms that I don’t want to get into. But it doesn’t apply to us individuals.

Conclusion

The pro-censorship arguments people are making today are the same arguments people have been making for thousands of years, such as when ancient Rome had the office of “censor” who (among other duties) was tasked with restricting harmful speech. Those arguing for censorship of speech they don’t like believe that somehow their arguments are different. They aren’t. It’s the same bankrupt memes made over and over.

Some Raspberry Pi compatible computers

Post Syndicated from Robert Graham original https://blog.erratasec.com/2019/05/some-raspberry-pi-compatible-computers.html

I noticed this spreadsheet over at r/raspberry_pi reddit. I thought I’d write up some additional notes.

https://docs.google.com/spreadsheets/d/1jWMaK-26EEAKMhmp6SLhjScWW2WKH4eKD-93hjpmm_s/edit#gid=0


Consider the Upboard, an x86 computer in the Raspberry Pi form factor for $99. When you include storage, power supplies, heatsinks, cases, and so on, it’s actually pretty competitive. It’s not ARM, so many things built for the Raspberry Pi won’t necessarily work. But on the other hand, most of the software built for the Raspberry Pi was originally developed for x86 anyway, so sometimes it’ll work better.

Consider the quasi-RPi boards that support the same GPIO headers, but in a form factor that's not the same as an RPi. A good example would be the ODroid-N2. These aren't listed in the above spreadsheet, but there's a ton of them. There are only two Nano Pis listed in the spreadsheet with the same form factor as the RPi, but there are around 20 different actual boards with all sorts of different form factors and capabilities.

Consider the heatsink, which can make a big difference in the performance and stability of the board. You can put a small heatsink on any board, but you really need larger heatsinks and possibly fans. Some boards, like the ODroid-C2, come with a nice large heatsink. Other boards have a custom designed large heatsink you can purchase along with the board for around $10. The Raspberry Pi, of course, has numerous third party heatsinks available. Whether or not there’s a nice large heatsink available is an important buying criteria. That spreadsheet should have a column for “Large Heatsink”, whether one is “Integrated” or “Available”.

Consider power consumption and heat dissipation as a buying criterion. Uniquely among the competing devices, the Raspberry Pi itself uses a CPU fabbed on a 40nm process, whereas most of the competitors use 28nm or even 14nm. That means it consumes more power and produces more heat than any of its competitors, by a large margin. The Intel Atom CPU mentioned above is actually one of the most power-efficient, being fabbed on a 14nm process. Ideally, that spreadsheet would have two additional columns for power consumption (and hence heat production) at "Idle" and "Load".

You shouldn’t really care about CPU speed. But if you are, there basically two classes of speed: in-order and out-of-order. For the same GHz, out-of-order CPUs are roughly twice as fast as in-order. The Cortex A5, A7, and A53 are in-order. The Cortex A17, A72, and A73 (and Intel Atom) are out-of-order. The spreadsheet also lists some NXP i.MX series processors, but those are actually ARM Cortex designs. I don’t know which, though.

The spreadsheet lists memory, like LPDDR3 or DDR4, but it's unclear as to speed. There are two things that determine speed: the clock rate (MHz/GHz) and the width, typically either 32 bits or 64 bits. By "64 bits" we can mean a single channel that's 64 bits wide, as in the case of the Intel Atom processors, or two channels that are each 32 bits wide, as in the case of some ARM processors. The Raspberry Pi has an incredibly anemic 32-bit 400-MHz memory, whereas some competitors have 64-bit 1600-MHz memory, or roughly 8 times the speed. For CPU-bound tasks, this isn't so important, but a lot of tasks are in fact bound by memory speed.
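
As a back-of-the-envelope check on that "roughly 8 times" figure, the sketch below just multiplies bus width by clock rate and takes the ratio; it ignores DDR transfer-rate details and real-world efficiency, so treat it as a rough comparison only.

    # Rough relative memory bandwidth: bus width (bits) times clock (MHz).
    # Ignores DDR details and efficiency; only the ratio matters here.
    rpi        = 32 * 400    # 32-bit bus at 400 MHz
    competitor = 64 * 1600   # 64-bit bus at 1600 MHz
    print("relative speed:", competitor / rpi)   # prints 8.0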

As for GPUs, most are not OpenCL programmable, but some are. The VideoCore and Mali 4xx (Utgard) GPUs are not programmable. The Mali Txxx (Midgard) are programmable. The “MP2” suffix means two GPU processors, whereas “MP4” means four GPU processors. For a lot of tasks, such as “SDR” (software defined radio), offloading onto GPU simultaneously reduces power consumption (by a lot) while increasing speed (usually 2 to 4 times).

Micro-USB is horrible for a power supply, which is why most of the competing devices either offer a "barrel" connector as an option or require one. In other words, don't think micro-USB is adequate just because it's the only option on the Raspberry Pi; that barrel connectors are more common among the competitors should convince you it isn't. You can actually buy USB cables with barrel connectors cheaply from Amazon.com, so it doesn't make much of a difference. I mention this because I hook mine up to multiport chargers, so I don't want the separate wall-wart power supply you'd normally need with a barrel connector; I want just the USB cable instead.

Likewise, most of the competing devices offer eMMC built-in or as an option. This should convince you that booting from micro-SD cards is not adequate. There is no way to turn off the Raspberry Pi without risking corrupting the SD card. eMMC is also a lot faster, sometimes 10x faster. However, I use a $10 USB-to-SATA connector and a $20 SATA drive on my RPi to boot the operating system through the USB port, so it's not as if this deficiency can't be worked around.

The spreadsheet lists which operating systems the device is compatible with. In this, the Raspberry Pi shines, compatible with almost any of them. However, lurking underneath this list is which kernel version the operating systems might use. A good example is the ODroid-C2, which has a newer distribution of Ubuntu 18 userland utilities, but is stuck on the ancient and crusty 3.19 version of the kernel from February 2015 — over four years old.

The spreadsheet lists which devices include support for infrared. This is presumably to indicate how well it can be integrated into a home entertainment setup. However, it should also list which ones support the CEC channel on HDMI. This allows the various devices to control and be controlled by each other. If you want to change the channel with your RPi processing voice commands via microphone, then you want CEC support rather than infrared support. The RPi has good CEC support, but I don't know about the other devices.

Conclusion

Because there is so much support for the Raspberry Pi, it’s hard not to choose that platform. Things just tend to work. If you are doing maker projects, then get an RPi Model B+.

But it’s deficient in almost every way to its competitors, especially with the amount of power/heat it consumes.

For home server needs, I'm using a ROCK64 at the moment. It consumes much less power, has real gigabit Ethernet, costs only $25, and has USB 3.0 for much faster access to SSDs. It doesn't have WiFi, but if I wanted that in a server, I'd probably go with the Pine H64-B.

Your threat model is wrong

Post Syndicated from Robert Graham original https://blog.erratasec.com/2019/05/your-threat-model-is-wrong.html

Several subjects have come up in the past week that all come down to the same thing: your threat model is wrong. Instead of addressing the threat that exists, you've morphed the threat into something else that you'd rather deal with, or which is easier to understand.

Phishing

An example is this question that misunderstands the threat of “phishing”:

The (wrong) threat model here is that phishing is an email that smart users with training can identify and avoid. This isn't true.
Good phishing messages are indistinguishable from legitimate messages. Said another way, a lot of legitimate messages are in fact phishing messages, such as when HR sends out a message saying “log into this website with your organization username/password”.
Yes, it’s amazing how easily stupid employees are tricked by the most obvious of phishing messages, and you want to point and laugh at them. But frankly, you want the idiot employees doing this. The more obvious phishing attempts are the least harmful and a good test of the rest of your security — which should be based on the assumption that users will frequently fall for phishing.
In other words, if you paid attention to the threat model, you’d be mitigating the threat in other ways and not even bother training employees. You’d be firing HR idiots for phishing employees, not punishing employees for getting tricked. Your systems would be resilient against successful phishes, such as using two-factor authentication.
IoT security

After the Mirai worm, government types pushed for laws to secure IoT devices, as billions of insecure devices like TVs, cars, security cameras, and toasters are added to the Internet. Everyone is afraid of the next Mirai-type worm. For example, they are pushing for devices to be auto-updated.
But auto-updates are a bigger threat than worms.
Since Mirai, roughly 10-billion new IoT devices have been added to the Internet, yet there hasn’t been a Mirai-sized worm. Why is that? After 10-billion new IoT devices, it’s still Windows and not IoT that is the main problem.
The answer is that number, 10-billion. Internet worms work by guessing IPv4 addresses, of which there are only 4-billion. You can’t have 10-billion new devices on the public IPv4 addresses because there simply aren’t enough addresses. Instead, those 10-billion devices are almost entirely being put on private networks behind “NATs” which act as a firewall. When you look at the exposure of IoT to the public Internet, such as port 23 used by Mirai, it’s going down, not up.
NATs suck as a security device, but they are still proof against worms. With everything behind NAT, worms are no longer a thing. Sure, a hacker may phish a desktop behind a firewall, and thus be able to mass infect an entire corporation, but that’s not an Internet-ending worm event, just very annoying for the corporation. Yes, notPetya spread to partner organizations, but that was through multihomed Windows hosts, often connected via VPN, and not a path IoT can take.
In contrast, when a vendor gets hacked and pushes out an auto-update to millions of devices, that is an Internet-ending mass infection event. We saw that with notPetya, which was launched as an auto-update. We've seen that recently with Asus, which pushed out mass malware, though the malicious actor was apparently focused on specific targets rather than exploiting that infection for mass destruction.
Nassim Nicholas Taleb has books on "Black Swan" events and "Antifragile" systems. This example is exactly that sort of thing. Yes, non-updated IoT devices will cause a continuous stream of low-grade problems. However, centralized auto-updates risk rare but massive problems. Non-updated IoT systems lead to resilient networks; auto-update patches lead to fragile networks.
Anyway, this is just the start of your “wrong threat model”. The main security weaknesses that cause 99% of the problems are services exposed to the public Internet and users exposed to the public Internet. IoT has neither of these, and thus, billions added to the Internet are not the problem you imagine.
My threat model for Internet-ending events are three:
  • Windows vulns
  • something else exposed to the public Internet
  • automatic updates of a popular product
IoT isn’t in this list.
Catastrophic ransomware infections
There are two types of ransomware infections:
  • Low grade infection of individual desktops, probably from phishing, which the IT department regularly cleans up without too much problem.
  • Crippling infections of the entire network that spreads via Windows networking credentials (often using ‘psexec’).
I mention this because of a NYTimes reporter who has created a third type that's blamed on the leaked tool "EternalBlue" from the NSA. While I can't confirm that wasn't the case in Baltimore, it hasn't been the case in any other major ransomware attack. In particular, it wasn't the case at Merck and FedEx.
Yes, EternalBlue was used in those two attacks, but had it been EternalBlue alone, it would’ve been the first example of a few unpatched systems that needed to be fixed. What caused the $billion in damage was spreading via Windows credentials.
A couple weeks ago, Microsoft patched a vulnerability in their Remote Desktop feature that they say is wormable. There are right now more than 900,000 machines exposed on the Internet that can be exploited. A new worm like notPetya is likely on the way. The correct response to this threat model isn't "patch your systems", it's "fix your Windows credentials". Segment your Active Directory domains and trust permissions so that when the worm gets admin rights in one domain it can't spread to the others. Yes, also patch your systems, but a few will remain unpatched, and when infected, they shouldn't be able to spread to patched systems with psexec.
I’ve looked at lots of crippling ransomware attacks, including notPetya. What makes them crippling is never anything but this problem of Windows credentials and ‘psexec’ style lateral movement. This is your threat model.
Conclusion
The problem with cybersecurity is that you aren’t paying attention to your threat model. An important step in addressing both phishing and ransomware worms is taking local admin rights away from users, yet many (most?) organizations are unwilling to do that. So they pretend the threat is elsewhere, such as blaming users for falling victim to phishing rather than blaming themselves for not making systems resilient to successful phishing.

Almost One Million Vulnerable to BlueKeep Vuln (CVE-2019-0708)

Post Syndicated from Robert Graham original https://blog.erratasec.com/2019/05/almost-one-million-vulnerable-to.html

Microsoft announced a vulnerability in its "Remote Desktop" product that can lead to robust, wormable exploits. I scanned the Internet to assess the danger. I find nearly 1-million devices on the public Internet that are vulnerable to the bug. That means when the worm hits, it'll likely compromise those million devices. This will likely lead to an event as damaging as WannaCry and notPetya from 2017 — potentially worse, as hackers have since honed their skills exploiting these things for ransomware and other nastiness.

To scan the Internet, I started with masscan, my Internet-scale port scanner, looking for port 3389, the one used by Remote Desktop. This takes a couple hours, and lists all the devices running Remote Desktop — in theory.
This returned 7,629,102 results (over 7-million). However, there is a lot of junk out there that’ll respond on this port. Only about half are actually Remote Desktop.
Masscan only finds the open ports, but is not complex enough to check for the vulnerability. Remote Desktop is a complicated protocol. A project was posted that could connect to an address and test it, to see if it was patched or vulnerable. I took that project and optimized it a bit, rdpscan, then used it to scan the results from masscan. It’s a thousand times slower, but it’s only scanning the results from masscan instead of the entire Internet.
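As a rough illustration of that two-stage approach, here's a minimal Python sketch. The masscan flags (-p, --rate, -oL) are real, but the target range, the scan rate, the output filename, and the assumption that rdpscan takes one address per command-line argument are all illustrative; check each tool's documentation, and only scan networks you're authorized to scan.

    import subprocess

    # Stage 1: masscan quickly finds hosts with port 3389 open.
    # Replace the target range with networks you're authorized to scan.
    subprocess.run(["masscan", "-p3389", "10.0.0.0/8",
                    "--rate", "10000", "-oL", "open-3389.txt"], check=True)

    # Stage 2: feed each open host to rdpscan, which is slower but
    # actually checks whether the target is patched or vulnerable.
    targets = []
    with open("open-3389.txt") as f:
        for line in f:
            parts = line.split()
            # masscan's list output is roughly: "open tcp 3389 <ip> <timestamp>"
            if len(parts) >= 4 and parts[0] == "open":
                targets.append(parts[3])

    for ip in targets:
        # rdpscan reports a status per address: SAFE, VULNERABLE, or UNKNOWN
        subprocess.run(["rdpscan", ip])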
The table of results is as follows:
1447579  UNKNOWN – receive timeout
1414793  SAFE – Target appears patched
1294719  UNKNOWN – connection reset by peer
1235448  SAFE – CredSSP/NLA required
 923671  VULNERABLE — got appid
 651545  UNKNOWN – FIN received
 438480  UNKNOWN – connect timeout
 105721  UNKNOWN – connect failed 9
  82836  SAFE – not RDP but HTTP
  24833  UNKNOWN – connection reset on connect
   3098  UNKNOWN – network error
   2576  UNKNOWN – connection terminated
The various UNKNOWN things fail for various reasons. A lot of them are because the protocol isn’t actually Remote Desktop and respond weirdly when we try to talk Remote Desktop. A lot of others are Windows machines, sometimes vulnerable and sometimes not, but for some reason return errors sometimes.
The important results are those marked VULNERABLE. There are 923,671 vulnerable machines in this result. That means we’ve confirmed the vulnerability really does exist, though it’s possible a small number of these are “honeypots” deliberately pretending to be vulnerable in order to monitor hacker activity on the Internet.
The next results are those marked SAFE due to probably being "patched". Actually, it doesn't necessarily mean they are patched Windows boxes. They could instead be non-Windows systems that appear the same as patched Windows boxes. But either way, they are safe from this vulnerability. There are 1,414,793 of them.
The next results to look at are those marked SAFE due to CredSSP/NLA failures, of which there are 1,235,448. This doesn't mean they are patched, but only that we can't exploit them. They require "network level authentication" first before we can talk Remote Desktop to them. That means we can't test whether they are patched or vulnerable — but neither can the hackers. They may still be exploitable via an insider threat who knows a valid username/password, but they aren't exploitable by anonymous hackers or worms.
The next category is marked as SAFE because they aren’t Remote Desktop at all, but HTTP servers. In other words, in response to our Remote Desktop request they send an HTTP response. There are 82,836 of these.
Thus, out of 7.6-million devices that respond to port 3389, we find 3.5-million that reliably talk the Remote Desktop protocol, of which 0.9-million are vulnerable, and the rest are not.
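For readers who want to reproduce that roll-up, here's a small Python sketch using the raw counts copied from the table above; the bucketing, counting only the patched, CredSSP/NLA, and vulnerable rows as "reliably talking RDP" and excluding the HTTP responders, is my reading of how the totals were derived.

    # Counts copied from the table above.
    safe_patched = 1_414_793
    safe_nla     = 1_235_448
    vulnerable   =   923_671
    not_rdp_http =    82_836   # responded, but with HTTP rather than RDP

    talks_rdp = safe_patched + safe_nla + vulnerable
    print("reliably talk RDP:", talks_rdp)     # about 3.57 million
    print("of which vulnerable:", vulnerable)  # about 0.9 million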
But, since a lot of those "unknowns" are due to transient network errors, then in theory I should be able to rescan them and get some more results. I did this and got the following update:
  28182  SAFE – Target appears patched
  19991  VULNERABLE — got appid
  17560  SAFE – CredSSP/NLA required
    695  SAFE – not RDP but HTTP
A third rescan got the following results:
   9838  SAFE – Target appears patched
   7084  SAFE – CredSSP/NLA required
   6041  VULNERABLE — got appid
   2963  UNKNOWN – network error
     45  SAFE – not RDP but HTTP
Some of these rescans are likely overcoming transient errors that prevented getting results the first time. However, others are likely ISPs with Windows machines moving around from one IP address to another, so that continued rescans are going to get distorted results rather than cleaning up the previous results.
The upshot is that these tests confirm that roughly 950,000 machines are on the public Internet that are vulnerable to this bug. Hackers are likely to figure out a robust exploit in the next month or two and cause havoc with these machines.
There are two things you should do to guard yourself. The first is to apply Microsoft’s patches, including old Windows XP, Windows Vista, and Windows 7 desktops and servers. 
More important, for large organizations, is to fix their psexec problem that allows such things to spread via normal user networking. You may have only one old WinXP machine that's vulnerable, one you don't care about if it gets infected with ransomware. But that machine may have a Domain Admin logged in, so that when the worm breaks in, it grabs those credentials and uses them to log onto the Domain Controller. Then, from the Domain Controller, the worm sends a copy of itself to all the desktops and servers in the organization, using those credentials instead of the vuln. This is what happened with notPetya: the actual vulnerability wasn't the problem, it was psexec that was the problem.
For patching systems, you have to find them on the network. My rdpscan tool mentioned above is good for scanning small networks. For large networks, you'll probably want to do the same masscan/rdpscan combination that I used to scan the entire Internet. On GitHub, rdpscan has precompiled programs that work on the command-line, but the source is there for you to compile it yourself, in case you worry that I'm trying to infect you with a virus.

A lesson in journalism vs. cybersecurity

Post Syndicated from Robert Graham original https://blog.erratasec.com/2019/05/a-lesson-in-journalism-vs-cybersecurity.html

A recent NYTimes article blaming the NSA for a ransomware attack on Baltimore is typical bad journalism. It's an op-ed masquerading as a news article. It cites many sources to support the conclusion that the NSA is to blame, but quotes only a single person, the NSA director, from the opposing side. Yet many experts oppose this conclusion, such as @dave_maynor, @beauwoods, @daveaitel, @riskybusiness, @shpantzer, @todb, @hrbrmstr, … It's not as if these people are hard to find, it's that the story's authors didn't look.


The main reason experts disagree is that the NSA's EternalBlue isn't actually responsible for most ransomware infections. It's almost never used to start the initial infection — that's almost always phishing or website vulns. Once inside, it's almost never used to spread laterally — that's almost always done with Windows networking and stolen credentials. Yes, ransomware increasingly includes EternalBlue as part of its arsenal of attacks, but this doesn't mean EternalBlue is responsible for ransomware.

The NYTimes story takes extraordinary effort to jump around this fact, deliberately misleading the reader to conflate one with the other. A good example is this paragraph:

That link is a warning from last July about the "Emotet" ransomware and makes no mention of EternalBlue. Instead, the story is citing anonymous researchers claiming that EternalBlue has been added to Emotet since that DHS warning.

Who are these anonymous researchers? The NYTimes article doesn’t say. This is bad journalism. The principles of journalism are that you are supposed to attribute where you got such information, so that the reader can verify for themselves whether the information is true or false, or at least, credible.

And in this case, it’s probably false. The likely source for that claim is this article from Malwarebytes about Emotet. They have since retracted this claim, as the latest version of their article points out.

In any event, the NYTimes article claims that Emotet is now "relying" on the NSA's EternalBlue to spread. That's not the same thing as "using", not even close. Yes, lots of ransomware has been updated to also use EternalBlue to spread. However, what ransomware is relying upon is still the Windows-networking/credential-stealing/psexec method. Because the actual source of this quote is anonymous, we the readers have no way of challenging what appears to be a gross exaggeration. The reader is led to believe the NSA's EternalBlue is primarily to blame for ransomware spread, rather than the truth that it's only occasionally responsible.

Likewise, anonymous experts claim that without EternalBlue, “the damage would not have been so vast”:

Again, I want to know who those experts are, and whether this is a fair quote of what they said. What makes ransomware damage "vast" is almost entirely whether it can spread via Windows networking with admin privileges. For the most part, ransomware attacks are binary. They are either harmless, infecting a few desktop computers via a phishing attack, which IT cleans up without trouble. Or, the ransomware gains Domain Admin privileges, then spreads through the entire network via Windows-networking/psexec, which destroys the entire network as we saw in attacks like those in Baltimore and Atlanta.

Yes, it’s true, EternalBlue does make devastating attacks more likely. It’s not for nothing that hackers are including it in their malware. It’s certainly possible that EternalBlue was the thing responsible here, that without it, the “RobinHood” infection might not have spread to the Domain Controllers — and then to the rest of the network via psexec. But the article does not claim this. It’s not citing specific evidence of this fact that we can challenge, but is handwaving over the entire problem, talking in vague generalities that we can’t challenge.

Instead of blaming the NSA, the blame resides with the hackers themselves, or with the city of Baltimore for irresponsible management. Yes, there's good reason to heap some of the blame on the NSA for the WannaCry and notPetya attacks from two years ago, but it's absurd blaming them now. Windows is a system that needs regular patches. Going two years without a patch is gross malfeasance that's hard to lay at the NSA's feet. Even if what experts consider implausible happened, and Baltimore was indeed devastated by the NSA's EternalBlue, then Baltimore has only itself to blame for not patching for two years.

Had the NSA done the opposite thing, notified Microsoft of the vuln instead of exploiting it, then Microsoft would've released a patch for it. In such cases, hackers get around to writing exploits anyway. They likely would not have in the quick time frame of WannaCry and notPetya, which came only a couple months after EternalBlue was first disclosed. But they certainly would have within 2 years. We've seen that with many other bugs where only patches were released. The "Conficker" bug in Windows is still being used 10 years after the patch was released, and hackers independently figured out how to exploit it.

In other words, if EternalBlue is responsible for the Baltimore ransomware attack, it would've been regardless of whether the NSA had weaponized an exploit or done the "responsible" thing and worked with Microsoft to patch it. After two years, exploits would exist either way.

Indeed, the exploit the hackers are including in their malware is often an independent creation and not the NSA's EternalBlue at all. This work shows how much hackers can independently develop these things without help from the NSA. Again, the story seems to credit the NSA for their genius in making the vuln useful instead of "EternalBlueScreen", but for malware/ransomware, it's largely the community that has done this work.

All this expert discussion is, of course, fairly technical. The point isn't that a NYTimes reporter should know all this to begin with, only that they should get both sides of a story and actually interview experts who might have opposing opinions. They should not allow those supporting their claims to hide behind anonymity where technical details cannot be challenged. Otherwise, it's an op-ed pushing an agenda and not a news article reporting the news.

tl;dr:

  • Ransomware devastation spreads primarily through Windows networking/psexec, not exploits like EternalBlue. It's things like psexec that are to blame, not the NSA.
  • Two years after Microsoft releasing a patch, exploits would exist regardless if the NSA had weaponized 0day or followed responsible disclosure, so they aren’t to blame for an exploit being used now.
  • There are experts all over the place with opposing views. That the article ignores them, and protects its own sources behind anonymity, means it's not a journalistic "article" but an "op-ed" pushing an agenda.


By the way, many other experts have great comments I would love to repeat here, comments that would make such a story better. A good example is this one:

The NYTimes story exaggerates EternalBlue as some sort of NSA nation-state superweapon that small organizations are powerless to defend against. The opposite is true. EternalBlue is no worse than any other 0day vuln that organizations routinely defend against. It would not have affected you before two years ago had you followed Microsoft's advice and disabled SMBv1. It would not have affected you since had you kept up with Microsoft's patches. In any case, it's not what's causing the devastation we see from mass ransomware: that's stolen credentials and things like psexec.
Lots of experts have good points that don't align with the NYTimes' agenda. Too bad they have an agenda.


Dave Aitel also has some good comments https://cybersecpolitics.blogspot.com/2019/05/baltimore-is-not-eternalblue.html.

Programming languages infosec professionals should learn

Post Syndicated from Robert Graham original https://blog.erratasec.com/2019/04/programming-languages-infosec.html

Code is an essential skill of the infosec professional, but there are so many languages to choose from. What language should you learn? As a heavy coder, I thought I’d answer that question, or at least give some perspective.

The tl;dr is JavaScript. Whatever other language you learn, you'll also need to learn JavaScript. It's the language of browsers, Word macros, JSON, NodeJS server side, scripting on the command-line, and Electron apps. You'll also need a bit of bash and/or PowerShell scripting skill, SQL for database queries, and regex for extracting data from text files. Other languages are important as well; Python is very popular, for example. Actively avoid C++ and PHP as they are obsolete.

Also tl;dr: whatever language you decide to learn, also learn how to use an IDE with visual debugging, rather than just a text editor. That probably means Visual Code from Microsoft. Also, whatever language you learn, stash your code at GitHub.

Let’s talk in general terms. Here are some types of languages.

  • Unavoidable. As mentioned above, familiarity with JavaScript, bash/Powershell, and SQL are unavoidable. If you are avoiding them, you are doing something wrong.
  • Small scripts. You need to learn at least one language for writing quick-and-dirty command-line scripts to automate tasks or process data. As a tool-using animal, this is your basic tool. You are a monkey; this is the stick you use to knock down the banana. Good choices are JavaScript, Python, and Ruby (there's a short example sketch after this list). Some domain-specific languages can also work, like PHP and Lua. Those skilled in bash/PowerShell can do a surprising amount of "programming" tasks in those languages. Old timers use things like PERL or TCL. Sometimes the choice of which language to learn depends upon the vast libraries that come with the languages, especially Python and JavaScript libraries.
  • Development languages.  Those scripting languages have grown up into real programming languages, but for the most part, “software development” means languages designed for that task like C, C++, Java, C#, Rust, Go, or Swift.
  • Domain-specific languages. The language Lua is built into nmap, snort, Wireshark, and many games. Ruby is the language of Metasploit. Further afield, you may end up learning languages like R or Matlab. PHP is incredibly important for web development. Mobile apps may need Java, C#, Kotlin, Swift, or Objective-C.
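
As an example of the "small scripts" category above, here's a quick-and-dirty Python sketch that pulls IPv4 addresses out of a log file and prints the most common ones; the filename argument and the crude regex are just placeholders for whatever data you actually need to process.

    import re
    import sys
    from collections import Counter

    # Quick-and-dirty: find anything that looks like an IPv4 address
    # in a text file and print the ten most common.
    # Usage: python top_ips.py access.log
    pattern = re.compile(r"\b\d{1,3}(?:\.\d{1,3}){3}\b")

    counts = Counter()
    with open(sys.argv[1]) as f:
        for line in f:
            counts.update(pattern.findall(line))

    for ip, n in counts.most_common(10):
        print(n, ip)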

As an experienced developer, here are my comments on the various languages, sorted in alphabetic order.


bash (and other Unix shells)

You have to learn some bash for dealing with the command-line. But it's also a fairly complete programming language. Peruse the scripts in an average Linux distribution, especially some of the older ones, and you'll find that bash makes up a substantial amount of what we think of as the Linux operating system. Actually, it's called bash/Linux.

In the Unix world, there are lots of other related shells that aren’t bash, which have slightly different syntax. A good example is BusyBox which has “ash”. I mention this because my bash skills are rather poor partly because I originally learned “csh” and get my syntax variants confused.

As a hard-core developer, I end up just programming in JavaScript or even C rather than trying to create complex bash scripts. But you shouldn't look down on complex bash scripts, because they can do great things. In particular, if you are a pentester, the shell is often the only language you'll get when hacking into a system, so good bash skills are a must.

C

This is the development language I use the most, simply because I’m an old-time “systems” developer. What “systems programming” means is simply that you have manual control over memory, which gives you about 4x performance and better “scalability” (performance doesn’t degrade as much as problems get bigger). It’s the language of the operating system kernel, as well as many libraries within an operating system.

But if you don’t want manual control over memory, then you don’t want to use it. It’s lack of memory protection leading to security problems makes it almost obsolete.

C++

None of the benefits of modern languages like Rust, Java, and C#, but all of the problems of C. It’s an obsolete, legacy language to be avoided.

C#

This is Microsoft’s personal variant of Java designed to be better than Java. It’s an excellent development language, for command-line utilities, back-end services, applications on the desktop (even Linux), and mobile apps. If you are working in a Windows environment at all, it’s an excellent choice. If you can at all use C# instead of C++, do so. Also, in the Microsoft world, there is still a lot of VisualBasic. OMG avoid that like the plague that it is, burn in a fire burn burn burn, and use C# instead.

Go

Once a corporation reaches a certain size, it develops its own programming language. For Google, their most important language is Go.

Go is a fine language in general, but its main purpose is scalable network programs using goroutines. Goroutines do asynchronous user-mode programming in a way that's most convenient for the programmer. Since Google is all about scalable network services, Go is a perfect fit for them.

I do a lot of scalable network stuff in C, because I’m an oldtimer. If that’s something you’re interested in, you should probably choose Go over C.

Java

Java gets a bad reputation because it was once designed to run inside browsers, but it has had so many security flaws that it can’t safely be used there. You still find in-browser apps that use Java, even in infosec products (like consoles), but it’s horrible for that. If you do this, you are bad and should feel bad.

But browsers aside, it’s a great development language for command-line utilities, back-end services, apps on desktops, and apps on phones. If you want to write an app that runs on macOS, Windows, and on a Raspberry Pi running Linux, then this is an excellent choice.

JavaScript

As mentioned above, you don’t have a choice but to learn this language. One of your basic skills is learning how to open Chrome developer tools and manipulate JavaScript on a web page.

So the question is whether you learn just enough of the language to hack around with it, or whether you spend the effort to really learn it for development and scripting. I suggest the latter. For one thing, you’ll often encounter weird usages of JavaScript that you won’t recognize unless you’ve seriously learned the language, such as jQuery-style constructions that look nothing like the JavaScript you first learned.

JavaScript has actually become a serious app-development language with NodeJS and frameworks like Electron. If there is one language in the world that can do everything, it’s JavaScript: back-end services (NodeJS), desktop applications (Electron), mobile apps (numerous frameworks), quick-and-dirty scripts (NodeJS again), and browser apps. It’s the lingua franca of the world.

In addition, remember that your choice of scripting language will often be driven by the libraries available for it. For example, if you’re writing TensorFlow machine-learning programs, you need those libraries available to the language; that’s one reason JavaScript has gained a foothold in machine learning, through ports like TensorFlow.js.

BTW, “JSON” is also a language, or at least a data format, in its own right. So you have to learn that, too.
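
If you haven’t seen it, JSON is just nested objects, lists, strings, and numbers written out as text, and every scripting language can parse it. A minimal sketch from Python (the data itself is made up):

    import json

    # A made-up JSON document, the kind every web API hands you.
    text = '{"host": "10.0.0.5", "ports": [22, 80, 443], "up": true}'

    data = json.loads(text)             # text -> native dicts/lists
    print(data["ports"][1])             # prints 80
    print(json.dumps(data, indent=2))   # native objects -> pretty-printed text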

Lua

Lua is a language similar to JavaScript in many respects, with the big difference that arrays start at 1 instead of 0. The reason it exists is that it’s extremely easy to embed in other programs as their scripting language, lightweight in terms of memory/CPU, and portable to almost anywhere.

Thus, you find it embedded in security tools like nmap, snort, and Wireshark. You also see it as the scripting language in popular games. Like Go, it has extremely efficient coroutines, so you also see it in “OpenResty”, an nginx-based web platform, for scripting backend applications.

Perl

Perl was a popular scripting language in the early days of infosec (the 1990s), but has since fallen behind the other languages. In terms of language design, it’s somewhat better than shell languages like bash, yet not quite as robust as real programming languages like JavaScript, Python, and Ruby.

In addition, it was the primary web scripting language for building apps on servers in the 1990s before PHP came along.

Thus, it’s a popular legacy language, but not a lot of new work is done in it.


PHP

Surprisingly, PHP is a complete programming language. You can use it on the command-line to write scripts just like Python or JavaScript. You may have to learn it, because it’s still the most popular language for creating web apps, and learning it well means you can write backend scripts in it as well.

However, for writing new web apps, it’s obsolete. It has so many hard-to-avoid security problems that you shouldn’t use it for new projects, and scalability is still difficult. Use NodeJS, OpenResty/Lua, or Ruby instead.

PowerShell

The comments above about bash also apply to PowerShell, except that PowerShell is for Windows.

Windows has two command-lines: the older CMD/BAT command-line and the newer PowerShell. Anything complex uses PowerShell these days. For pentesting, there are lots of fairly complete tools, written in PowerShell, for doing interesting things from the command-line.

Thus, if Windows is in your field, and it almost certainly is, then PowerShell needs to be part of your toolkit.

Python

This has become one of the most popular languages, driven by universities, which use it heavily as the teaching language for programming concepts. Anything academic, like machine learning, will have great libraries for Python.

A lot of hacker command-line tools are written in Python. Since such tools are often buggy and poorly documented, you’ll end up having to read the code a lot to figure out what is going wrong. Learning to program in Python means being able to contribute to those tools.

I personally hate the language because of the schism between v2/v3, and having to constantly struggle with that. Every language has a problem with evolution and backwards compatibility, but this v2 vs v3 issue with Python seems particularly troublesome.

Also, Python is slow. That shouldn’t matter in this age of JITs everywhere and things like WebAssembly, but somehow whenever you have an annoyingly slow tool, it’s Python that’s at fault.

Note that whenever I read reviews of programming languages, I see praise for Python’s syntax. This is nonsense. After a short while, the syntax of every programming language comes to feel quirky and weird. Most languages these days are multi-paradigm, a combination of imperative, object-oriented, and functional, and almost all are JITted. “Syntax” is the least important reason to choose a language. Instead, choose based on support/libraries (which are great for Python), or specific features like tight “systems” memory control (Rust) or scalable coroutines (Go). Seriously, stop praising the “elegant” and “simple” syntax of languages.

Regex

Like SQL for database queries, regular expressions aren’t a programming language as such, but still a language you need to learn. They are patterns that match data. For example, if you want to find all social security numbers in a text file, you look for that pattern of digits and dashes. Such pattern matching is so common that it’s built into most tools, and is a feature of most scripting languages.
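
For instance, a minimal sketch in Python: the SSN pattern is just three digits, a dash, two digits, a dash, four digits, and essentially the same regex syntax works in grep, JavaScript, and nearly everything else.

    import re

    # Three digits, a dash, two digits, a dash, four digits, on word boundaries.
    ssn = re.compile(r"\b\d{3}-\d{2}-\d{4}\b")

    text = "employee 4821, SSN 123-45-6789, hired 2019-04-01"
    print(ssn.findall(text))   # -> ['123-45-6789']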

One thing to remember from an infosec point of view is that regexes are easy to get wrong in insecure ways. Hackers craft content that matches patterns it shouldn’t, evades patterns it should match, or triggers “algorithmic complexity” attacks that make simple regexes explode into excessive computation.
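
The classic “algorithmic complexity” case is a nested quantifier that backtracks exponentially on input that almost matches. A small sketch in Python; the pattern is deliberately bad, and raising the input length much further will hang the process:

    import re
    import time

    # Nested quantifiers like (a+)+ give the engine exponentially many ways
    # to split the input when the overall match ultimately fails.
    evil = re.compile(r"^(a+)+$")

    for n in (10, 15, 20, 24):
        start = time.time()
        evil.match("a" * n + "b")   # the trailing "b" forces the failure
        print(n, "characters:", round(time.time() - start, 3), "seconds")
    # An attacker who controls the input can stall the matcher with a short string.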

You have to learn enough regex to be familiar with the basics, but the syntax can get unreasonably complex, and few people master all of it.

Ruby

Ruby is a great language for writing web apps, and it makes security easier than PHP does, though like all web frameworks it still has issues.

In infosec, the major reason to learn Ruby is Metasploit.

Like Python and JavaScript, it’s also a great command-line scripting language with lots of libraries available. You’ll find it often used in this role.

Rust

Rust is Mozilla’s replacement for C and especially C++. It supports tight control over memory layout for “systems” programming, but is memory-safe, so it doesn’t have all those vulnerabilities. One of these days I’ll stop programming in C and use Rust instead.

The problem with Rust is that it doesn’t have quite the support that other languages have, like Java or C# for apps, and isn’t as tightly focused on network apps as Go. But as a language, it’s wonderful. In a perfect world, we’d all use JavaScript for scripting tasks and Rust for the backend work. But in the real world, other languages have better support.

SQL

SQL, “structured query language”, isn’t a programming language as such, but it’s still a language of some sort. It’s something you unavoidably have to learn.

One of the reasons to learn a programming language is to process data. You can do that within the language itself, but an alternative is to shove the data into a database and then write queries against it. I have a server at home just for that purpose, with large disks and multicore processors. Instead of storing things as files and writing scripts to process those files, I stick the data in tables and write SQL queries against those tables.
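
As a minimal sketch of that workflow, here’s data going into SQLite and a query coming back out, driven from Python; the table and column names are made up for illustration.

    import sqlite3

    # A throwaway database file standing in for the big server described above.
    db = sqlite3.connect("scans.db")
    db.execute("CREATE TABLE IF NOT EXISTS hits (ip TEXT, port INTEGER, banner TEXT)")
    db.executemany("INSERT INTO hits VALUES (?, ?, ?)", [
        ("10.0.0.5", 22, "OpenSSH"),
        ("10.0.0.5", 80, "nginx"),
        ("10.0.0.9", 22, "OpenSSH"),
    ])
    db.commit()

    # The query is where SQL earns its keep: grouping, counting, sorting.
    for port, n in db.execute(
            "SELECT port, COUNT(*) AS n FROM hits GROUP BY port ORDER BY n DESC"):
        print(port, n)

The same queries work largely unchanged against a real database server once the data outgrows a single machine.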


Swift

Back in the day, when computers were new, before C++ became the standard “object-oriented” language, there was a competing object-oriented version of C known as “Objective-C”. Because, as everyone knew, object-oriented was the future, NeXT adopted it as their application programming language. Apple then bought NeXT, and thus it became Apple’s programming language.

But Objective-C lost the object-oriented war to C++ and became an orphaned language. It was also really stupid: essentially two separate language syntaxes fighting for control of your code.

Therefore, a few years ago, Apple created a replacement called Swift, which borrows heavily from modern languages like Rust. Like Rust, it’s an excellent “systems” programming language that gives you more manual control over memory allocation, but without all the buffer overflows and memory leaks you see in C.

It’s an excellent language, and great when programming in an Apple environment. However, for work that isn’t particularly Apple-focused, just choose Rust instead.

Conclusion

As I mentioned above, familiarity with JavaScript, bash/PowerShell, and SQL is unavoidable. So start with those. JavaScript in particular has become a lingua franca, able to do, and do well, almost anything you need a language to do these days, so it’s worth learning the finer details of JavaScript.

However, there’s no One Language to Rule Them All. There are good reasons to learn most of the languages in this list. For some tasks, the support in a certain language is so good that it’s best to learn that language for that task. With the academic focus on Python, you’ll find well-written libraries that solve important problems for you. If you want a language that other people know, and that you can ask questions about, then Python is a great choice.

The exceptions to this are C++ and PHP. They are so obsolete that you should avoid learning them, unless you plan on dealing with legacy code.

Was it a Chinese spy or confused tourist?

Post Syndicated from Robert Graham original https://blog.erratasec.com/2019/04/was-it-chinese-spy-or-confused-tourist.html

Politico has an article from a former spy analyzing whether the “spy” they caught at Mar-a-Lago (Trump’s Florida vacation spot) was actually a “spy”. I thought I’d add to it from a technical perspective, covering her malware, USB drives, phones, cash, and so on.

The part that has gotten the most press is that she had a USB drive with evil malware. We’ve belittled the Secret Service agents who infected themselves, and we’ve used this as the most important reason to suspect she was a spy.

But it’s nonsense.

It could be something significant, but we can’t know that based on the details that have been reported. What the Secret Service reported was that it “started installing software”. That’s a symptom of a USB device installing drivers, not malware. Common USB devices, such as WiFi adapters, Bluetooth adapters, microSD readers, and 2FA keys look identical to flash drives, and when inserted into a computer, cause Windows to install drivers.

Visibly “installing files” is not a symptom of malware. When malware does its job right, there are no symptoms; it installs invisibly in the background. That’s the entire point of malware, that you don’t know it’s there. This isn’t to say there would be no visible evidence. A popular way of hacking desktops with USB drives is to emulate a keyboard/mouse that quickly types commands, which causes some visual artifacts on the screen. It’s just that “installing files” does not point to malware as the most likely explanation.

That it was “malware” instead of something normal is just the standard trope that anything unexplained is proof of hackers/viruses. We have no evidence it was actually malware, and the evidence we do have suggests something other than malware.

Lots of travelers carry wads of cash. I carry ten $100 bills with me, hidden in my luggage, for emergencies. I’ve been caught before when my credit card company’s fraud detection triggered in a foreign country, leaving me with nothing. It’s very distressing, hence the cash.

The Politico story mentioned that the “spy” also has a U.S. bank account, and thus cash wasn’t needed. Well, I carry that cash for domestic travel, too; it isn’t just for international trips. In any case, the U.S. may have been just one stop on a multi-country itinerary. I’ve taken several “round the world” trips where I kept flying one direction, such as east, until I got back home. $8k is in the range of cash that such travelers carry.

The same is true of phones and SIMs. Different countries have different frequencies and technologies. In the past, I’ve traveled with as many as three phones (US, Japan, Europe). It’s gotten better with modern 4G phones, where my iPhone Xs should work everywhere. (Though it’s likely going to diverge again with 5G, as the U.S. goes on a different path from the rest of the world.)

The same is true with SIMs. In the past, you pretty much needed a different SIM for each country; arrival at the airport meant going to the kiosk to get a SIM for $10, and at the end of a long itinerary I’d arrive home with several SIMs. These days, with so many “MVNOs”, such as Google Fi, this is radically less necessary. Even so, the fact that the latest high-end phones all support dual SIMs shows it’s still an issue.

Thus, the evidence so far is that of a normal traveler. If these SIMs/phones are indeed about spying, we would need additional evidence to show it. A quick analysis of the accounts associated with the SIMs and of the contents of the phones should tell us whether she’s a traveler or a spy.

Normal travelers may be concerned about hidden cameras. There’s this story about Korean hotels filming guests, and this other one about AirBNB problems.

Again, we are missing salient details. In the old days, such detectors were analog devices, because secret spy cameras were analog. These days, new equipment is almost always WiFi-based. You’d detect more by running software on your laptop that looks for the MAC addresses of camera makers than you would with those older analog detectors. Or there are tricks that look for light glinting off lenses.
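
A hedged sketch of what that software might look like: dump the ARP table of the network you’re on and flag MAC-address prefixes (OUIs) assigned to camera vendors. The prefixes below are placeholders, not real vendor assignments; a real list would come from the public IEEE OUI registry.

    import re
    import subprocess

    # Placeholder OUIs standing in for real camera-vendor prefixes.
    CAMERA_OUIS = {"00:11:22", "aa:bb:cc"}

    # Ask the OS for its ARP table; the command exists on Windows, macOS, and most Linux.
    output = subprocess.run(["arp", "-a"], capture_output=True, text=True).stdout

    for mac in re.findall(r"(?:[0-9a-f]{2}[:-]){5}[0-9a-f]{2}", output, re.I):
        oui = mac.lower().replace("-", ":")[:8]
        if oui in CAMERA_OUIS:
            print("possible camera on this network:", mac)

It only sees devices on the same network segment you’re on, so it’s a spot check rather than a guarantee.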

Thus, the “hidden camera detector” sounds to me more like a paranoid traveler than a spy.

One of the frequently discussed things is her English-language skills. As the Politico story above notes, her “constant lies” can be explained by difficulties speaking English. In other stories, the agents claim that she both understood and spoke English well.

Both can be true. The ability to speak a foreign language isn’t binary, on or off. I speak French and German at a middle skill level: in some cases I can hold a conversation with apparent fluency, while in other cases I’m at a complete loss.

One issue is that comprehension varies wildly with the speaker. I can understand French news broadcasts with little difficulty, with nearly 100% comprehension. On the other hand, watching non-news French TV, like sitcoms, my comprehension drops to near 0%. The same is true of individuals: I may understand nearly everything one person says while understanding nearly nothing of what another person says.

And 99% comprehension is still far from 100%. I frequently understand large sections except for one essential key word. Listening to French news, I might understand everything in a story about some event in another country, but miss the country’s name at the start. Yes, I know there were storms, mudslides, floods, 100,000 without power, 300 deaths; I just haven’t a clue where in the world that happened.

Diplomats around the world recognize this. They often speak English well, use English daily, and yet in formal functions they still use translators, because there’s always a little bit they won’t understand.

Thus, we can’t take at face value any claim by the Secret Service that her language skills were adequate.

So in conclusion, we don’t see evidence pointing to a spy. Instead, we see a careful curation of evidence by the Secret Service and reporters to push the spying story. We haven’t seen any reporter ask what, other than malware, could cause a USB device to start installing software. She may be a spy, of course, but so far there’s no evidence of anything other than a confused/crazy tourist.