All posts by Robert Graham

No, that’s not how warranty expiration works

Post Syndicated from Robert Graham original https://blog.erratasec.com/2020/10/no-thats-not-how-warrantee-expiration.html

The NYPost Hunter Biden story has triggered a lot of sleuths obsessing on technical details trying to prove it’s a hoax. So far, these claims are wrong. The story is certainly bad journalism aiming to misinform readers, but it has not yet been shown to be a hoax.

In this post, we look at the claim that the timelines don’t match up with the manufacturing dates of the drives. Sleuths claim to prove the drives were manufactured after the events in question, based on serial numbers.

What this post will show is that the theory is wrong. Manufacturers pad warranty periods. Thus, you can’t assume a date of manufacture based upon the end of a warranty period.

The story starts with Hunter Biden (or associates) dropping off a laptop at a repair shop because of water damage. The repair shop made a copy of the laptop’s hard drive, stored on an external drive. Later, the FBI swooped in and confiscated both the laptop and that external drive.

The serial numbers of both devices are listed in the subpoena published by the NYPost:

You can enter these serial numbers in the support pages at Apple (FVFXC2MMHV29) and Western Digital (WX21A19ATFF3) to discover precisely what hardware this is, and when the warranty periods expire — and presumably, when they started.

In the case of that external drive, the 3-year warranty expires May 17, 2022 — meaning the drive was manufactured on May 17, 2019 (or so they claim). This is a full month after the claimed date of April 12, 2019, when the laptop was dropped off at the repair shop.

There are lots of explanations for this. One is that the drive subpoenaed by the government (on Dec 9, 2019) was a copy of the original drive.

But a simpler explanation is this: warranty periods are padded by the manufacturer by several months. In other words, if the warranty ends May 17, the drive was probably manufactured in February.

I can prove this. Coincidentally, I purchased a Western Digital drive a few days ago. If we used the same logic as above to work backward from warranty expiration, we’d conclude the drive was manufactured 7 days in the future.

Here is a screenshot from Amazon.com showing I purchased the drive Oct 12.

Here is a picture of the drive itself, from which you can read the serial number:

The Date of Manufacture (DOM) is printed right on the device as July 31, 2020.

But let’s see what Western Digital reports as the end of the warranty period:

We can see that the warranty ends on Oct 25, 2025. According to Amazon where I purchased the drive, the warranty period is 5 years:

Thus, if we were to insist on working back from the expiration date precisely 5 years, that means this drive was manufactured 7 days in the future. Today’s date is Oct 16, the warranty starts Oct 23.

The reality is that Western Digital has no idea when the drive will arrive, and hence when I (as the consumer) expect the warranty period to start. Thus, they pad the period by a few months to account for how long they expect the device to be in the sales channel, the period between manufacture and when the device is likely to arrive at the customer. Computer devices depreciate rapidly, so they are unlikely to be in the channel more than a few months.

Thus, instead of proving the timeline wrong, the serial number and warranty expiration show the timeline is right. This is exactly the sort of thing you’d expect if the repair shop recovered the files onto a new external drive.

Another issue in the thread is about the “recovery” of files, which the author claims is improbable. In Apple’s latest MacBooks, if the motherboard is damaged, then it’s impractical to recover the data from the drive. These days, in the year 2020, the SSDs inside notebooks are soldered right onto the motherboard and, moreover, encrypted with a TPM chip on the motherboard.

But here we are talking about a 2017 MacBook Pro which apparently had a removable SSD. Other notebooks by Apple have had special connectors for reading SSDs from dead motherboards. Thus, recovery of files for notebooks of that era is not as impossible as it sounds.

Moreover, maybe the repair shop fixed the notebook. “Water damage” varies in extent. It may have been possible to repair the damage and boot the device, at least in some sort of recovery mode.

Conclusion

Grabbing serial numbers and looking them up is exactly what hackers should be doing in stories like this. Challenging the narrative is great — especially with regards to the NYPost story, which is clearly bad journalism.

On the other hand, it goes both ways. We should be even more concerned about challenging those things that agree with us. This is a great example — it appears we’ve found conclusive evidence that the NYPost story was a hoax. We need to carefully challenge that, too.

No, font errors mean nothing in that NYPost article

Post Syndicated from Robert Graham original https://blog.erratasec.com/2020/10/no-font-errors-mean-nothing-in-that.html

The NYPost has an article on Hunter Biden emails. Critics claim that these don’t look like emails, and that there are errors with the fonts, thus showing they are forgeries. This is false. This is how Apple’s “Mail” app prints emails to a PDF file. The font errors are due to viewing PDF files within a web browser — you don’t see them in a PDF app.

In this blogpost, I prove this.

I’m going to do this by creating a forged email. The point isn’t to prove the emails weren’t forged; they could easily have been — the NYPost didn’t do the due diligence to prove they weren’t. The point is simply that these supposedly inexplicable problems aren’t evidence of forgery. All emails printed by the Mail app to a PDF, then displayed with Scribd, will look the same way.

To start with, we are going to create a simple text file on the computer called “erratarob-conspire.eml”. That’s what email messages are at the core — text files. I use Apple’s “TextEdit” app on my MacBook to create the file.

The structure of an email is simple. It has a block of “metadata” consisting of header fields, each a name and a value separated by a colon “:” character. This block ends with a blank line, after which we have the contents of the email.
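As a generic illustration (these addresses and contents are invented for this example, not the actual contents of my file), such a text file looks like this:

    From: Alice Example <alice@example.com>
    To: Bob Example <bob@example.com>
    Subject: Lunch on Friday
    Date: Wed, 14 Oct 2020 09:30:00 -0400

    Want to grab lunch on Friday?

Everything above the blank line is metadata; everything below it is the message body.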

Clicking on the file launches Apple’s “Mail” app. It opens the email and renders it on the screen like this:
Notice how the “Mail” app has reformatted the metadata. In addition to displaying the email, it’s making it simple to click on the names to add them to your address book. That’s why there is a (VP) to the right on the screen — it creates a placeholder icon for every account in your address book. I note this because in my version of Mail, the (VP) doesn’t get printed to the PDF, but it does appear in the PDF on the NYPost site. I assume this is because their Mail app is 3 years older than mine.
One thing I can do with emails is to save them as a PDF document.

This creates a PDF file on the disk that we can view like any other PDF file. Note that yet again, the app has reformatted the metadata, differently from both how it was displayed on the screen and how it appears in the original email text.

Sometimes a web page, such as this one, wants to display the PDF within the web page. The Scribd website can be used for this purpose, causing PDFs to appear like below:

Erratarob Conspire by asdfasdf

How this shows up on your screen will change depending on a lot of factors. Most people, though, will see slight font problems, especially in the name “Hunter Biden”. Below is a screenshot of how it appears in my browser. You can clearly see how the ‘n’ and ‘t’ characters crowd each other in the name “Hunter”.

Again, while this is a fake email message, any real email message would show the same problems. It’s a consequence of the process of generating a PDF and using Scribd. You can just click through on Scribd to download the original PDF (either mine or the one on the NYPost site), and then use your favorite PDF viewing app. This gets rid of Scribd’s rendering errors.

Others have claimed that this isn’t how email works, that email clients always show brackets around email addresses, using the < and > characters. Usually, yes, but not in all cases. Here, Apple’s “Mail” app is clearly doing a lot more work to make things look pretty, and doesn’t show them.

There are some slight differences between what my 2020 MacBook produces and what the original NYPost article shows. As we can see from the metadata on their PDF, it was produced by a 2017 MacBook. My reproduction isn’t exact, but it’s close enough that we don’t need to doubt it.
We would just apply Occam’s Razor here. Let’s assume that the emails were forged. Then the easiest way would be to create a text document like I’ve shown above and open it in an email client to print out the message. It took me less than a minute, including carefully typing an unfamiliar Russian name. The hardest way would be to use Photoshop or some other technique to manipulate pixels, causing those font errors. Therefore, if you see font problems, the most likely explanation is simply “something I don’t understand” and not “evidence of the conspiracy”.
Conclusion

The problem with conspiracy theories is that everything not explained is used to “prove” the conspiracy.
We see that happening here. If there are unexplained formatting errors in the information the NYPost published, and the only known theory that explains them is a conspiracy, then they prove the conspiracy.
That’s stupid. Unknown things may simply be unknown; just because you can’t explain them doesn’t mean they are unexplainable. That’s what we see here: people have convinced themselves they have “proof” because of unexplainable formatting errors, when in fact, such formatting can be explained.
The NYPost story has many problems. It is data taken out of context in an attempt to misinform the reader. We know it’s a garbage story, even if all the emails are authentic. We don’t need to invent conspiracy theories to explain it.

Yes, we can validate leaked emails

Post Syndicated from Robert Graham original https://blog.erratasec.com/2020/10/yes-we-can-validate-leaked-emails.html

When emails leak, we can know whether they are authentic or forged. It’s the first question we should ask of today’s leak of emails of Hunter Biden. It has a definitive answer.

Today’s emails have “cryptographic signatures” inside the metadata. Such signatures have been common for the past decade as one way of controlling spam, to verify the sender is who they claim to be. These signatures verify not only the sender, but also that the contents have not been altered. In other words, it authenticates the document, who sent it, and when it was sent.

Crypto works. The only way to bypass these signatures is to hack into the servers. In other words, when we see a 6 year old message with a valid Gmail signature, we know either (a) it’s valid or (b) they hacked into Gmail to steal the signing key. Since (b) is extremely unlikely, and if they could hack Google they could do a ton of more important things with that access, we have to assume (a).

Your email client normally hides this metadata from you, because it’s boring and humans rarely want to see it. But it’s still there in the original email document. An email message is simply a text document consisting of metadata followed by the message contents.

It takes no special skills to see metadata. If the person has enough skill to export the email to a PDF document, they have enough skill to export the email source. If they can upload the PDF to Scribd (as in the story), they can upload the email source. I show how to below.

To show how this works, I send an email using Gmail to my private email server (from gmail.com to robertgraham.com).

The NYPost story shows the email printed as a PDF document. Thus, I do the same thing when the email arrives on my MacBook, using the Apple “Mail” app. It looks like the following:

The “raw” form originally sent from my Gmail account is simply a text document that looked like the following:

This is rather simple. Clients insert details like a “Message-ID” that humans don’t care about. There are also internal formatting details, like the fact that this is a “plain text” message rather than an “HTML” email.

But this raw document was the one sent by the Gmail web client. It then passed through Gmail’s servers, then was passed across the Internet to my private server, where I finally retrieved it using my MacBook.
As email messages pass through servers, the servers add their own metadata.
When it arrived, the “raw” document looked like the following. None of the important bits changed, but a lot more metadata was added:

The bit you care about here is the “DKIM-Signature:” metadata.

This is added by Gmail’s servers, for anything sent from gmail.com. It “authenticates” or “verifies” that this email actually did come from those servers, and that the essential content hasn’t been altered. The long strings of random-looking characters are the “cryptographic signature”. That’s what all crypto is based upon — long chunks of random-looking data.
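For illustration, a Gmail DKIM-Signature header looks roughly like the following (the hash and signature values are elided here, and the exact selector and header list vary):

    DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com;
            s=20161025; h=mime-version:from:date:message-id:subject:to;
            bh=...; b=...

The “d=” tag names the signing domain (gmail.com), “h=” lists which headers are covered, “bh=” is a hash of the body, and “b=” is the signature itself.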

To extract this document, I used Apple’s “Mail” client program and selected “Save As…” from the “File” menu, saving as “Raw Message Source”.

I uploaded this document to Scribd so that anybody can download and play with it, such as verifying the signature.
To verify the email signature, I simply open the email document using Thunderbird (Mozilla’s email client) with the “DKIM Verifier” extension, which validates that the signature is indeed correct. Thus we see it’s a valid email sent by Gmail and that the key headers have not been changed:

The same could be done with those emails from the purported Hunter Biden laptop. If they can be printed as a PDF (as in the news story) then they can also be saved in raw form and have their DKIM signatures verified.

This sort of thing is extraordinarily easy, something anybody with minimal computer expertise can accomplish. It would go a long way toward establishing the credibility of the story, proving that the emails were not forged. The lack of such verification leads me to believe that nobody with even minimal computer expertise was involved in the story.
The story contains the following paragraph about one of the emails recovered from the drive (the smoking gun claiming Pozharskyi met Joe Biden), claiming how it was “allegedly sent”. Who alleges this? If they have the email with a verifiable DKIM signature, no “alleging” is needed — it’s confirmed. Since Pozharskyi used Gmail, we know the original would have had a valid signature.

Leaving allegations unconfirmed when they could easily be confirmed seems odd for a story of this magnitude.

Note that the NYPost claims to have a copy of the original, so they should be able to do this sort of verification:

However, while they could in theory, it appears they didn’t in practice. The PDF displayed in the story is up on Scribd, allowing anybody to download it. PDFs, like email, also have metadata, which most PDF viewers will show you. It appears this PDF was not created after Sunday when the NYPost got the hard drive, but back in September when Trump’s allies got the hard drive.
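Reading that metadata takes no special tools either. Here is a minimal sketch, assuming the Python “pypdf” library; the filename is hypothetical:

    # a minimal sketch: dump the creation date and producer from a PDF's metadata
    from pypdf import PdfReader

    reader = PdfReader("nypost-email.pdf")   # hypothetical filename
    info = reader.metadata
    print(info.creation_date)   # when the PDF was generated
    print(info.producer)        # the software (and often OS version) that generated it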

Conclusion

It takes no special skills to do any of this. If the person has enough skill to export the email to a PDF document, they have enough skill to export the email source. Instead of “Export to PDF”, select “Save As … Raw Message Source”. Instead of uploading the .pdf file, upload the resulting .txt to Scribd.
At this point, a journalist wouldn’t need to verify DKIM, or consult an expert: anybody could verify it. There are a ton of tools out there that can simply load that raw source email and verify it, such as the Thunderbird example I did above.
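For example, here is a minimal sketch using the Python “dkimpy” library (one of many tools that can do this; any DKIM verifier works). It fetches the signer’s public key from DNS, so it needs network access, and it only works if the domain still publishes the selector’s key:

    # a minimal sketch: verify the DKIM signature on a raw email message
    import dkim   # pip install dkimpy

    with open("raw-message.eml", "rb") as f:
        raw = f.read()

    # True if the signature validates, i.e. the signed headers and body are unaltered
    print(dkim.verify(raw))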

Factcheck: Regeneron’s use of embryonic stem cells

Post Syndicated from Robert Graham original https://blog.erratasec.com/2020/10/factcheck-regenerons-use-of-embryonic.html

This week, Trump’s opponents misunderstood a Regeneron press release to conclude that the REGN-COV2 treatment (which may have saved his life) was created from stem cells. When that was proven false, his opponents nonetheless deliberately misinterpreted events to conclude there was still an ethical paradox. I’ve read the scientific papers, and it seems like this is an issue that can be understood with basic high-school science, so I thought I’d write up a detailed discussion.

The short answer is this:

  • The drug is not manufactured in any way from human embryonic tissues.
  • The drug was tested using fetal/embryonic cells, but ones almost 50 years old, not new ones.
  • Republicans want to stop the use of new embryos; the ethical issue here is the continued use of old embryos, which Republicans have consistently agreed to.
  • Yes, the drug is still tainted by the “embryonic stem cell” issue — just not in any of the ways that people claim it is, and not in a way that makes Republicans inconsistent.
  • Almost all medical advances of the last few decades are similarly tainted.
Now let’s do the long, complicated answer. This starts with a discussion of the science of Regeneron’s REGN-COV2 treatment.
A well-known treatment that goes back decades is to take blood plasma from a recently recovered patient and give it to a recently infected patient. Blood plasma is blood with the cells removed, leaving behind water, salts, other particles and, most importantly, “antibodies”. This is the technical concept behind the movie “Outbreak”, though of course that movie completely distorts the science.
Antibodies are produced by the immune system to recognize and latch onto foreign things, including viruses (the rest of this discussion assumes “viruses”). They either deactivate the virus particle, or mark it to be destroyed by other parts of the immune system, or both.
After an initial infection, it takes a while for the body to produce antibodies, allowing the disease to rage unchecked. A massive injection of antibodies during this time allows the disease to be stopped before it gets very far, letting the body’s own defenses catch up. That’s the premise behind Trump’s treatment.
An alternative to harvesting natural antibodies from recently recovered patients is to manufacture artificial antibodies using modern science. That’s what Regeneron did.

An antibody is just another “protein”, the building blocks of the body. The protein is in the shape of a Y with the two upper tips formed to lock onto the corresponding parts of a virus (“antigens”). Every new virus requires a new antibody with different tips.

The SARS-CoV-2 virus has “spike” proteins on its surface that allow it to invade the cells in our lungs. They act like a crowbar, jamming themselves into the cell wall, then opening up a hole to allow the rest of the virus inside. Since this is the important and unique protein of the virus, it’s what most antibodies will want to lock onto.
Proteins are created from genes. A new protein, like an antibody with tips identifying a new virus, needs a new gene to create it. In other words, we need new DNA.
This happens in “white blood cells”. Inside these cells, the gene that makes antibodies can easily mutate. When the white blood cell encounters a new object, it mutates that gene slightly, makes some new antibodies, and tests them against the foreign object. The cell then divides, and the child cells do the same thing. Each generation gets better and better and better at creating antibodies. Those tips of the antibody become better and better at locking onto the infecting virus.
Before anybody goes down the Lamarckian genetics rabbit hole, we should point out that these genes are not passed down to children. Only a few white blood cells change their DNA, and this doesn’t affect any other cells, especially not the ones in your gonads.
The way Regeneron makes its treatment is to harvest the white blood cells, extract the gene that makes the antibody, then stick that gene inside some hamster cells to produce copious amounts of the antibody. (Yes, hamsters, but we’ll get to that.)
Sometimes human subjects aren’t available as a source of white blood cells. For example, let’s consider a disease that hasn’t infected humans yet, but which has a potential to do so. In that case, you need a factory for white blood cells that isn’t human.
Regeneron has a solution for this: transgenic mice that have the important parts of the human immune system grafted in. This allows them to inject things into the mice to cause this hypermutation of the antibody gene, which they can then harvest.
In the case of their REGN-COV2 treatment, Regeneron used both mice and men. They gathered about 200 candidate antibody genes from both sources.
Remember: each time white blood cells mutate to create an antibody, they’ll do it differently. That means everybody’s antibodies are different even though the disease is the same. Even a single patient will have multiple strains of white blood cells mutating in different directions, creating different antibodies, for the same thing.
Thus, from 32 mice and a few human patients, Regeneron got around 200 candidate antibody genes. They then reduced this number down to 4 really good candidates, and then 2 (one from a human, one from a mouse) that they decided to use for manufacturing. These were sent to the hamster factory.
It’s at this point we need to talk about hamsters and immortalized cell lines.
You can keep tissues alive outside the body in a nutrient bath, but they won’t grow on their own. In some cases, though, you can cause them to grow without end, in which case you’ll have an endless supply of those cells. The cell line has then become immortal. This is especially true if the original cells came from a cancer — that’s what cancer is: the thing that prevents cells from dividing has broken, and they grow out of control.
Of the many immortalized cell lines used by researchers, some come from adults who consented, some from adults who were never asked (such as the famous HeLa line), some from animals, and of course, some from embryos/fetuses.
One important cell line comes from a Chinese hamster ovary (CHO) that was smuggled out of China. It’s become the preferred source for producing mammal proteins that can’t be produced from yeasts or bacteria. In other words, simple proteins like insulin can be produced from yeast, but complex proteins like antibodies can only be produced within mammal cells. They insert a human gene into the cell, then encourage it to start dividing into billions of cells, each containing a copy of that gene.
Note that while the CHO cell line is used about 50% of the time in this sort of case, human cell lines are used about 20% of the time. The two human cell lines for doing this are known as HEK293 and PER.C6. Once Regeneron decided upon which genes it wanted to manufacture, it inserted those genes into Chinese hamster ovary (CHO) cells to mass produce the drug. The fact that it was CHO and not the human cell lines is pretty important to this story.
Immortalized cell lines appear in other places in our story. When selecting which of the 200 candidate antibodies it wanted to mass produce, Regeneron tested them for efficacy. It tested against tissues in vitro (in a test tube using immortalized cell lines) rather than in vivo (inside a human body). One cell line is “Calu-3”, derived from a 25-year-old lung cancer patient in 1975. Another cell line is “Vero”, derived from the kidney of an African green monkey in 1962.
A third test uses proteins made from the “HEK293” cell line, from the kidney of a human fetus aborted around 1972-1973 in the Netherlands. This is the center of the current controversy.
This test wasn’t necessary to the creation of REGN-COV2. It was included with the tests because other researchers used the technique, and that’s what science does: replicate the work of other researchers.
I mention this because while people have reluctantly agreed that REGN-COV2 isn’t manufactured from embryos (from the HEK293 or PER.C6 cell lines), they insist that because of this test, it couldn’t have been made without embryonic cells. This is not true; the test wasn’t necessary. In addition, the test could’ve been done in a different way, using a different source for the proteins involved. Vaccines are tested in similar ways, some using the ethically questionable cell lines, some not.
But the results are still ethically tainted. The point here isn’t to excuse this taint, but simply to point out that it’s a different type of taint. There’s a range of ethical questions here. The issue is being politicized as if it were one particular ethical question, when it’s really a different ethical question.
This is a good time to talk about the ethics of embryonic stem cells. There are a lot of different technical issues here.
The major issue that upsets Republicans is the harvesting of new material from blastocysts, embryos, and aborted fetuses. This is a wholly separate question from continuing to use old material from 50 years ago.
When President George Bush was elected in 2000, he instituted rules forbidding the harvesting of new material, but allowing the continued use of old material (within the United States). The continued use of HEK293 was explicitly allowed. Likewise, Trump issued an executive order limiting stem cell research. It explicitly targeted harvesting new embryonic cells, while at the same time explicitly allowing existing lines like HEK293.
Thus, if you are trying to show that Republicans are hypocrites, that their rules change when their own life is at stake, then the evidence doesn’t support your conclusion. Even if the HEK293 cell line had been used for manufacturing instead of testing, it still would be consistent with Republican positions. Their concern is to stop the exploitation of new embryos.
Now for Catholics, things might be different. The Vatican has repeatedly come down against using old material like HEK293 [a] [b]. They view it along the same lines as using research from Nazi medical experiments on Jews in concentration camps. People ask the ethical question whether the event was so offensive that the medical knowledge can’t be used, even if it saves lives. Even here, though, Catholics have a more nuanced view, allowing such things to be used in practice when there is no alternative.
From that perspective, all medical research is tainted. For example, our knowledge of vaccines traces back to Edward Jenner’s testing on an unwitting 8-year-old boy. Ethics have been continually changing throughout history; if we rejected all knowledge from what we now consider to be unethical sources, we wouldn’t have any medicine. When HEK293 was acquired 50 years ago, it was under a different understanding of ethics than we have today.
Cell lines like the 50-year-old HEK293 are used to test almost every drug. Google those letters and any of the other drugs Trump took in response to his infection, and you’ll find they are all tainted. Moreover, many of the upcoming vaccines will also use such cell lines to test their efficacy. This may still be an ethically important question, but it’s not the politicized question at stake here.
This piece has tried to avoid getting into the technical weeds. For example, the HEK293 cells aren’t “stem cells” but “kidney cells”. But HEK293 still comes from an aborted fetus, and thus has the same ethical issue as what people understand by “embryonic stem cells”. Instead, I tried to look at the technical issues I feel do matter, like whether this is researchers using a 50-year-old line that Republicans have consistently agreed to, versus newly harvested material which they vehemently oppose. Theoretically, somebody could have an issue with “stem cells” even when they come from bone marrow or cord blood, in which case, this article is not for you. I’m pretty sure no such people exist, except those who misunderstand the science. If you feel I’ve glossed over a technical issue (or gotten it wrong), please tell me https://twitter.com/ErrataRob.
Conclusion
This piece is not a defense of Trump but of science. Please vote for Biden on November 3. European countries with leaders to the left of Biden are nonetheless still prosperous. Conversely, when otherwise prosperous democracies have failed, it’s because of leaders as unfit and corrupt as Trump.
This issue started when people gleefully believed they had caught Trump in a trap. When this was proven a misconception, they went searching for other ties to stem cells, and found them. This is still a gross distortion of science — every modern medical treatment can be found to be tainted if you look hard enough. Trying to rescue your misconception by jumping through hoops like this makes you look worse, not better.
The MIT Technology Review article cited above is a particularly egregious example of the politicization of science. It cites Trump’s order on embryonic stem cells while knowingly avoiding what the order actually said, that it was about new vs. old embryos. They knowingly distorted the information to make it look like the consistent position was inconsistent. They knowingly distorted the science to make political points.
There are important areas where science is entangled with politics (e.g. climate change). But it seems like everyone takes the opportunity to irresponsibly distort science to further their politics, as seen here.
Frankly, I’m freaked out by planting a human immune system into mice in order to drive hypermutation, to extract a gene that you then place into an immortal line of hamster ovary cells to produce a crafted protein. I’m sure when somebody makes a movie based on this, it won’t be anything other than dystopic.

Cliché: Security through obscurity (yet again)

Post Syndicated from Robert Graham original https://blog.erratasec.com/2020/09/cliche-security-through-obscurity-yet.html

Infosec is a largely non-technical field. People learn a topic only as far as they need to regurgitate the right answer on a certification test. Over time, they start to believe misconceptions about that topic that they never learned. Eventually, these misconceptions displace the original concept in the community.

A good demonstration is this discussion of the “security through obscurity fallacy”. The top rated comment makes the claim this fallacy means “if your only security is obscurity, it’s bad”. Wikipedia substantiates this, claiming experts advise that “obscurity should never be the only security mechanism”.

Nope, nope, nope, nope, nope. It’s the very opposite of what you’re supposed to understand. Obscurity has problems, always, even if it’s just an additional layer in your “defense in depth”. The entire point of the fallacy is to counteract people’s instinct to suppress information. The effort has failed. Instead, people have persevered in believing that obscurity is good, and that this entire conversation is only about specific types of obscurity being bad.

Hypothetical: non-standard SSH

The above discussion mentions running SSH on a non-standard port, such as 7837 instead of 22, as a hypothetical example.

Let’s continue this hypothetical. You do this. Then an 0day is discovered, and a worm infecting SSH spreads throughout the Internet. This is exactly the sort of thing you were protecting against with your obscurity.

Yet, the outcome isn’t what you expect. Instead, you find that all your systems running SSH on the standard port 22 remain uninfected, and that the only infections were of systems running SSH on port 7837. How could this happen?

The (hypothetical) reason is that your organization immediately put a filter for port 22 on the firewalls, scanned the network for all SSH servers, and patched the ones they found. At the same time, the worm ran automated Shodan scripts and masscan, and thus was able to discover the non-standard ports nearly instantaneously.
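To see why the non-standard port hides nothing, consider that an SSH server announces itself with an “SSH-” banner the moment you connect, no matter which port it listens on. Here is a minimal sketch (not the worm’s actual code; the address is a placeholder):

    # a minimal sketch: find SSH on any port by reading the banner
    import socket

    def find_ssh(host, ports=range(1, 65536), timeout=0.5):
        hits = []
        for port in ports:
            try:
                with socket.create_connection((host, port), timeout=timeout) as s:
                    if s.recv(32).startswith(b"SSH-"):
                        hits.append(port)
            except OSError:
                continue
        return hits

    print(find_ssh("192.0.2.10"))   # reports port 7837 just as easily as port 22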

Thus your cleverness made things worse, not better.

Other phrases

This fallacy has become such a cliche that we should no longer use it. Let’s use other phrases to communicate the concept. These phrases would be:

  • attackers can discover obscured details far better than you think, meaning, obscurity is not as beneficial as you think
  • defenders are hindered by obscured details, meaning, there’s a greater cost to obscurity than you think
  • we can build secure things that don’t depend upon obscurity
  • it’s bad to suppress information that you think would help attackers
  • just because there’s “obscurity” involved doesn’t mean this principle can be invoked
Obscurity less beneficial, more harmful than you think

My hypothetical SSH example demonstrates the first two points. Your instinct is to believe that adding obscurity made life harder for the attackers, and that it had no impact on defenders. The reality is that hackers were far better than you anticipated at finding unusual ports. And at the same time, you underestimated how this would impact defenders.
It’s true that hiding SSH ports might help. I’m just showing an overly negative hypothetical result to counteract your overly positive result. A robust cost-vs-benefit analysis might show that there is in fact a benefit. But in this case, no such robust argument exists — people are just in love with obscurity. Maybe hiding SSH on non-standard ports is actually good, it’s just that nobody has made an adequate argument for it. Lots of people love the idea, however.
We can secure things

The first two points are themselves based upon a more important idea: we can build secure things. SSH is a secure thing.
The reason people love obscurity is because they have no faith in security. They believe that all security can be broken, and therefore, every little extra bit you can layer on top will help.
In our hypothetical above, SSH is seen as something that will eventually fail due to misconfiguration or an exploitable vulnerability. Thus, adding obscurity helps.
There may be some truth to this, but your solution should be to address this problem specifically. For example, every CISO needs to have an automated script that will cause all the alarms in their home (and mobile) to go off when an SSH CVE happens. Sensitive servers need to have canary accounts that will trigger alarms if they ever get compromised. Planning for an SSH failure is good planning.
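As a sketch of what I mean (assuming the public NVD 2.0 API and Python’s “requests” library; wire the output into whatever paging or alarm system you actually use):

    # a minimal sketch: alert on any OpenSSH CVE published in the last day
    from datetime import datetime, timedelta, timezone
    import requests

    NVD = "https://services.nvd.nist.gov/rest/json/cves/2.0"

    def recent_openssh_cves(days=1):
        end = datetime.now(timezone.utc)
        start = end - timedelta(days=days)
        params = {
            "keywordSearch": "openssh",
            "pubStartDate": start.strftime("%Y-%m-%dT%H:%M:%S.000"),
            "pubEndDate": end.strftime("%Y-%m-%dT%H:%M:%S.000"),
        }
        resp = requests.get(NVD, params=params, timeout=30)
        resp.raise_for_status()
        return [v["cve"]["id"] for v in resp.json().get("vulnerabilities", [])]

    for cve_id in recent_openssh_cves():
        print("ALERT:", cve_id)   # replace with your actual alerting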
But not planning for SSH failure, and instead just doing a bunch of handwaving obscuring things, is a bad strategy.
The fact is that we can rely upon SSH and should rely upon SSH. Yes, an 0day might happen, but that, too, should be addressed with known effective solutions, such as tracking CVEs and vulnerability management, not vague things like adding obscurity.
Transparency good, suppression bad

The real point of this discussion isn’t “obscurity” at all, but “transparency”. Transparency is good. And it’s good for security for exactly the same reason it’s good in other areas, such as transparency in government so we can hold politicians accountable. Only through transparency can we improve security.
That was the point of Kerckhoffs’s principle from the 1880s til today: the only trustworthy crypto algorithms are open, public algorithms. Private algorithms are insecure.
It’s the point behind the full-disclosure debate. Companies like Google who fully disclose in 90 days are trustworthy, companies like Oracle who work hard to suppress vuln information are untrustworthy. Companies who give bounties to vuln researchers to publish bugs are trustworthy, those who sue or arrest researchers are untrustworthy.
It’s where security snake oil comes from. Our industry is rife with those who say “trust us … but we can’t reveal details because that would help hackers”. We know this statement to be categorically false. If their system were secure, then transparency would not help hackers. QED: hiding details means the system is fundamentally insecure.
It’s like when an organization claims to store passwords securely, but refuses to tell you the algorithm, because that would reveal information hackers could use. We know this to be false, because if passwords were actually stored securely, knowing the algorithm wouldn’t help hackers.
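A minimal sketch using the “bcrypt” library makes the point: the algorithm, the salt, and the work factor are all right there in the stored hash, and knowing them doesn’t help an attacker recover the password:

    # a minimal sketch: public algorithm, secure storage
    import bcrypt

    stored = bcrypt.hashpw(b"correct horse battery staple", bcrypt.gensalt())
    print(stored)                                                   # safe to store and to describe publicly
    print(bcrypt.checkpw(b"correct horse battery staple", stored))  # True
    print(bcrypt.checkpw(b"wrong guess", stored))                   # False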
Instead of saying the “security through obscurity fallacy” we should instead talk about the “security through suppression fallacy”, or simply say “security comes from transparency”.
This doesn’t apply to all obscurity

This leads to my last point: that just because “obscurity” is happening doesn’t mean we can automatically apply this concept.
Closed-source code is a good example. Why won’t the company share their source code? If they say “because it helps hackers”, then that’s a clear violation of this principle. If they say “because trade secrets”, then it’s not a violation of this principle. They aren’t saying obscurity is needed for security, they are saying obscurity is needed because they don’t want people copying their ideas.
We can still say that the security of closed-source is worse than open-source, because it usually is. The issues are clearly related. It’s simply that the vendor isn’t, in this hypothetical, violating the fallacy by claiming closed-source means their code is more secure.
The same is true in the blogpost above of adding decoy cars to a presidential motorcade. I guess you could use the word “obscurity” here, but it has nothing to do with our principle under discussion. For one thing, we aren’t talking about “suppressing” information. For another thing, presidential motorcades are inherently insecure — this isn’t a crypto algorithm or a service like SSH that can be trusted, it’s a crap system that is inherently insecure. Maybe handwaving with half-assed solutions, like varying travel routes, cellphone jammers to block IEDs, and using decoy cars, is on the whole the best compromise for a bad situation.
Thus, stop invoking this principle every time “obscurity” happens. This just wears out the principle and breeds misunderstanding for the times when we really do need it.
Conclusion

The point of this blogpost is unwinding misconceptions. A couple years from now, I’m likely to write yet another blogpost on this subject, as I discover yet new misconceptions people have developed. I’m rather shocked at this new notion that everyone suddenly believes, that “obscurity” is bad as the only control, but good when added as a layer in a defense-in-depth situation. No, no, no, no … just no.
These misconceptions happen for good reasons. One of which is that we sometimes forget our underlying assumptions, and that people might not share these assumptions.
For example, when we look at Kerckhoffs’ Principle from the 1880s, the underlying assumption is that we can have a crypto algorithm that works, like AES or Salsa20, that can’t be broken. Therefore, adding obscurity on top of this adds no security. But when that assumption fails, such as a presidential motorcade that’s always inherently insecure (just lob a missile at them), then the argument no longer applies.
When teaching this principle, the problem we have is that a lot of people, especially students new to the field, are working from the assumption that everything is broken and that no security can be relied upon. Thus, adding layers of obscurity always seems like a good idea.
Thus, when I say that “security through obscurity is bad”, I’m really using this cliche to express some underlying idea. Am I talking about my political ideas of full-disclosure or open-source? Am I talking about vendor snake-oil? Am I talking about dealing with newbies who prefer unnecessary and ineffective solutions over ones proven to work? It’s hard to tell.
The original discussion linked on Hacker News, though, discussed none of these things. Going through the top ranked responses seemed like a list of people who just heard about the thing yesterday and wanted to give their uninformed hot take on what they think these words mean.
Case Study: ASLR (Address Space Layout Randomization) (Update)
After posting, some have discussed on Twitter whether ASLR is just “security through obscurity”. Let’s discuss this.
The entire point of this post is to raise the level of discussion beyond glibly repeating a cliché. If you have an argument to be made about ASLR, then make that argument without resorting to the cliché. If you think the cost-vs-benefit analysis means ASLR is not worth it, then argue the cost-vs-benefit tradeoff.
The original cliché (from Kerckhoffs’s principles) wasn’t about whether the algorithm added obscurity, but whether the algorithm itself is obscure.
In other words, if Microsoft was claiming Windows is secure because of ASLR, but that they couldn’t divulge details how it worked because this would help hackers, then you have a “security through obscurity” argument. Only in this instance can you invoke the cliché and be assured you are doing so correctly.
I suppose you could argue that ASLR is only “obscurity”, that it provides no “security”. That’s certainly true sometimes. But it’s false other times. ASLR completely blocks certain classes of attacks on well-randomized 64-bit systems. It’s such a compelling advantage that it’s now a standard part of all 64-bit operating systems. Whatever ASLR does involving “obscurity”, it clearly adds “security”.
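You can see the randomization for yourself with a minimal sketch (Linux-only, and assuming ASLR hasn’t been disabled): run it twice, and the stack and library addresses differ between runs:

    # a minimal sketch: print the stack and libc mappings of this process
    with open("/proc/self/maps") as maps:
        for line in maps:
            if "[stack]" in line or "libc" in line:
                print(line.split()[0], line.split()[-1])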
In short, just because there’s “obscurity” involved doesn’t mean the cliché “security through obscurity” can be invoked.

How CEOs think

Post Syndicated from Robert Graham original https://blog.erratasec.com/2020/07/how-ceos-think.html

Recently, Twitter was hacked. CEOs who read about this in the news ask how they can protect themselves from similar threats. The following tweet expresses our frustration with CEOs, that they don’t listen to their own people, but instead want to buy a magic pill (a product) or listen to outside consultants (like Gartner). In this post, I describe how CEOs actually think.

The only thing more broken than how CEOs view cybersecurity is how cybersecurity experts view cybersecurity. We have this flawed view that cybersecurity is a moral imperative, that it’s an aim in itself. We are convinced that people are wrong for not taking security seriously. This isn’t true. Security isn’t a moral issue but simple cost vs. benefits, risk vs. rewards. Taking risks is more often the correct answer than adding more security.
Rather than experts dispensing unbiased advice, we’ve become advocates/activists, trying to convince people that they need to do more to secure things. This activism has destroyed our credibility in the boardroom. Nobody thinks we are honest.
Most of our advice is actually internal political battles. CEOs trust outside consultants mostly because outsiders don’t have a stake in internal politics. Thus, the consultant can say the same thing as what you say, but be trusted.
CEOs view cybersecurity the same way they view everything else about building the business, from investment in office buildings, to capital equipment, to HR policies, to marketing programs, to telephone infrastructure, to law firms, to …. everything.
They divide their business into two parts:
  • The first is the part they do well, the thing they are experts at, the things that define who they are as a company, their competitive advantage.
  • The second is everything else, the things they don’t understand.
For the second part, they just want to be average in their industry, or at best, slightly above average. They want their manufacturing costs to be about average. They want the salaries paid to employees to be about average. They want the same video conferencing system as everybody else. Everything outside of core competency is average.
I can’t express this enough: if it’s not their core competency, then they don’t want to excel at it. Excelling at a thing comes with a price. They have to pay people more. They have to find the leaders with proven track records at excelling at it. They have to manage excellence.
This goes all the way to the top. If it’s something the company is going to excel at, then the CEO at the top has to have enough expertise themselves to identify the leaders who can accomplish this goal. The CEO can’t hire an excellent CSO unless they have enough competency to judge the qualifications of the CSO, and enough competency to hold the CSO accountable for the job they are doing.
All this is a tradeoff. A focus of attention on one part of the business means less attention on other parts of the business. If your company excels at cybersecurity, it means not excelling at some other part of the business.
So unless you are a company like Google, whose cybersecurity is a competitive advantage, you don’t want to excel in cybersecurity. You want to be average, or at most, slightly above average. You want to do what your peers are doing.
It doesn’t matter that this costs a lot of money due to data breaches. As long as the cost is no more than your competitors, then you are still competitive in your markets.
This is where Gartner comes in. They are an “analyst” firm. They send analysts to talk to you and your competitors to figure out what all of you are doing, then write up reports about what your industry average is.
Yes, yes, it’s all phrased as “best” practices, but it’s really “average” practices. CEOs don’t want to be the best in their industry at cybersecurity, they all want to be slightly above average.
When things hit the news, like this week’s Twitter hack, CEOs look for a simple product to patch the hole precisely because they don’t want to excel at it. A common cliche in cybersecurity is that “security is not a product, but a process”. But CEOs don’t want a process they have to manage. This would require competent leadership, excelling at cybersecurity, and all the problems with this approach that I describe above. They want to either plug the hole with a quick fix, or let the hole keep leaking. As long as everyone else in their industry has the same problem, it doesn’t need to be fixed.
What CEOs really want to know is “What are our peers doing?”. This is where Gartner comes in, to tell the CEOs what everyone else is doing about the Twitter hack.
It’s not just the Gartners of the world, who are primarily “analysts”, but Big Consulting in general. CEOs listen to cyber consultants from the big accounting companies (e.g. Ernst and Young) and the big tech companies (e.g. IBM). Since the consultants work for a wide variety of clients, they are therefore trusted barometers of what peers are doing in the industry.
They are also trusted because they are outside of internal corporate politics. Outside consultants often end up saying the same thing you do, but are trusted whereas you are not. CEOs listen to the outsiders because they have no hidden political agenda.
There are flaws in how CEOs think here.
One flaw is that “outside” consultants are steered by those skilled at corporate politics. The consultants know which faction hired them, and thus, tilt their “unbiased” advice toward that faction. Having been a consultant myself, it’s the hardest ethical question I face: how do I maintain my own integrity in the face of the client trying to spin/tilt my reports?
The second flaw is that CEOs are measuring their companies against equally conservative peers. All of them resist some innovation that could reduce costs because none of them have tried it yet. Thus, there are obvious things that all the techies can see, and yet the organization resists because none of its peers have tried them yet. Yes, CEOs don’t want to excel at cybersecurity, to be the leader in their industry with the best cybersecurity, but this thinking stops them from being even slightly above average.
The third flaw is that consultants are dumb as rocks. They are just random people who have gone through some training and who don’t have to be responsible for the long term consequences of what they do. They don’t reflect the best practices that the industry is doing so much as the dumbest. Most times an organization hires outside consultants, there are smarter people inside the organization fighting against the dumb things the consultants are doing.
All this means that instead of getting the “average” or “slightly above average” out of these outside consultants, CEOs are getting the “below average”. Their IT (and cybersecurity) is slowly sinking, except for the insiders who fight against this.
Thus, we have the fight the tweet describes above. The CEO has an extraordinarily broken view of cybersecurity.
A case study of this is Maersk being nearly destroyed by notPetya. What we techies could see several years ago is that ransomware has become an “existential risk” to the entire business. I saw a business destroyed by mass ransomware two years before notPetya, so that such things can happen is not a surprise.
What most organizations see is that occasionally a desktop computer here and there gets ransomwared. They simply wipe it and restore from backup. It’s a cost, but a small cost, and not one worth getting concerned about.
The problem they don’t see is the difference between average users getting infected and domain admins. When a domain admin gets infected, then it can take down the entire enterprise. This means all the desktops and all the servers get infected. It means a massive loss of data and operation, as you realize that not everything was backed up, and that not all servers can be restored to their same operating condition.
That’s what happened to Maersk — all their computers got infected because a domain admin got infected. EVERYTHING got infected, except for one server in Africa that happened to be turned off at the time. That’s what happened to the cities of Atlanta and Baltimore. That’s what’s happened to numerous companies that haven’t hit the news.
The solution is fairly simple. Microsoft has good guidance on this. It means changing how “domain admin” works so that one person doesn’t hold the keys that’ll wreck the kingdom. Lots of organizations follow Microsoft’s advice and are fairly secure against mass ransomware. Yet still the average for most conservative industries is to not follow this advice — none of their peers have, so why be the first? They are all basically waiting for one of their peers to be destroyed by ransomware, hoping it’s not them, before they take action.
So as an average techy in the industry, I appreciate the above tweet. CEOs and their reliance on magic pills and outside consultants is a pox on our industry. At the same time, their thinking is sound from the point of view of running a business. To fix this, we have to understand their thinking, which hopefully I’ve communicated in this document.
As for CEOs reading this document, well, learn to listen to your techies. Yes, they are also broken in their thinking. But at the same time, they can help you be slightly above average for your industry, and make it so you are the last to be mass ransomwared in your industry rather than the first. If you want to know more about this Twitter incident, then find a techy in your own organization to explain it to you rather than an outside consultant or product vendor.

In defense of open debate

Post Syndicated from Robert Graham original https://blog.erratasec.com/2020/07/in-defense-of-open-debate.html

Recently, Harper’s published a Letter on Justice and Open Debate. It’s a rather boring defense of liberalism and the norm of tolerating differing points of view. Mike Masnick wrote a rebuttal on Techdirt. In this post, I’m going to rebut his rebuttal, writing a counter-counter-argument.

The Letter said that the norms of liberalism tolerate disagreement, and that these norms are under attack by increasing illiberalism on both sides, both the left and the right.

My point is this: Masnick avoids rebutting the letter. He’s recycling his arguments against right-wingers who want their speech coddled, rather than addressing the concerns of (mostly) left-wingers worried about the fanaticism on their own side.

Free speech
Masnick mentions “free speech” 19 times in his rebuttal — but the term does not appear in the Harper’s letter, not even once. This demonstrates my thesis that his rebuttal misses the point.

The term “free speech” has lost its meaning. It’s no longer useful for such conversations.

Left-wingers want media sites like Facebook, YouTube, the New York Times to remove “bad” speech, like right-wing “misinformation”. But, as we’ve been taught, censoring speech is bad. Therefore, “censoring free speech” has to be redefined so as to not include the above effort.

The redefinition claims that the term “free speech” now applies to governments, but not private organizations, that stopping free speech happens only when state power or the courts are involved. In other words, “free speech” is supposed to equate with the “First Amendment”, which really does only apply to government (“Congress shall make no law … abridging the freedom of speech”).

That this is false is demonstrated by things like the murders of Charlie Hebdo cartoonists for depicting Muhammad. We all agree this incident is a “free speech” issue, but no government was involved.

Right-wingers agree to this new definition, sort of. In much the same way that left-wingers want to narrow “free speech” to only mean the First Amendment, right-wingers want to expand the “First Amendment” to mean protecting “free speech” against interference by both government and private platforms. They argue that platforms like Facebook have become so pervasive that they have become the “public square”, and thus occupy the same space as government. They therefore want regulations that coddle their speech, preventing their content from being removed.

The term “free speech” is therefore no longer useful in explaining the argument because it has become the argument.

The Letter avoids this play on words. It’s not talking about “free speech”, but the “norms of open debate and toleration of differences”. It claims first of all that we have a liberal tradition where we tolerate differences of opinion and that we debate these opinions openly. It claims secondly that these norms are weakening, claiming “intolerant climate that has set in on all sides”.

In other words, those who attacked the NYTimes for publishing the Tom Cotton op-ed are criticized as demonstrating illiberalism and intolerance. This has nothing to do with whatever arguments you have about “free speech”.


Private platforms


Masnick’s free speech argument continues that you can’t force speech upon private platforms like the New York Times. They have the freedom to make their own editorial decisions about what to publish, choosing some things, rejecting others.

It’s a good argument, but one that targets the arguments made by right-wingers hostile to the New York Times, and not arguments made by the left-wing signers of the Letter. The Letter doesn’t attack editorial discretion, but defends it.

Consider the recent Tweet from Pearl Jam:

This tweet is a lie. There were no “laws” back in 1992 that impacted Pearl Jam videos. It was MTV who censored them.

Then as now, cable TV channels like MTV are free to put most anything on the air — because it isn’t the public “air” but a private “wire”. That tweet is not historically accurate.

Rather than the government, it was private parental groups and Christian churches pressuring MTV to alter or remove videos, like those from Pearl Jam.

According to Masnick, the word “censor” is wrong here. He says that if MTV chooses not to play a video (as it does every damn day) then it’s not censorship, it’s just editorial discretion.

This is a joke. Masnick actually said this about the NYTimes. I replaced “NYTimes” with “MTV” to demonstrate the real point: whether it’s “censorship” depends upon whether it’s the Christian churches attacking your speech, or you attacking the speech of Christian churches.

Even if you refuse to call such acts “censorship” now, we certainly called them “censorship” then throughout the 80s and 90s. This is profoundly portrayed in an episode of WKRP in Cincinnati that deals with a Christian group pressuring advertisers to force the radio station to remove songs from their playlist, like John Lennon’s “Imagine” for being blasphemy. The radio station’s program director exclaims “It’s called censorship”.

Those guilty of censorship in these situations are not WKRP, MTV, or the NYTimes. It’s not them who want the content removed. It’s not their editors who made this editorial decision. Those guilty of censorship are outsiders who pressured these organizations to make editorial decisions they would not have made on their own.

In the Cotton op-ed incident, those guilty of censorship are employees complaining they feel threatened, outsiders calling it inflammatory, readers canceling their subscriptions, and companies pulling advertising. The NYTimes’ sin is not censorship itself, but being too weak to stand up to censorship. Their sin is making it clear that they will easily cave to controversy. If the NYTimes were the bastion of liberalism that it claims, impartially covering news “without fear or favor”, then it needs to stop responding to fear.

In other words, the "forces of illiberalism" mentioned in the Letter are not the media platforms like WKRP, the NYTimes, and Facebook. Instead, the "forces of illiberalism" are those pressuring platforms to censor content, like Christian groups and left-wing fanatics.

My point here is that every time those like Masnick claim that private organizations have the right to editorial discretion, they miss the point that it’s not their own discretion. They didn’t choose to censor content, they were pressured into it. Nobody is saying that WKRP doesn’t have the right to choose which songs to play, they are saying that it should be WKRP’s choice and not Christian groups.

Consequences

Masnick repeats the common argument that free speech does not mean freedom from consequences. The Letter is not asking for freedom from consequences, but freedom from punishment.

Parents teach their kids not to touch a hot stove because the consequences are that they’ll get burned. They also tell their kids not to get into fights at school because the consequences will be getting grounded for a week.

But the second thing is not a “consequence” so much as a “punishment”. In the first case, the kid is responsible for the consequence of getting burned. In the second, the parent is responsible for grounding the child.

The same applies to speech. Consequences are when people stop listening to you, punishment is when you lose your job.

Consider Mel Gibson’s antisemitic comments back in 2006 that damaged his career. Yes, the consequences of such speech means that many people will stop wanting to see his movies. It’s right and proper that we speak out about what a bad person he is. That’s not the issue. The issue is whether he should be blacklisted by Hollywood, whether people should boycott studios to prevent Gibson movies from being produced, whether they should be removed from catalogs so that people who still want to see Gibson movies are unable to do so.

One is consequences that Gibson is responsible for, the other is punishment the forces of illiberalism are responsible for.

Even then, we can still be concerned about natural consequences. Dangerous children's toys like lawn darts are banned because the consequences are bad, and we want to avoid those consequences. Personally, I'm less likely to watch a Mel Gibson movie knowing that he's such an antisemite, but at the same time, I don't want him losing a career over speech.

Thus, while we should certainly not be defending his antisemitic speech, neither should we cause more than the natural consequences in order to punish such speech.

Masnick is working forward from the illiberal right-wingers who want their speech coddled, protected from the least consequence, such as being blocked on Twitter. From that perspective, his argument is exactly right: the right-wing arguments are evil. But the Letter is working backward from incidents like Charlie Hebdo, where cartoonists were murdered as a consequence of drawing cartoons depicting Mohamed. Nobody is trying to excuse that incident as being "consequences of free speech" — we deplore those consequences. We should deplore the consequences of losing one's job over speech as well.

Bizarre

At the heart of liberalism is the principle that reasonable people with good intentions can still disagree over matters of substance.

That means Tom Cotton’s op-ed was well-meaning and reasonable. This is not saying that Cotton was right (I agree that he wasn’t), only that he was reasonable. This isn’t some fringe matter, but one of substance with reasonable people on both sides.

The cancel culture doesn’t believe it was reasonable. They described it as inexplicable, threatening, offensive, inflammatory, dangerous, naive, reprehensible, hurtful, abusive, irresponsible, shameful, hateful, and so on.

That’s the debate. One side sees opinions it disagrees with as being reasonable, the other side sees disagreement as illegitimate.

Masnick doesn’t seem to get this. His piece describes the Letter as “bizarre” in the first sentence. Later, he goes off on a tangent about the word “censorious” to claim the Letter was ill intentioned.

I’m not accusing Masnick of being part of cancel culture here, but simply pointing out the irony. The Letter says “tolerate opposing arguments as being reasonable”, and he counters with “that argument is not reasonable”.

Privilege of Famous Writers

The title of Masnick’s piece is “Harper’s Gives Prestigious Platform To Famous Writers So They Can Whine About Being Silenced”.

This misunderstands what’s going on. These writers are speaking out precisely because they are the ones who can do so without getting silenced. They don’t seek protection of their own speech, they seek protection for open debate in general.

The backlash over signing this innocuous Letter proves their point. The careers of less famous academics and journalists would not survive signing onto this letter. Rather than helping the careers of the signers, it’s hurting them.

The issue affects more than just writers; everyone is under the thumb of illiberalism. Few can express their honest opinion without a fellow employee at their company complaining to HR about how it makes them feel unwelcome or threatened, thus forcing them into "diversity training" and silence.

This sounds like a caricature, but it’s exactly what happened in the Tom Cotton piece. Cotton said federal troops should subdue rioters. The NYTimes writers union deliberately distorted this, claiming Cotton said to subdue protesters, and that this made them feel threatened. It’s a pattern that repeats itself in HR departments around the country (well, at least the high-tech companies where my friends work, I can’t speak for other companies).

Conclusion

If we ignore this debate over "free speech", we see that tolerating opposing views is nonetheless still an essential characteristic of liberalism, and has been for centuries. We can show this by going back to the start of liberalism, such as Voltaire's comments on religious tolerance in the 1700s. We can see it today in fields like journalism and academic inquiry.

In 1896, the New York Times said that it desired to be:

“a forum for the consideration of all questions of public importance, and to that end to invite intelligent discussion from all shades of opinion”

Inviting Tom Cotton to make his case was liberalism. The attack made by the writers' union on the piece was illiberalism.

Even if the problem of people being silenced isn’t as big as the Letter claims, the rise of illiberalism and intolerance of disagreement is still a huge deal. Even if you don’t care about what famous writers have to say, look at your own social media feed. I don’t know about yours, but my Twitter feed is a cesspool of intolerance and illiberalism.

Apple ARM Mac rumors

Post Syndicated from Robert Graham original https://blog.erratasec.com/2020/06/apple-arm-mac-rumors.html

The latest rumor is that Apple is going to announce Macintoshes based on ARM processors at their developer conference. I thought I’d write up some perspectives on this.

It’s different this time

This would be Apple’s fourth transition. Their original Macintoshes in 1984 used Motorola 68000 microprocessors. They moved to IBM’s PowerPC in 1994, then to Intel’s x86 in 2005.

However, this history is almost certainly the wrong way to look at the situation. In those days, Apple had little choice. Each transition happened because the processor they were using was failing to keep up with technological change. They had no choice but to move to a new processor.

This no longer applies. Intel’s x86 is competitive on both speed and power efficiency. It’s not going away. If Apple transitions away from x86, they’ll still be competing against x86-based computers.

Other companies have chosen to adopt both x86 and ARM, rather than one or the other. Microsoft’s “Surface Pro” laptops come in either x86 or ARM versions. Amazon’s AWS cloud servers come in either x86 or ARM versions. Google’s Chromebooks come in either x86 or ARM versions.

Instead of ARM replacing x86, Apple may be attempting to provide both as options, possibly an ARM CPU for cheaper systems and an x86 for more expensive and more powerful systems.

ARM isn’t more power efficient than x86

Every news story, every single one, is going to repeat the claim that ARM chips are more power efficient than Intel’s x86 chips. Some will claim it’s because they are RISC whereas Intel is CISC.

This isn't true. RISC vs. CISC was a principle in the 1980s when chips were so small that instruction set differences meant architectural differences. Since 1995, with "out-of-order" processors, the instruction set has been largely separated from the underlying architecture. Instruction set differences account for at most about 5% of the difference in processor performance or efficiency.

Mobile chips consume less power by simply being slower. When you scale mobile ARM CPUs up to desktop speeds, they consume the same power as desktops. Conversely, when you scale Intel x86 processors down to mobile power consumption levels, they are just as slow. You can test this yourself by comparing Intel’s mobile-oriented “Atom” processor against ARM processors in the Raspberry Pi.
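
To make this comparison concrete, here is a crude single-core benchmark sketch in Python — not a rigorous methodology, just something you could run unchanged on an Atom-based machine and on a Raspberry Pi and compare the timings. The workload and iteration counts are arbitrary choices of mine.

    # rough_bench.py -- a crude, CPU-bound comparison; results are only relative
    import timeit

    def workload():
        # integer-heavy busy loop; any CPU-bound function would do
        total = 0
        for i in range(1_000_000):
            total += i * i
        return total

    # take the best of five runs to reduce noise from other processes
    best = min(timeit.repeat(workload, number=1, repeat=5))
    print(f"best of 5 runs: {best:.3f} seconds")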

Moreover, the CPU accounts for only a small part of overall power consumption. Mobile platforms care more about the graphics processor or video acceleration than they do the CPU. Large differences in CPU efficiency mean small differences in overall platform efficiency.

Apple certainly balances its chips so they work better in phones than an Intel x86 would, but these tradeoffs mean they’d work worse in laptops.

While overall performance and efficiency will be similar, specific applications will perform differently. Thus, when ARM Macintoshes arrive, people will choose just the right benchmarks to "prove" their inherent superiority. It won't be true, but everyone will believe it to be true.



No longer a desktop company

Venture capitalist Mary Meeker produces yearly reports on market trends. The desktop computer market has been stagnant for over a decade in the face of mobile growth. The Macintosh is only 10% of Apple’s business — so little that they could abandon the business without noticing a difference.

This means investing in the Macintosh business is a poor business decision. Such investment isn’t going to produce growth. Investing in a major transition from x86 to ARM is therefore stupid — it’ll cost a lot of money without generating any return.

In particular, despite having a mobile CPU for their iPhone, they still don't have a CPU optimized for laptops and desktops. The Macintosh market is just too small to fund the investment required. Indeed, that's why Apple had to abandon the 68000 and PowerPC processors before: their market was just too small to fund the development needed to keep those processors competitive.

But there’s another way to look at it. Instead of thinking of this transition in terms of how it helps the Macintosh market, think in terms of how it helps the iPhone market.

A big reason for Intel’s success against all its competitors is the fact that it’s what developers use. I can use my $1000 laptop running Intel’s “Ice Lake” processor to optimize AVX-512 number crunching code, then deploy on a billion dollar supercomputer.

A chronic problem for competing processors has always been that developers couldn’t develop code on them. As a developer, I simply don’t have access to computers running IBM’s POWER processors. Thus, I can’t optimize my code for them.

Developers writing code for ARM mobile phones, either Androids or iPhones, still use x86 computers to develop the code. They then “deploy” that code to mobile phones. This is cumbersome and only acceptable because developers are accustomed to the limitation.

But if Apple ships a Macbook based on the same ARM processor as their iPhone, then this will change. Every developer in the world will switch. This will make development for the iPhone cheaper, and software will be better optimized. Heck, even Android developers will want to switch to using Macbooks as their development platforms.

Another marketing decision is to simply fold the two together in the long run, such that iOS and macOS become the same operating system. Nobody knows how to do this yet, as the two paradigms are fundamentally different. While Apple may not have a specific strategy on how to get there, they know that making a common hardware platform would be one step in that direction, so a single app could successfully run on both platforms.

Thus, maybe their long term goal isn’t so much to transition Macintoshes to ARM so much as make their iPads and Macbooks indistinguishable, such that adding a bluetooth keyboard to an iPad makes it a Macintosh, and removing the attached keyboard from a Macbook makes it into an iPad.

All tech companies

The model we have is that people buy computers from vendors like Dell in the same way they buy cars from companies like Ford.

This is not how major tech companies work. Companies like Dell don't build computers so much as assemble them from commodity parts. Anybody can assemble their own computers just as easily as Dell. So that's what major companies do.

Such customization goes further. Instead of an off-the-shelf operating system, major tech companies create their own, like Google’s Android or Apple’s macOS. Even Amazon has their own version of Linux.

Major tech companies go even further. They design their own programming languages, like Apple’s Swift or Google’s Golang. They build entire “stacks” of underlying technologies instead of using off-the-shelf software.

Building their own CPUs is just the next logical step.

It’s made possible by the change in how chips are made. In the old days, chip designers were the same as chip manufacturers. These days, that’s rare. Intel is pretty much the last major company that does both.

Moreover, instead of designing a complete chip, companies instead design subcomponents. An ARM CPU is just one component. A tech company can grab the CPU design from ARM and combine it with other components, like crypto accelerators, machine learning, memory controllers, I/O controllers, and so on, to create a perfect chip for their environment. They then go to a company like TSMC or GlobalFoundries to fabricate the chip.

For example, Amazon’s $10,000 Graviton 1 server and the $35 Raspberry Pi 4 both use the ARM Cortex A72 microprocessor, but on radically different chips with different capabilities. My own microbenchmarks show that the CPUs run at the same speed, but macrobenchmarks running things like databases and webservers show vastly different performance, because the rest of the chip outside the CPU cores are different.

Apple is custom ARM

When transitioning from one CPU to the next, Apple computers have been able to “emulate” the older system, running old code, though much slower.

ARM processors have some problems when trying to emulate x86. One big problem is multithreaded synchronization: x86 guarantees stronger memory ordering than ARM, a subtle difference familiar to software developers, such that multicore code written for x86 sometimes has bugs when recompiled for ARM processors.

Apple’s advantage is that it doesn’t simply license ARM’s designs, but instead designs its own ARM-compatible processors. They are free to add features that make emulation easier, such as x86-style synchronization among threads. Thus, while x86 emulation is difficult for their competitors, as seen on Microsoft’s Surface Pro notebooks, it’ll likely be easier for Apple.

This is especially a concern since ARM won’t be faster. In the previous three CPU changes, Apple went to a much faster CPU. Thus, the slowdown in older apps was compensated by the speedup in new/updated apps. That’s not going to happen this time around, as everything will be slower: a ton slower for emulated apps and slightly slower for ARM apps.

ARM is doing unto Intel

In the beginning, there were many CPU makers, including high-end systems like MIPS, SPARC, PA-RISC, and so on. All the high-end CPUs disappeared (well, essentially).

The reason came down to the fact that you often ended up spending 10x the price for a CPU that was only 20% faster. In order to justify the huge cost of development, niche CPU vendors had to charge insanely high prices.

Moreover, Intel would come out with a faster CPU next year that would match yours in speed, while it took you several more years to produce your next generation. Thus, even if your next model was faster than Intel's when it launched, in the stretch right before that you were slower. On average, year after year, you didn't really provide any benefit.

Thus, Intel processors moved from low-end desktops to workstations to servers to supercomputers, pushing every competing architecture aside.

ARM is now doing the same thing to Intel that Intel did to its competitors.

ARM processors start at the insanely low end. Your computer likely already has a few ARM processors inside, even if it's an Intel computer running Windows. The hard drive probably has one. The WiFi chip probably has one. The fingerprint reader probably has one. Apple puts an ARM-based security chip in all its laptops.

As mobile phones started getting more features, vendors put ARM processors in them. They were incredibly slow, but slow meant they consumed little power. As chip technology got more efficient, batteries held more charge, and consumers became willing to carry around large batteries, ARM processors have gotten steadily faster.

To the point where they compete with Intel.

Now servers and even supercomputers are being built from ARM processors.

The enormous volume of ARM processors means that ARM can put resources behind new designs. Each new generation of ARM Cortex processors gets closer and closer to Intel’s on performance.

Conclusion

ARM is certainly becoming a competitor to Intel. Yet, the market is littered with the corpses of companies who tried to ride this wave and failed. Just Google “ARM server” over the last 10 years to see all the glowing stories of some company releasing an exciting new design only to go out of business a year later. While ARM can be “competitive” in terms of sometimes matching Intel features, it really has no “compelling” feature that makes it better, that makes it worth switching. The supposed power-efficiency benefit is just a myth that never pans out in reality.

Apple could easily make a desktop or laptop based on its own ARM CPU found in the iPhone. The only reason it hasn’t done so already is because of marketing. If they just produce two notebooks like Microsoft, they leave the customer confused as to which one to buy, which leads to many customers buying neither.

One market differentiation is to replace their entire line, making a complete break with the past as they've done three times before. Another would be to target something like the education market, or the thin-and-light market, like their previous 12-inch Macbook, while still providing high-end systems based on beefy Intel processors and graphics accelerators from nVidia and AMD.

What is Boolean?

Post Syndicated from Robert Graham original https://blog.erratasec.com/2020/05/what-is-boolean.html

My mother asks the following question, so I’m writing up a blogpost in response.

I am watching a George Boole bio on Prime but still don’t get it.

I started watching the first few minutes of the "Genius of George Boole" on Amazon Prime, and it was garbage. It's the typical content that's been dumbed down so much that anything useful has been removed. It's the typical sort of hero-worshipping biography that credits its subject with everything it plausibly can.


Boole was a mathematician who tried to apply the concepts of math to statements of "true" and "false", rather than numbers like 1, 2, 3, 4, … He also did a lot of other mathematical work, but it's this work that continues to bear his name ("boolean logic" or "boolean algebra").

But what we know of today as “boolean algebra” was really developed by others. They named it after him, but really all the important stuff was developed later. Moreover, the “1” and “0” of binary computers aren’t precisely the same thing as the “true” and “false” of boolean algebra, though there is considerable overlap.

Computers are built from things called “transistors” which act as tiny switches, able to turn “on” or “off”. Thus, we have the same two-value system as “true” and “false”, or “1” and “0”.

Computers represent any number using "base two" instead of the "base ten" we are accustomed to. The "base" of a number representation is the number of distinct digits. The number of digits we use is purely arbitrary. The Babylonians had a base 60 system, computers use base 2, but the math we humans use is base 10, probably because we have 10 fingers.

We use a "positional" system. When we run out of digits, we put a '1' on the left side and start over again. Thus, "10" always represents the base itself. If it's base 8, then once you run out of the first eight digits 01234567, you wrap around and start again with "10", which is the value of eight in base 8.

This is in contrast to something like the non-positional Roman numerals, which had symbols for ten (X), hundred (C), and thousand (M).

A binary number is a string of 1s and 0s in base two. The number fifty-three, in binary, is 110101.
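
If you want to check this yourself, Python's built-in base conversions make the point; this is just an illustrative snippet of mine, not anything from the documentary.

    # positional notation: the same idea in different bases
    print(bin(53))            # '0b110101' -- fifty-three in base two
    print(int("110101", 2))   # 53 -- converting the binary string back
    print(int("10", 8))       # 8 -- "10" in base eight is the value eight
    print(int("10", 2))       # 2 -- "10" in base two is the value two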

Computers can perform normal arithmetic computations on these numbers, like addition (+), subtraction (−), multiplication (×), and division (÷).

But there are also binary arithmetic operations we can do on them, like not (¬), or (∨), xor (⊕), and (∧), shift-left («), and shift-right (»). That's what we refer to when we say "boolean" arithmetic.

Let's take a look at the and operation. The and operator means if both the left "and" right numbers are 1, then the result is 1, but 0 otherwise. In other words:

 0 ∧ 0 = 0
 0 ∧ 1 = 0
 1 ∧ 0 = 0
 1 ∧ 1 = 1

There are similar “truth tables” for the other operators.

While the simplest form of such operators are on individual bits, they are more often applied to larger numbers containing many bits, many base two binary digits. For example, we might have two 8-bit numbers and apply the and operator:

 01011100
       ∧
 11001101
       =
 01001100

The result is obtained by applying and to each set of matching bits in both numbers. Both numbers have a ‘1’ as the second bit from the left, so the final result has a ‘1’ in that position.
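
Here's the same example in Python, where these operators are spelled &, |, ^, ~, <<, and >> rather than the mathematical symbols — a small sketch to show the bit-by-bit behavior.

    a = 0b01011100
    b = 0b11001101

    result = a & b                   # bitwise "and" of the two 8-bit numbers
    print(format(result, "08b"))     # 01001100 -- matches the example above

    # the other boolean operators work the same way, bit by bit
    print(format(a | b, "08b"))      # or  -> 11011101
    print(format(a ^ b, "08b"))      # xor -> 10010001
    print(format(a << 1, "08b"))     # shift-left -> 10111000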

Normal arithmetic computations are built from these binary operations. You can show how a sequence of and, or, and xor operations combine to add two numbers. The entire computer chip is built from sequences of these binary operations — billions and billions of them.

Conclusion

Modern computers are based on binary logic. This is often named after George Boole, "boolean logic", because he did some work in this area, but it's foolish to give him more credit than he deserves. The above Amazon Prime documentary is typical mass-market fodder that gives its subject a truly astounding amount of credit for everything it could plausibly tie to him.

Securing work-at-home apps

Post Syndicated from Robert Graham original https://blog.erratasec.com/2020/05/securing-work-at-home-apps.html

In today’s post, I answer the following question:

Our customer’s employees are now using our corporate application while working from home. They are concerned about security, protecting their trade secrets. What security feature can we add for these customers?

The tl;dr answer is this: don’t add gimmicky features, but instead, take this opportunity to do security things you should already be doing, starting with a “vulnerability disclosure program” or “vuln program”.

Gimmicks

First of all, I'd like to discourage you from adding security gimmicks to your product. You are no more likely to come up with an exciting new security feature on your own than you are to come up with a miracle cure for covid. Your sales and marketing people may get excited about the feature, and they may get the customer excited about it too, but the excitement won't last.

Eventually, the customer’s IT and cybersecurity teams will be brought in. They’ll quickly identify your gimmick as snake oil, and you’ll have made an enemy of them. They are already involved in securing the server side, the work-at-home desktop, the VPN, and all the other network essentials. You don’t want them as your enemy, you want them as your friend. You don’t want to send your salesperson into the maw of a technical meeting at the customer’s site trying to defend the gimmick.

You want to take the opposite approach: do something that the decision maker on the customer side won’t necessarily understand, but which their IT/cybersecurity people will get excited about. You want them in the background as your champion rather than as your opposition.

Vulnerability disclosure program

To accomplish the goal described above, the thing you want is known as a vulnerability disclosure program. If there's one thing the entire cybersecurity industry agrees on (other than hating the term cybersecurity, preferring "infosec" instead), it's that you need a vulnerability disclosure program. Everything else you might want to do to add security to your product comes after you have this thing.

Your product has security bugs, known as vulnerabilities. This is true of everyone, no matter how good you are. Apple, Microsoft, and Google employ the brightest minds in cybersecurity and they have vulnerabilities. Every month you update their products with the latest fixes for these vulnerabilities. I just bought a new MacBook Air and it’s already telling me I need to update the operating system to fix the bugs found after it shipped.

These bugs come mostly from outsiders. These companies have internal people searching for such bugs, as well as consultants, and do a good job quietly fixing what they find. But this goes only so far. Outsiders have a wider set of skills and perspectives than the companies could ever hope to assemble themselves, so they find things that the companies miss.

These outsiders are often not customers.

This has been a chronic problem throughout the history of computers. Somebody calls up your support line and tells you there’s an obvious bug that hackers can easily exploit. The customer support representative then ignores this because they aren’t a customer. It’s foolish wasting time adding features to a product that no customer is asking for.

But then this bug leaks out to the public, hackers widely exploit it damaging customers, and angry customers now demand why you did nothing to fix the bug despite having been notified about it.

The problem here is that nobody has the job of responding to such problems. The reason your company dropped the ball was that nobody was assigned to pick it up. All a vulnerability disclosure program means is that at least one person within the company has the responsibility of dealing with it.

How to set up vulnerability disclosure program

The process is pretty simple.

First of all, assign somebody to be responsible for it. This could be somebody in engineering, in project management, or in tech support. There is management work involved, opening tickets, tracking them, and closing them, but also at some point, a technical person needs to get involved to analyze the problem.

Second, figure out how the public contacts the company. The standard way is to set up two email addresses, "security@" and "secure@" at your domain (pointing to the same inbox). These are the standard addresses that most cybersecurity researchers will attempt to use when reporting a vulnerability to a company. They should point to a mailbox checked by the person assigned in the first step above. A web form for submitting information can also be used. In any case, googling "vulnerability disclosure [your company name]" should yield a webpage describing how to submit vulnerability information — just like it does for Apple, Google, and Microsoft. (Go ahead, google them, see what they do, and follow their lead).

Tech support needs to be trained that "vulnerability" is a magic word, and that when somebody calls in with a "vulnerability", it doesn't go through the normal customer support process (which starts with "are you a customer?"), but instead gets shunted over to the vulnerability disclosure process.

How to run a vuln disclosure program

Once you've done the steps above, let your program evolve with the experience you get from receiving such vulnerability reports. You'll figure it out as you go along.

But let me describe some of the problems you are going to have along the way.

For specialty companies with high-value products and a small customer base, you’ll have the problem that nobody uses this feature. Lack of exercise leads to flab, and thus, you’ll have a broken process when a true problem arrives.

You'll get spam on this address. This is why, even though "security@" is the standard address, many companies prefer web forms instead, to reduce the noise. The danger is that whoever has the responsibility of checking the inbox will get so accustomed to ignoring spam that they'll ignore legitimate emails. Spam filters help.
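
If the reports do land in a normal mailbox, one low-tech mitigation is to periodically sweep it for messages that mention vulnerability-related keywords, so real reports don't drown in the spam. Here's a rough sketch; the host, account, password, and keyword list are all placeholders of mine, not a recommendation of specific values.

    # sweep the disclosure mailbox for likely vulnerability reports
    import imaplib

    HOST = "imap.example.com"           # placeholder: your mail server
    USER = "security@example.com"       # placeholder: the disclosure mailbox
    PASSWORD = "app-specific-password"  # placeholder credential

    with imaplib.IMAP4_SSL(HOST) as imap:
        imap.login(USER, PASSWORD)
        imap.select("INBOX")
        # server-side search for subjects that look like vuln reports
        status, data = imap.search(None, '(OR SUBJECT "vulnerability" SUBJECT "CVE")')
        ids = data[0].split()
        print(f"{len(ids)} messages look like vulnerability reports")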

For notifications that are legitimately about security vulnerabilities, most will be nonsense. Vulnerability hunting is a fairly standard activity in the industry, done both by security professionals and by hackers. There are lots of tools to find common problems — tools that any idiot can use.

Which means idiots will use these tools, and not understanding the results from the tools, will claim to have found a vulnerability, and will waste your time telling you about it.

At the same time, there are lots of non-native English speakers, and native speakers who are just really nerdy, who won't express themselves well. They will find real bugs, but you won't be able to tell because their communication is so bad.

Thus, you get these reports, most of which are trash, but a few of which are gems, and you won’t be able to easily tell the difference. It’ll take work on your part, querying the vuln reporter for more information.

Most vuln reporters are emotional and immature. They are usually convinced that your company is evil and stupid. And they are right, after a fashion. When it's a real issue, it's probably something within their narrow expertise that your own engineers don't quite understand. Their motivation isn't necessarily to help your engineers understand the problem, but to watch you fumble the ball, proving their own superiority and your company's weakness. Just because they have glaring personality disorders doesn't mean they aren't right.

Then there is the issue of extortion. A lot of vuln reports will arrive as extortion threats, demanding money or else the person will make the vuln public, or give it to hackers to cause havoc. Many of these threats will demand a "bounty". At this point, we should talk about "vuln bounty programs…".

Vulnerability bounty programs

Once you’ve had a vulnerability disclosure program for some years and have all the kinks worked out, you may want to consider a “bounty” program. Instead of simply responding to such reports, you may want to actively encourage people to find such bugs. It’s a standard feature of big tech companies like Google, Microsoft, and Apple. Even the U.S. Department of Defense has a vuln bounty program.

This is not a marketing effort. Sometimes companies offer bounties that claim their product is so secure that hackers can’t possibly find a bug, and they are offering (say) $100,000 to any hacker who thinks they can. This is garbage. All products have security vulnerabilities. Such bounties are full of small print such that any vulns hackers find won’t match the terms, and thus, not get the payout. It’s just another gimmick that’ll get your product labeled snake oil.

Real vuln bounty programs pay out. Google offers $100,000 for certain kinds of bugs and has paid that sum many times. They have one of the best reputations for security in the industry not because they are so good that hackers can't find vulns, but because they are so responsive to the vulns that are found. They've probably paid more in bounties than any other company and are thus viewed as the most secure. You'd think that having so many bugs would make people think they were less secure, but the industry views them in the opposite light.

You don't want a bounty program. The best companies have vuln bounty programs, but you shouldn't. At least, you shouldn't until you've gotten the simpler vuln disclosure program running first. A bounty program will increase the problems I describe above 10-fold. Unless you've had experience dealing with the normal level of trouble, you'll get overwhelmed by a bug bounty program.

Bounties are related to the extortion problem described above. If all you have is a mere disclosure program without bounties, people will still ask for bounties. These legitimate requests for money may sound like extortion.

The seven stages of denial

When doctors tell patients of their illness, they go through seven stages of denial: disbelief, denial, bargaining, guilt, anger, depression, and acceptance/hope.

When real vulns appear in your program, you’ll go through those same stages. Your techies will find ways of denying that a vuln is real.

This is the opposite problem from the one I describe above. You’ll get a lot of trash that aren’t real bugs, but some bugs are real, and yet your engineers will still claim they aren’t.

I've dealt with this problem for decades, helping companies with reports of what I believe are blindingly obvious and real bugs, which their engineers claim are only "theoretical".

Take "SQL injection" as a good example. This is the most common bug in web apps (and REST applications). How it works is obvious — yet in my experience, most engineers believe that it can't happen. Thus, it persists in applications. One reason engineers will deny it's a bug is that they'll convince themselves that nobody could practically reverse engineer the details out of their product in order to get it to work. In reality, such reverse engineering is easy. I can either use simple reverse engineering tools on the product's binary code, or I can intercept live requests to REST APIs within the running program. Security consultants and hackers are extremely experienced at this. In customer engagements, I've found SQL vulnerabilities within 5 minutes of looking at the product. I'll have spent much longer trying to get the product installed on my computer, and I won't yet have a clue about what the product actually does, but I'll already have found a vuln.

Server side vulns are also easier to find than you might expect. Unlike with the client application, we can assume that hackers won't have direct access to the product — yet they can still find vulnerabilities. A good example is "blind SQL injection", which at first glance appears impossible to exploit.
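
To make the SQL injection point concrete, here is a minimal Python/sqlite3 sketch of the difference between pasting user input into a query and using a parameterized query. The table and the attacker string are made up for illustration.

    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE users (name TEXT, password TEXT)")
    conn.execute("INSERT INTO users VALUES ('alice', 'secret')")

    attacker_input = "' OR '1'='1"

    # vulnerable: attacker-controlled text becomes part of the SQL statement
    query = "SELECT * FROM users WHERE name = '%s'" % attacker_input
    print(conn.execute(query).fetchall())    # returns every row in the table

    # safe: the driver treats the value strictly as data, never as SQL
    print(conn.execute("SELECT * FROM users WHERE name = ?",
                       (attacker_input,)).fetchall())    # returns nothing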

Even the biggest/best companies struggle with this “denial”. You are going to make this mistake repeatedly and ignore bug reports that eventually bite you. It’s just a fact of life.

This denial is related to the extortion problem I described above.  Take as an example where your engineers are in the “denial” phase, claiming that the reported vuln can’t practically be exploited by hackers. The person who reported the bug then offers to write a “proof-of-concept” (PoC) to prove that it can be exploited — but that it would take several days of effort. They demand compensation before going through the effort. Demanding money for work they’ve already done is illegitimate, especially when threats are involved. Asking for money before doing future work is legitimate — people rightly may be unwilling to work without pay. (There’s a hefty “no free bugs” movement in the community from people refusing to work for free).

Full disclosure and Kerckhoffs' Principle

Your marketing people will make claims about the security of your product. Customers will ask for more details. Your marketing people will respond saying they can’t reveal the details, because that’s sensitive information that would help hackers get around the security.

This is the wrong answer. Only insecure products need to hide the details. Secure products publish the details. Some publish the entire source code, others publish enough details that everyone, even malicious hackers, can find ways around the security features — if such ways exist. A good example is the detailed documents Apple publishes about the security of its iPhones.

This idea goes back to the 1880s and is known as Kerckhoffs' Principle in cryptography. It asserts that encryption algorithms should be public instead of secret, and that the only secret should be the password/key. Such secrecy prevents your friends from pointing out obvious flaws but does little to discourage the enemy from reverse engineering the flaws.

My grandfather was a cryptographer in WW II. He told a story how the Germans were using an algorithmic “one time pad”. Only, the “pad” wasn’t “one time” as the Germans thought, but instead repeated. Through brilliant guesswork and reverse engineering, the Allies were able to discover that it repeated, and thus were able to completely break this encryption algorithm.

The same is true of your product. You can make it secure enough that even if hackers know everything about it, they still can't bypass its security. If your product isn't that secure, then hiding the details won't help you much, as hackers are very good at reverse engineering. I've tried to describe above how unexpectedly good they are. All hiding the details does is prevent your friends and customers from discovering those flaws first.

Ideally, this would mean publishing source code. In practice, commercial products won’t do this for obvious reasons. But they can still publish enough information for customers to see what’s going on. The more transparent you are about cybersecurity, the more customers will trust you are doing the right thing — and the more vulnerability disclosure reports you’ll get from people discovering you’ve done the wrong thing, so you can fix it.

This transparency continues after a bug has been found. It means communicating to your customers that such a bug happened, its full danger, how to mitigate it without a patch, and how to apply the software patch you've developed that will fix the bug.

Your sales and marketing people will hate admitting to customers that you had a security bug, but it’s the norm in the industry. Every month when you apply patches from Microsoft, Apple, and Google, they publish full documentation like this on the bug. The best, most trusted companies in the world, have long lists of vulnerabilities in their software. Transparency about their vulns is what make them trusted.

Sure, your competitors will exploit this in order to try to win sales. The response is to point out that this vuln means you have a functioning vuln disclosure program, and that the lack of similar bugs from the competitor means they don't. When it's you yourself who publishes the information, it means you are trustworthy, that you aren't hiding anything. When a competitor doesn't publish such information, it means they are hiding something. Everyone has such vulnerabilities — the best companies admit them.

I’ve been involved in many sales cycles where this has come up. I’ve never found it adversely affected sales. Sometimes it’s been cited as a reason for not buying a product, but by customers who had already made the decision for other reasons (like how their CEO was a cousin of the salesperson) and were just looking for an excuse. I’m not sure I can confidently say that it swung sales the other direction, either, but my general impression is that such transparency has been more positive than negative.

All this is known as full disclosure, the fact that the details of the vuln will eventually become public. The person reporting the bug to you is just telling you first, eventually they will tell everyone else. It’s accepted in the industry that full disclosure is the only responsible way to handle bugs, and that covering them up is irresponsible.

Google’s policy is a good example of this. Their general policy is that anybody who notifies them of vulns should go public in 90 days. This is a little unfair of Google. They use the same 90 day timeframe both for receiving bugs in their product as well as for notifying other companies about bugs. Google has agile development processes such that they can easily release patches within 90 days whereas most other companies have less agile processes that would struggle to release a patch in 6 months.

Your disclosure program should include timeframes. The first is when the discoverer is encouraged to make their vuln public, which should be less than 6 months. There should be other timeframes, such as when they’ll get a response to their notification, which should be one business day, and how long it’ll take engineering to confirm the bug, which should be around a week. At every stage in the process, the person reporting the bug should know the timeframe for the next stage, and an estimate of the final stage when they can go public with the bug, fully disclosing it. Ideally, the person discovering the bug doesn’t actually disclose it because you disclose it first, publicly giving them credit for finding it.

Full disclosure makes the “extortion” problem worse. This is because it’ll appear that those notifying you of the bug are threatening to go public. Some are, and some will happily accept money to keep the bug secret. Others are simply following the standard assumption that it’ll be made public eventually. In other words, that guy demanding money before making a PoC will still go public with his claims in 90 days if you don’t pay him — this is not actually an extortion threat though it sounds like one.

After the vuln disclosure program

Once you start getting a trickle of bug notifications, you’ll start dealing with other issues.

For example, you’ll be encouraged to do secure development. This means putting security in from the very start of development in the requirements specification. You’ll do threat modeling, then create an architecture and design, and so on.

This is fiction. Every company does the steps in reverse order. They start by getting bug reports from the vuln disclosure program. They then patch the code. Then eventually they update their design documents to reflect the change. They then update the requirements specification so product management can track the change.

Eventually, customers will ask if you have a “secure development” program of some sort. Once you’ve been responding to vuln reports for a while, you’ll be able to honestly say “yes”, as you’ve actually been doing this, in an ad-hoc manner.

Another thing for products like the one described in this post is zero-trust. It's the latest buzzword in the cybersecurity industry and means a wide range of different things to different people. But it comes down to this: instead of using the product over a VPN, the customer should be able to use it securely without the VPN. It means the application, the authentication, and the communication channel are secure even without the added security protections of the VPN.

When supporting workers-at-home, the IT/infosec department is probably following some sort of zero-trust model, either some custom solution, or using products from various companies to help it. They are probably going to demand changes in your product, such as integrating authentication/login with some other system.

Treat these features the same as vulnerability bugs. For example, if your product has its own username/password system, with passwords stored on your application server, then that's essentially a security bug. You should instead integrate with other authentication frameworks. Actual passwords stored on your own servers are the toxic waste of the security industry and should be avoided.

At some point, people are going to talk about encryption. Most of it is nonsense. Whenever encryption gets put into a requirements spec, something gets added that doesn't really protect data — but it doesn't matter, because it's optional and turned off anyway.

You should be using SSL to encrypt communications between the client application and the server. If communications happen in the clear, then that’s a bug. Beyond that, though, I’m not sure I have any clear ideas where to encrypt things.
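
As a baseline, here's a minimal sketch of what "use SSL" means on the client side, using Python's standard library; ssl.create_default_context() verifies the server's certificate and hostname by default, and the point is to leave that verification on. The host name is just an example stand-in for your application server.

    import socket
    import ssl

    HOST = "example.com"   # placeholder for your application server

    context = ssl.create_default_context()   # verifies certificate and hostname

    with socket.create_connection((HOST, 443)) as sock:
        with context.wrap_socket(sock, server_hostname=HOST) as tls:
            print("negotiated", tls.version())   # e.g. TLSv1.3
            # application traffic goes over 'tls', not the raw socket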

For products using REST APIs, you should pay attention to the OWASP list of web app bugs. Sure, a custom Windows application isn't the same as a public web server, but most of the OWASP bugs apply to anything built around REST APIs. That includes SQL injection, but also a bunch of other bugs. Hackers, security engineers, and customers are going to use the OWASP list when testing your product. If your engineers aren't all knowledgeable about the OWASP list, it's certain your product has many of the listed bugs.

When you get a vuln notification for one of the OWASP bugs, then it’s a good idea to start hunting down related ones in the same area of the product.

Outsourcing vuln disclosure

I describe the problems of vuln disclosure programs above. It’s a simple process that nonetheless is difficult to get right.

There are companies who will deal with this pain for you, like Bugcrowd or HackerOne. I don't have enough experience to recommend any of them. I'm often a critic, such as how recently they seem willing to help cover up bugs that I think should be fully disclosed. But they will have the experience you lack when setting up a vuln disclosure program, and can be especially useful at filtering the incoming nonsense from the true reports. They are also somebody to blame if a true report gets improperly filtered.

Conclusion

Somebody asked "how do we secure our work-at-home application?". My simple answer is "avoid gimmicks; instead, do a vulnerability disclosure program". It's easy to get started, such as setting up a "security@" email account that goes to somebody who won't ignore it. It's hard to get right, but you'll figure it out as you go along.

CISSP is at most equivalent to a 2-year associates degree

Post Syndicated from Robert Graham original https://blog.erratasec.com/2020/05/cissp-is-at-most-equivalent-to-2-year.html

There are few college programs for "cybersecurity". Instead, people rely upon industry "certifications", programs that attempt to certify that a person has the requisite skills. The most popular is known as the "CISSP". In the news today, European authorities decided a "CISSP was equivalent to a masters degree". I think this news is garbled. Looking into the details, studying things like "UK NARIC RQF level 11", it seems instead that the equivalency isn't with master's "degrees" so much as with post-graduate professional awards and certifications that are common in industry. Even then, it places the CISSP at too high a level: it's an entry-level certification that doesn't require a college degree, and teaches students only familiarity with the buzzwords used in the industry rather than a deeper understanding of how things work.

Recognition of equivalent qualifications and skills
The outrage over this has been “equivalent to a master’s degree”. I don’t think this is the case. Instead, it seems “equivalent to professional awards and recognition”.
The background behind this is how countries recognize “equivalent” work done in other countries. For example, a German Diplom from a university is a bit more than a U.S. bachelor’s degree, but a bit less than a U.S. master’s degree. How, then, do you find an equivalent between the two?
Part of this is occupational, vocational, and professional awards, certifications, and other forms of recognition. A lot of practical work experience is often equivalent to, and even better than, academic coursework.
The press release here discusses the UK’s NARIC RQF framework, putting the CISSP at level 11. This makes it equivalent to post-graduate coursework and various forms of professional recognition.
I’m not sure it means it’s the same as a “master’s degree”. At RQF level 11, there is a fundamental difference between an “award” requiring up to 120 hours of coursework, a “certificate”, and a “degree” requiring more than 370 hours of coursework. Assuming everything else checks out, this would place the CISSP at the “award” level, not a “certificate” or “degree” level.
The question here is whether the CISSP deserves recognition along with other professional certifications. Below I will argue that it doesn't.
Superficial not technical

The CISSP isn’t a technical certification. It covers all the buzzwords in the industry so you know what they refer to, but doesn’t explain how anything works. You are tested on the definition of the term “firewall” but you aren’t tested on any detail about how firewalls work.
This has an enormous impact on the cybersecurity industry, with hordes of "certified" professionals who are nonetheless non-technical, not knowing how things work.
This places the CISSP clearly at some lower RQF level. "RQF level 11" is reserved for people with a superior understanding of how things work, whereas the CISSP is really an entry-level certification.
No college degree required

The other certifications at this level tend to require a college degree. They are a refinement of what was learned in college.
The opposite is true of the CISSP. It requires no college degree.
Now, I’m not a fan of college degrees. Idiots seem capable of getting such degrees without understanding the content, so they are not a good badge of expertise. But at least the majority of college programs take students deeper into understanding the theory of how things work rather than just the superficial level of the CISSP.
No experience required

The CISSP requires 5 years of job experience, but as far as I can tell, most people fudge it. Most jobs these days involve computers, and most computer jobs have some security component. Therefore, when getting a CISSP, applicants exaggerate the security responsibilities of their jobs.
This is why the CISSP is widely regarded as an entry-level certification, as so many holders of the certification are inexperienced.
Grossly outdated

In a rapidly evolving industry, of course such certifications will be outdated. I’m not talking about that.
Instead, I'm talking about how much of the coursework was already outdated in the 1980s, such as the Bell-LaPadula model or the OSI model.
Moreover, I’m not criticizing the certification for having outdated bits — I’m criticizing it for having such low technical standards that they don’t even understand how outdated it is.
They have things like “OSI Session Layer” which nobody really understands.
The OSI Session Layer was a concept from 1970s mainframes that the OSI thought would be important in future network standards, but when the industry moved from mainframes to personal computers in the 1980s, the idea disappeared, to be replaced with new and different “session” concepts that are no longer in a “layer”.
There's nobody involved in the CISSP tests with sufficient expertise to understand this. Instead, they learned it was important, even though they never really grokked it, so they insist on putting it on the tests for the next generation. In other words, it's not just the test that's superficial, but the entire organization behind the test.
In contrast, other organizations are run by experts. Those teaching master’s programs hold Ph.Ds, for example.
Corrupt
No single university has a monopoly on granting degrees. Instead, there are thousands of organizations around the world conferring degrees.
Conversely, the organization behind the CISSP (the ISC2) has a monopoly on the CISSP. They spend considerable effort marketing it, convincing organizations such as the UK NARIC to value it higher than it deserves. It’s an entry level certification that the CISSP tries to convince organizations is worth far more.
It’s a little crooked in its efforts. In an industry that values openness and transparency, the organization is notoriously opaque. It has some of the worst marketing, such as the above press release implying that the CISSP is equivalent to a master’s degree. It never actually says “equivalent to a master’s degree”, but of course, that’s how everyone has interpreted it.
Conclusion

I’m not saying anything is perfect. Academic degrees have their own problems. Other professional certifications have problems. Determined idiots regularly succeed at defeating even most discerning of recognition granting institutions.
The issue here is at what level to place the CISSP. That level is around that of an associate's degree, the first two years of university. It's probably worth undergraduate credit, but not post-graduate credit. It's nowhere near the standard of other post-graduate and professional certifications.


Bonus: if not CISSP, what then?

If the CISSP is crap, what should people use instead?

A computer science degree or notable achievement.

You should have an organization with expertise at the top, with managers having enough expertise themselves to evaluate candidates. There’s a ton of really good people with neither college degrees nor professional certifications out there. Such things are useless to people with so much expertise and experience that such things are far beneath them. Organizations full of such people are the most effective ones.

However, that's a minority. The majority of jobs are managed by people who can't judge candidates, who therefore must rely upon third parties, such as degrees and certificates. Government jobs and some non-tech industry jobs are good examples of this.

In such cases, talented people will either rise to lead the teams, and fix them — or get frustrated and leave to find other jobs that value their actual contribution more than their certification.

But if that's where you are, then I'd hire computer science graduates from universities. At least, if the students actually paid attention, they learned how things work underneath, and can easily learn the cybersecurity buzzwords on top of that knowledge. In contrast, all a CISSP promises is that students learned the buzzwords.

I'm a big critic of academia, though I seem to have gotten more out of college than the norm. So many bad people have degrees and so many good people don't. But at least if we are talking about bad credentials, a bachelor's degree is less bad than a CISSP.

That's not to say the CISSP is all bad. College is out of reach for many people. Getting a CISSP certification is an alternate route into the profession. The point of this post isn't that the CISSP is all bad, but that it's closer to a 2-year "associate's degree" than a 4-year "undergraduate degree" or a post-graduate degree.

About them Zoom vulns…

Post Syndicated from Robert Graham original https://blog.erratasec.com/2020/04/about-them-zoom-vulns.html

Today a couple of vulnerabilities were announced in Zoom, the popular work-from-home conferencing app. Hackers can possibly exploit these to do evil things to you, such as steal your password. Because of COVID-19, these vulns have hit the mainstream media. This means my non-techy friends and relatives have been asking about it. I thought I'd write up a blogpost answering their questions.

The short answer is that you don’t need to worry about it. Unless you do bad things, like using the same password everywhere, it’s unlikely to affect you. You should worry more about wearing pants on your Zoom video conferences in case you forget and stand up.

Now is a good time to remind people to stop using the same password everywhere and to visit https://haveibeenpwned.com to view all the accounts where they've had their password stolen. Using the same password everywhere is the #1 vulnerability the average person is exposed to, and is a possible problem here. For critical accounts (Windows login, bank, email), use a different password for each. (Sure, for accounts you don't care about, use the same password everywhere; I use 'Foobar1234'). Write these passwords down on paper and put that paper in a secure location. Don't print them, don't store them in a file on your computer. Writing it on a Post-It note taped under your keyboard is adequate security if you trust everyone in your household.
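
As a side note, the same site exposes a "Pwned Passwords" API that lets you check whether a particular password has shown up in known breaches without ever sending the password itself — only the first five characters of its SHA-1 hash leave your machine. Here's a small sketch of that check; the endpoint is the publicly documented one, the rest is mine.

    import hashlib
    import urllib.request

    def breach_count(password: str) -> int:
        """Return how many times this password appears in known breaches."""
        sha1 = hashlib.sha1(password.encode()).hexdigest().upper()
        prefix, suffix = sha1[:5], sha1[5:]
        # only the 5-character prefix is ever sent over the network
        url = "https://api.pwnedpasswords.com/range/" + prefix
        with urllib.request.urlopen(url) as response:
            for line in response.read().decode().splitlines():
                candidate, count = line.split(":")
                if candidate == suffix:
                    return int(count)
        return 0

    print(breach_count("Foobar1234"))   # almost certainly nonzero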

If hackers use this Zoom method to steal your Windows password, then you aren’t in much danger. They can’t log into your computer because it’s almost certainly behind a firewall. And they can’t use the password on your other accounts, because it’s not the same.

Why you shouldn’t worry

The reason you shouldn’t worry about this password stealing problem is because it’s everywhere, not just Zoom. It’s also here in this browser you are using. If you click on file://hackme.robertgraham.com/foo/bar.html, then I can grab your password in exactly the same way as if you clicked on that vulnerable link in Zoom chat. That’s how the Zoom bug works: hackers post these evil links in the chat window during a Zoom conference.

It’s hard to say Zoom has a vulnerability when so many other applications have the same issue.

Many home ISPs block such connections to the Internet, such as Comcast, AT&T, Cox, Verizon Wireless, and others. If this is the case, when you click on the above link, nothing will happen. Your computer will try to contact hackme.robertgraham.com, and fail. You may be protected from clicking on the above link without doing anything. If your ISP doesn’t block such connections, you can configure your home router to do this. Go into the firewall settings and block “TCP port 445 outbound”. Alternatively, you can configure Windows to only follow such links internal to your home network, but not to the Internet.
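
If you’re curious whether your ISP or router already blocks this, you can test it yourself. Here’s a rough sketch in Python that tries an outbound connection to port 445, reusing the hostname from my example link above; it’s an illustration, not a definitive test:

import socket

# A quick-and-dirty test: can this machine reach TCP port 445 on the Internet?
# The hostname is the same one used in the example link above.
def check_outbound_445(host="hackme.robertgraham.com", timeout=5):
    try:
        with socket.create_connection((host, 445), timeout=timeout):
            return "open (connection succeeded)"
    except ConnectionRefusedError:
        # The packets reached the far end, so nothing is filtering them.
        return "open (nothing answered, but nothing blocked it either)"
    except OSError:
        # A silent timeout usually means your ISP, router, or Windows blocked it.
        return "probably blocked"

print("Outbound TCP/445 looks:", check_outbound_445())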

If hackers (like me if you click on the above link) get your password, then they probably can’t use it. That’s because while your home Internet router allows outbound connections, it (almost always) blocks inbound connections. Thus, if I steal your Windows password, I can’t use it to log into your home computer unless I also physically break into your house. But if I can break into your computer physically, I can hack it without knowing your password.

The same arguments apply to corporate desktops. Corporations should block such outbound connections. They can do this at their gateway firewall. They can also push policy to all the Windows desktops, so that desktops can only log into local file servers instead of remote ones. They should block inbound connections to this protocol. They should consider using two-factor authentication. If they follow standard practices, they have little to worry about.

If your Windows password is the same as your email password, then you have a potential problem. While I can’t use it to hack your Windows desktop computer, I can use it to hack your email account. Or your bank account. Or your Amazon.com account.

What you should do to protect yourself

By far the most important thing you should do to protect yourself from Internet threats is to use a different password for all your important accounts, like your home computer, your email, Amazon.com, and your bank. Write these down on paper (not a file on your computer). Store copies of that paper in a safe place. I put them in a copy of the book Catcher in the Rye on my bookshelf.

Secondly, be suspicious of links. If a friend invites you to a Zoom conference and says “hey, click on this link and tell me what you think”, then be suspicious. It may not actually be your friend, and the link may be hostile. This applies to all links you get, in applications other than Zoom, like your email client. There are many ways links are a threat other than this one technique.

This second point isn’t good advice: these technologies are designed for you to click on links. It’s impossible to be constantly vigilant. Even experts get fooled occasionally. You shouldn’t depend upon this protecting you. It’s like social distancing and the novel coronavirus: it cuts down on the threat, but doesn’t come close to eliminating it.

Make sure you block outbound port 445. You can configure Windows to do this, your home router, and of course, your ISP may be doing this for you.

Consider using two-factor authentication (such as SMS messages to your mobile phone) or password managers. Increasingly websites don’t manage username/passwords themselves, but instead use Google, Facebook, or Twitter accounts as the login. Pick those in preference to creating a new password protected account. Of course, this means if somebody tricks you to reveal your Google/Facebook/Twitter password you are in trouble, but you can use two-factor authentication for those accounts to make that less likely.

Why this hack works

You are familiar with web addresses like https://google.com/search?q=foobar. The first part of this address, https:// says that it’s a “secure hypertext protocol” address.

Other addresses are possible. One such address is file:// as in the example above. This tells the computer to use the Microsoft Windows “file server” protocol. This protocol is used within corporate networks, where desktops connect to file servers within the corporate network. When clicking on such a link, your computer will automatically send your username and encrypted password (sic) to log into the file server.

The internal corporate network is just a subset of the entire Internet. Thus, instead of naming servers local to the corporate network, the links can refer to remote Internet servers.

Nobody asks you for your password when you click on such links, either in this webpage, an email, or in Zoom chat. Instead, Windows is supplying the encrypted password you entered when you logged onto your desktop.

The hacker is only stealing the encrypted form of the password, not the original password. Therefore, their next step is to crack the password. This means guessing zillions of possible passwords, encrypting them, and seeing if there’s a match. They can do this at rates of billions of guesses per second using specialized hardware and software on their own computers.

That means weak passwords like “Broncos2016” will get cracked in less than a second. But strong passwords like “pUqyQAM6GzWpWEyg” have trillions of trillions of combinations, so that they can’t be guessed/cracked in a billion years, even by the NSA. Don’t take this to mean that you need a “strong password” everywhere. This becomes very difficult to manage. Instead, people choose to use password managers or two-factor authentication or other techniques.
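
If you want to see the arithmetic behind that claim, here’s a back-of-the-envelope calculation in Python; the guessing rate is a made-up round number, not a benchmark of any particular cracking rig:

seconds_per_year = 365 * 24 * 3600
guesses_per_second = 100e9     # "billions of guesses per second", rounded up

# Random passwords drawn from upper/lower case letters plus digits (62 symbols).
for length in (8, 12, 16):
    combinations = 62 ** length
    years = combinations / guesses_per_second / seconds_per_year
    print(f"{length} chars: {combinations:.1e} combinations, ~{years:.1e} years to try them all")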

Note that on Windows, if the prefix is missing, it is assumed to be “file:”, so the links may appear as //hackme.robertgraham.com/foo/bar.html or \\hackme.robertgraham.com\foo\bar.html.

Is this overhyped?

Lots of people are criticizing this story as being overhyped. I’m not sure it is. It’s one of those stories that merits publication, yet at the same time, not the widespread coverage for the mainstream. It’s spread further than it normally would have because of all the attention on the pandemic and work-from-home.

I don’t know if Zoom will “fix” this bug. It’s a useful feature on corporate conferences, to point to files on corporate servers. It’s up to companies (and individuals) to protect themselves generally against this threat, because it appears in a wide variety of applications, not just Zoom.

What about the other vuln?

Two vulns were announced. The one that’s gathered everyone’s attention is the “stealing passwords” one. The other vuln is even less dangerous. It allows somebody with access to a Mac to use the Zoom app to gain control over the computer. But if somebody had that much control over your Mac, then they can do other bad things to it.

Summary

In response to this news story, the thing you need to worry about is wearing pants, or making sure other household members wear pants. You never know when the Zoom videoconferencing camera accidentally catches somebody in the wrong pose. Unless you are extremely paranoid, I don’t think you need to worry about this issue in particular.

Huawei backdoors explanation, explained

Post Syndicated from Robert Graham original https://blog.erratasec.com/2020/03/huawei-backdoors-explanation-explained.html

Today Huawei published a video explaining the concept of “backdoors” in telco equipment. Many are criticizing the video for being tone deaf. I don’t understand this concept of “tone deafness”. Instead, I want to explore the facts.


This video seems to be in response to last month’s story from the Wall Street Journal about Huawei misusing law enforcement backdoors. All telco equipment has backdoors usable only by law enforcement; the accusation is that Huawei has a backdoor into this backdoor, so that Chinese intelligence can use it.

That story was bogus. Sure, Huawei is probably guilty of providing backdoor access to the Chinese government, but something is deeply flawed with this particular story.

We know something is wrong with the story because the U.S. officials cited are anonymous. We don’t know who they are or what position they have in the government. If everything they said was true, they wouldn’t insist on being anonymous, but would stand up and declare it in a press conference so that every newspaper could report it. When something is not true or spun, then they anonymously “leak” it to a corrupt journalist to report it their way.

This is objectively bad journalism. The Society of Professional Journalists calls this the “Washington Game”. They also discuss this on their Code of Ethics page. Yes, it’s really common in Washington D.C. reporting; you see it all the time, especially with the NYTimes, Wall Street Journal, and Washington Post. But it happens because what the government says is news, regardless of whether it’s false or propaganda, giving government officials the ability to influence journalists. Exclusive access to corrupt journalists is how they influence stories.

We know the reporter is being especially shady because of the one quote in the story that is attributed to a named official:

“We have evidence that Huawei has the capability secretly to access sensitive and personal information in systems it maintains and sells around the world,” said national security adviser Robert O’Brien.

This quote is deceptive because O’Brien doesn’t say any of the things that readers assume he’s saying. He doesn’t actually confirm any of the allegations in the rest of the story.

It doesn’t say:

  • That Huawei has used that capability.
  • That Huawei intentionally put that capability there.
  • That this is special to Huawei (rather than everywhere in the industry).

In fact, this quote applies to every telco equipment maker. They all have law enforcement backdoors. These backdoors always have “controls” to prevent them from being misused. But these controls are always flawed, either in design or how they are used in the real world.

Moreover, all telcos have maintenance/service contracts with the equipment makers. When there are ways around such controls, even unintentional ones, it’s the company’s own support engineers who will best know them.

I absolutely believe Huawei that it has done at least as much as any vendor to prevent backdoor access to its equipment.

At the same time, I also know that Huawei’s maintenance/service abilities have been used for intelligence. Several years ago there was an international incident. My company happened to be doing work with the local mobile company at the time. We watched as a Huawei service engineer logged in using their normal service credentials and queried the VLR databases for all the mobile devices connected to the cell towers nearest the incident in the time in question. After they executed the query, they erased the evidence from the log files.

Maybe this was just a support engineer who was curious. Maybe it was Chinese intelligence. Or, maybe it was the NSA. Seriously, if I were head of the NSA, I’d make it a priority to hack into Huawei’s support departments (or bribe their support engineers) in order to get this sort of backdoor access around the world.

Thus, while I believe Huawei has done as much as any other vendor to close backdoors, I also know of at least one case where they have abused backdoors.

Now let’s talk about the contents of the video. It classifies “backdoors” in three ways:

  • law-enforcement “front doors”
  • service/maintenance access
  • malicious backdoors

I think their first point is to signal to the FBI that they are on law enforcement’s side in the crypto-backdoor debate. The FBI takes the same twisted definition, that law-enforcement backdoors aren’t backdoors, but front-doors.

It’s still a backdoor, even if it’s for law-enforcement. It’s not in the interests of the caller/callee to be eavesdropped. Thus, from their point of view, the eavesdropping is “malicious”, even if it’s in the interests of society as a whole.

I mention this because this should demonstrate how Huawei’s adoption of the law enforcement point of view backfires. What happens when Chinese intelligence comes to Huawei and demands access in a manner that is clearly legal under Chinese law? By accepting that all law-enforcement demands are legitimate, Huawei accepts that all Chinese government demands are legitimate.

Huawei may be no worse than any other company, but China is worse than free democracies. Demands that count as legitimate law enforcement in their country are intolerable in free countries. We’ve had six months of protests in Hong Kong over that issue.

In other words, Huawei is saying they don’t have backdoors because, in fact, they are front-doors for the Chinese government.

In conclusion, I don’t find that Huawei video to be “tone deaf”. Huawei has good reason to believe it’s being unfairly portrayed in “fake news” articles, like the WSJ article I cited above. At the same time, the threat posed by Huawei for Chinese spying is real.

A requirements spec for voting

Post Syndicated from Robert Graham original https://blog.erratasec.com/2020/03/a-requirements-spec-for-voting.html

In software development, we start with a “requirements specification” defining what the software is supposed to do. Voting machine security is often in the news, with suspicion the Russians are trying to subvert our elections. Would blockchain or mobile phone voting work? I don’t know. These things have tradeoffs that may or may not work, depending upon what the requirements are. I haven’t seen the requirements written down anywhere. So I thought I’d write some.

One requirement is that the results of an election must seem legitimate. That’s why responsible candidates have a “concession speech” when they lose. When John McCain lost the election to Barack Obama, he started his speech with:

“My friends, we have come to the end of a long journey. The American people have spoken, and they have spoken clearly. A little while ago, I had the honor of calling Sen. Barack Obama — to congratulate him on being elected the next president of the country that we both love.”

This was important. Many of his supporters were pointing out irregularities in various states, wanting to continue the fight. But there are always irregularities, or things that look like irregularities. In every election, if a candidate really wanted to, they could drag out an election indefinitely investigating these irregularities. Responsible candidates therefore concede with such speeches, telling their supporters to stop fighting.

It’s one of the problematic things in our current election system. Even before his likely loss to Hillary, Trump was already stirring up his voters to continue the fight after the election. He actually won that election, so the fight never occurred, but it was likely to occur. It’s hard to imagine Trump ever conceding a fairly won election. I hate to single out Trump here (though he deserves criticism on this issue) because it seems these days both sides are convinced that the other side is cheating.

The goal of adversaries like Putin’s Russia isn’t necessarily to get favored candidates elected, but to delegitimize the candidates who do get elected. As long as the opponents of the winner believe they have been cheated, then Russia wins.

Is the actual requirement of election security that the elections are actually secure? Or is the requirement instead that they appear secure? After all, when two candidates have nearly 50% of the real vote, then it doesn’t really matter which one has mathematical legitimacy. It matters more which has political legitimacy.

Another requirement is that the rules be fixed ahead of time. This was the big problem in the Florida recounts in the 2000 Bush election. Votes had ambiguities, like hanging chads. The legislature came up with rules for how to resolve the ambiguities, and how to count the votes, after the votes had been cast. Naturally, the party in power that comes up with the rules will choose rules that favor itself.

The state of Georgia recently passed a law on election systems. Computer scientists in election security criticized the law because it didn’t have their favorite approach, voter-verifiable paper ballots. Instead, the ballot printed a bar code.

But the bigger problem with the law is that it left open what happens if tampering is discovered. If an audit of the paper ballots finds discrepancies, what happens then? The answer is the legislature comes up with more rules. You don’t need to secretly tamper with votes, you can instead do so publicly, so that everyone knows the vote was tampered with. This then throws the problem to the state legislature to decide the victor.

Even the most perfectly secured voting system proposed by academics doesn’t solve the problem. It’ll detect voter tampering, but doesn’t resolve when tampering is detected. What do you do with tampered votes? If you throw them out, it means one candidate wins. If you somehow fix them, it means the other candidate wins. Or, you try to rerun the election, in which case a third candidate wins.

Usability is a requirement. A large part of the population cannot read simple directions. By this I don’t mean “the dumb people”, I mean everyone who has struggled to assemble Ikea furniture or a child’s toy.

That’s one of the purposes of voting machines: to help people who would otherwise be confused by paper ballots. It’s why there was a massive move to electronic machines after the Bush 2000 Florida election: they were more usable (less confusing).

This has long been a struggle in cybersecurity, as “secure” wars against “usable”. A secure solution that confuses 10% of the population is not a solution. A solution that the entire population finds easy, but which has security flaws, is still preferable.

Election security isn’t purely about the ballot on election day. It includes the process months or years beforehand, such as registering the voters or devising what goes into the ballot. It includes the process afterwards when counting/tabulating the votes.

A perfectly secure ballot therefore doesn’t mean a secure election.

Much of the suspected Russian hacking actually involved the voter registration rolls. Tampering with those lists, adding supporters (or fake people) to your own side, or removing supporters of the opponents side, can swing the election.

This leads to one of the biggest problems: voter turnout and disenfranchisement, preventing people from voting. Perfect election security doesn’t solve this.

It’s hard to measure exactly how big the problem is. Both sides are convinced the other side is disenfranchising their own voters. In many cases, it’s a conspiracy theory.

But we do know that voter turnout in the United States is abysmally low versus other countries. In U.S. presidential elections, roughly 50% of eligible voters vote. In other democracies, the percentage is closer to 90%.

This distorts the elections toward extremes. As a candidate, you can choose either the “moderate” position, trying to win some votes from the other side, or you can choose the “extreme” positions, hoping to excite voters to get out and actually vote. Getting 10% more of your voters in the voting booths is better than luring 5% from the other side.

One solution proposed by many is to make election day a national holiday, so that voters don’t have to choose between voting and work. Obviously, this would mean voting on a Wednesday — people may be willing to skip work to vote, but if voting day were a Monday, they’d turn it into a three-day weekend vacation and skip voting instead.

Voting apps on mobile phones have horrible security problems that make us cybersecurity types run screaming away from the solution. On the other hand, mobile phones have the best chance of solving this participation issue, increasing turnout from 50% to 90%. Are cybersecurity risks acceptable if they come with such dramatic improvements in participation rates? Conversely, can we describe as broken any system that fails to achieve such rates? Is 90% participation one of our “requirements” that we are failing to meet?

By the way, by “90%” I mean “of people 18 or over”, not “eligible voters”. Obviously, you can improve participation among eligible voters by playing with who is eligible. Many states forbid convicted felons from voting, which sounds like a good idea on its surface, but which is problematic in a democracy that jails 10 times more of its population than other democracies. Whatever legitimate reasons for removing eligibility has to therefore fit within that 90% number.

The best way we have to make voting secure is to make it completely transparent, so that everybody knows how you voted. This is obviously not a viable strategy, because of course that then allows people to coerce/bribe you into voting a certain way. So we want anonymity.

But is perfect anonymity necessary? Many voters don’t care if their vote is public, and indeed, want to proclaim very publicly who they voted for.

Imagine that to secure the mobile voting app, it randomly chooses 1% of votes and makes them public. It would be a risk voters would accept when using the app versus some other voting mechanism.

I mean this as a thought experiment. I chose 1% random selection because it prevents obvious coercion and bribery. But this naive implementation still has flaws. More work needs to be done to stop coercion, and you have to secure the system from hackers who only reveal the votes they haven’t tampered with. But I work with a lot of cryptographic protocols that are able to preserve things in strange ways, so while a naive protocol may be flawed, I’m not sure all are.

In other words, the requirement of the system is not that votes are anonymous, but that votes cannot be coerced or bribed. This is a common problem in software development: the requirements aren’t the actual requirements, but written in a way prejudicial toward a preferred solution. This excludes viable solutions.

This blogpost is about questions not answers. As a software developer, I know that we start with listing the requirements the system is designed to solve. I want to know what those requirements are for “voting in a democracy”. I’m unsuccessful googling for such a list; what I do find fails to include the above ideas, for example. I know that blockchain is a stupid answer to most any question, but on the other hand, I don’t know exactly what this question is, so how can I explicitly call blockchain a stupid solution? Mobile devices have laughable security for voting, but at the same time, our voting has major problems they would solve, so I can’t rule them out either.


This comment makes a good point, as was demonstrated by the Iowa Democratic caucuses:

Software development normally works in iterative steps, whereas it has to work right during voting. You can’t expect to patch the app halfway through the voting day.

There’s no evidence the Saudis hacked Jeff Bezos’s iPhone

Post Syndicated from Robert Graham original https://blog.erratasec.com/2020/01/theres-no-evidence-saudis-hacked-jeff.html

There’s no evidence the Saudis hacked Jeff Bezos’s iPhone.

This is the conclusion of all the independent experts who have reviewed the public report behind the U.N.’s accusations. That report failed to find evidence proving the theory, but instead simply found unknown things it couldn’t explain, which it pretended were evidence.

This is a common flaw in such forensics reports. When there’s evidence, it’s usually found and reported. When there’s no evidence, investigators keep looking. Today’s devices are complex, so if you keep looking, you always find anomalies you can’t explain. There are only two results from such investigations: proof of bad things or anomalies that suggest bad things. There’s never any proof that no bad things exist (at least, not in my experience).

Bizarre and inexplicable behavior doesn’t mean a hacker attack. Engineers trying to debug problems, and support technicians helping customers, find such behavior all the time. Pretty much every user of technology experiences this. Paranoid users often think there’s a conspiracy against them when electronics behave strangely, but “behaving strangely” is perfectly normal.

When you start with the theory that hackers are involved, then you have an explanation for all that’s unexplainable. It’s all consistent with the theory, thus proving it. This is called “confirmation bias”. It’s the same thing that props up conspiracy theories like UFOs: space aliens can do anything, thus, anything unexplainable is proof of space aliens. Alternate explanations, like skunkworks testing a new jet, never seem as plausible.

The investigators were hired to confirm bias. Their job wasn’t to do an unbiased investigation of the phone, but instead, to find evidence confirming the suspicion that the Saudis hacked Bezos.

Remember that the story started in February of 2019 when the National Enquirer tried to extort Jeff Bezos with sexts between him and his paramour Lauren Sanchez. Bezos immediately accused the Saudis of being involved. Even after it was revealed that the sexts came from Michael Sanchez, the paramour’s brother, Bezos’s team doubled down on their accusations that the Saudis hacked Bezos’s phone.

The FTI report tells a story beginning with the Saudi Crown Prince sending Bezos a WhatsApp message containing a video. The story goes:

The downloader that delivered the 4.22MB video was encrypted, delaying or preventing further study of the code delivered along with the video. It should be noted that the encrypted WhatsApp file sent from MBS’ account was slightly larger than the video itself.

This story is invalid. Such messages use end-to-end encryption, which means that while nobody in between can decrypt them (not even WhatsApp), anybody with possession of the ends can. That’s how the technology is supposed to work. If Bezos loses/breaks his phone and needs to restore a backup onto a new phone, the backup needs to have the keys used to decrypt the WhatsApp messages.

Thus, the forensics image taken by the investigators had the necessary keys to decrypt the video — the investigators simply didn’t know about them. In a previous blogpost I explain these magical WhatsApp keys and where to find them so that anybody, even you at home, can forensics their own iPhone, retrieve these keys, and decrypt their own videos.

The above story implicates the encrypted file because it’s slightly larger than the unencrypted file. One possible explanation is that these extra bytes contain an exploit, virus, or malware.

However, there’s a more accurate explanation: all encrypted WhatsApp videos will be larger than the unencrypted versions by between 10 and 25 bytes, for verification and padding. That’s simply a standard part of how encryption works.

This is a great demonstration of confirmation bias in action, how dragons breed on the edge of maps. When you expect the encrypted and unencrypted versions to be the same size, this anomaly is inexplicable and suggestive of hacker activity. When you know how the encryption works, how there’s always an extra 10 to 25 bytes, then the idea is silly.

It’s important to recognize how much the story hinges on this one fact. They have the unencrypted video and it’s completely innocent. We have the technology to exonerate that video, and it’s exonerated. Thus, if a hack occurred, it must be hidden behind the encryption. But when we unmask the encryption and find only the video we already have, then the entire report will break down. There will no longer be a link between any hack found on the phone and the Saudis.

But even if there isn’t a link to the Saudis, there may still be evidence the phone was hacked. The story from the FTI forensics report continues:

We know from a comprehensive examination of forensics artifacts on Bezos’ phone that within hours of the encrypted downloader being received, a massive and unauthorized exfiltration of data from Bezos’ phone began, continuing and escalating for months thereafter. … The amount of data being transmitted out of Bezos’ phone changed dramatically after receiving the WhatsApp video file and never returned to baseline. Following execution of the encrypted downloader sent from MBS’ account, egress on the device immediately jumped by approximately 29,000 percent.

I’ve performed the same sort of forensics on my phones and have found that there is no such thing as some sort of normal “baseline” of traffic, as described in this Twitter thread. One reason is that users do unexpected things, like forwarding an email that has a large attachment, or visiting a website that causes unexpectedly high amounts of traffic. Another reason is that the traffic isn’t stored in nice hourly or daily buckets as the above story implies. Instead, when you use an app for months, you get just a single record of how much data the app has sent over that time. For example, I see one day where the Uber app exfiltrated 56-megabytes of data from my phone, which seems an inexplicable anomaly. However, that’s just the date the record was recorded, reflecting months of activity as Uber has run in the background on my phone.

I can’t explain all the bizarre stuff I see on my phone. I only ever download podcasts, but the records show the app uploaded 150-megabytes. Even when running over months, this is excessive. But lack of explanation doesn’t mean this is evidence of hacker activity trying to hide traffic inside the podcast app. It just means something odd is going on, probably a bug or inefficient design, that a support engineer might want to know about in order to fix.

Conclusion

Further FTI investigation might find more evidence that actually shows a hack or Saudi guilt, but the current report should be considered debunked. It contains no evidence, only things it’s twisted to create the impression of evidence.

Bezos’s phone may have been hacked. The Saudis may be responsible. They certainly have the means, motive, and opportunity to do so. There’s no evidence exonerating the Saudis as a whole.

But there is evidence that will either prove Saudi culpability or exonerate that one video, the video upon which the entire FTI report hinges. And we know that video will likely be exonerated simply because that’s how technology works.

The entire story hinges on that one video. If debunked, the house of cards falls down, at least until new evidence is found.

The mainstream press has done a crappy job. It’s a single-sourced story starting with “experts say”. But it’s not many experts, just the FTI team. And they aren’t unbiased experts, but people hired specifically to prove Bezos’s accusation against the Saudis. Rather than showing healthy skepticism and looking for other experts to dispute the story, the press has jumped in taking Bezos’s side in the dispute.

I am an expert, and as I’ve shown in this blogpost (and linked posts with technical details), I can absolutely confirm the FTI report is complete bunk. It contains no evidence of a hack, just anomalies it pretends are evidence.

How to decrypt WhatsApp end-to-end media files

Post Syndicated from Robert Graham original https://blog.erratasec.com/2020/01/how-to-decrypt-whatsapp-end-to-end.html

At the center of the “Saudis hacked Bezos” story is a mysterious video file investigators couldn’t decrypt, sent by Saudi Crown Prince MBS to Bezos via WhatsApp. In this blog post, I show how to decrypt it. Once decrypted, we’ll either have a smoking gun proving the Saudi’s guilt, or exoneration showing that nothing in the report implicated the Saudis. I show how everyone can replicate this on their own iPhones.

The steps are simple:

  • backup the phone to your computer (macOS or Windows), using one of many freely available tools, such as Apple’s own iTunes app
  • extract the database containing WhatsApp messages from that backup, using one of many freely available tools, or just hunt for the specific file yourself
  • grab the .enc file and decryption key from that database, using one of many freely available SQL tools
  • decrypt the video, using a tool I just created on GitHub

End-to-end encrypted downloader

The FTI report says that within hours of receiving a suspicious video that Bezos’s iPhone began behaving strangely. The report says:

…analysis revealed that the suspect video had been delivered via an encrypted downloader host on WhatsApp’s media server. Due to WhatsApp’s end-to-end encryption, the contents of the downloader cannot be practically determined. 

The phrase “encrypted downloader” is not a technical term but something the investigators invented. It sounds like a term we use in malware/viruses, where a first stage downloads later stages using encryption. But that’s not what happened here.
Instead, the file in question is simply the video itself, encrypted, with a few extra bytes due to encryption overhead (up to 15 bytes of padding at the end of the encrypted data, plus a 10-byte checksum appended after it).

Now let’s talk about “end-to-end encryption”. This only means that those in middle can’t decrypt the file, not even WhatsApp’s servers. But those on the ends can — and that’s what we have here, one of the ends. Bezos can upgrade his old iPhone X to a new iPhone XS by backing up the old phone and restoring onto the new phone and still decrypt the video. That means the decryption key is somewhere in the backup.

Specifically, the decryption key is in the file named 7c7fba66680ef796b916b067077cc246adacf01d in the backup, in the table named ZWAMEDIAITEM, as the first protobuf field in the column named ZMEDIAKEY. These details are explained below.

WhatsApp end-to-end encryption of video

Let’s discuss how videos are transmitted using text messages.
We’ll start with SMS, the old messaging system built into the phone system that predates modern apps. It can only send short text messages of a few hundred bytes at a time. These messages are too small to hold a complete video many megabytes in size. They are sent through the phone system itself, not via the Internet.
When you send a video via SMS what happens is that the video is uploaded to the phone company’s servers via HTTP. Then, a text message is sent with a URL link to the video. When the recipient gets the message, their phone downloads the video from the URL. The text messages going through the phone system just contain the URL, an Internet connection is used to transfer the video.
This happens transparently to the user. The user just sees the video and not the URL. They’ll only notice a difference when using ancient 2G mobile phones that can get the SMS messages but which can’t actually connect to the Internet.
A similar thing happens with WhatsApp, only with encryption added.

The sender first encrypts the video, with a randomly generated key, before uploading via HTTP to WhatsApp’s servers. This means that WhatsApp can’t decrypt the files on their servers.

The sender then sends a message containing the URL and the decryption key to the recipient. This message is encrypted end-to-end, so again, WhatsApp itself cannot decrypt the contents of the message.

The recipient downloads the video from WhatsApp’s server, then decrypts it with the encryption key.

Here’s an example. A friend sent me a video via WhatsApp:

All the messages are sent using end-to-end encryption for this session. As described above, the video itself is not sent as a message, only the URL and a key. These are:
mediakey = TKgNZsaEAvtTzNEgfDqd5UAdmnBNUcJtN7mxMKunAPw=
These are the real values from the above exchange. You can click on the URL and download the encrypted file to your own computer. The file is 22,161,850 bytes (22-megabytes) in size. You can then decrypt it using the above key, using the code shown below. I can’t stress this enough: you can replicate everything I’m doing in this blogpost, to do the things the original forensics investigators hired by Bezos could not.

iPhone backups and file extraction

The forensics report in the Bezos story mentions lots of fancy, expensive tools available only to law enforcement, like Cellebrite. However, none of these appear necessary to produce their results. It appears you can get the same results at home using freely available tools.
There are two ways of grabbing all the files from an iPhone. One way is just to do a standard backup of the phone, to iCloud or to a desktop/laptop computer. A better way is to jailbreak the phone and get a complete image of the internal drive. You can do this on an iPhone X (like Bezos’s phone) using the ‘checkm8’ jailbreak. It’s a little complicated, but well within the abilities of techies. A backup gets only the essential files needed to restore the phone, but a jailbreak gets everything.
In this case, it appears the investigators only got a backup of the phone. For the purposes of decrypting WhatsApp files, it’s enough. As mentioned above, the backup needs these keys in order to properly restore a phone.

You can do this using Apple’s own iTunes program on Windows or macOS. This copies everything off the iPhone onto your computer. The intended purpose is so that if you break your phone, lose it, or upgrade to the latest model, you can easily restore from this backup. However, we are going to use this backup for forensics instead (we have no intention of restoring a phone from this backup).

So now that you’ve copied all the files to your computer, where are they, what are they, and what can you do with them?
Here’s the location of the files. There are two different locations for Windows, depending upon whether you installed iTunes from Apple or Microsoft.
  • macOS: /Users/username/Library/Application Support/MobileSync/Backup
  • Windows: /Users/username/AppData/Roaming/Apple Computer/MobileSync/Backup
  • Windows: /Users/username/Apple/MobileSync/Backup
The backup for a phone is stored using the unique ID of the phone, the UDID:
Inside the backup directory, Apple doesn’t use the original filenames on the phone. Instead, it stores them using the SHA1 hash of the original filename. The backup directory has 256 subdirectories named 00, 01, 02, …, ff corresponding to the first byte of the hash, each directory containing the corresponding files.

The file we are after is WhatsApp’s ChatStorage.sqlite file, whose full pathname on the iPhone hashes to “7c7fba66680ef796b916b067077cc246adacf01d“.
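
If you want to compute that hash yourself, the recipe is the SHA1 of the app’s backup “domain” string joined to the relative path with a dash. Here’s a sketch; the exact domain string is from memory, so treat it as an assumption and check it against the backup’s Manifest.db if the output doesn’t match:

import hashlib

domain = "AppDomainGroup-group.net.whatsapp.WhatsApp.shared"   # assumed domain string
path = "ChatStorage.sqlite"

name = hashlib.sha1(f"{domain}-{path}".encode()).hexdigest()
print(name)        # should print 7c7fba66680ef796b916b067077cc246adacf01d
print(name[:2])    # the two-character subdirectory the file lives under
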
On macOS, the Backup directory is protected. You have to go into the Security and Privacy settings to give the Terminal app “Full Disk Access” permissions. Then, copy this file to some other directory (like ~) where other apps can get at it.
Note that in the screenshot above, I also gave “iPhone Backup Extractor” permissions. This program provides a GUI that gives files their original names (like “ChatStorage.sqlite”) instead of hashes 7c7fba666… It also has a bunch of built-in logic for extracting things like photos and text messages.
The point of this section is to show that getting these files is simply a matter of copying off your phone and knowing which file to look for.

Working with WhatsApp chat log

In the previous section, I describe how to backup the iPhone, and then retrieve the file ChatStorage.sqlite from that backup. This file contains all your chat messages sent and received on your iPhone. In this section, I describe how to read that file.
This file is an SQL database in standard “sqlite” format. This is a popular open-source project for embedding SQL databases within apps, and it’s used everywhere. This means that you can use hundreds of GUIs, command-line tools, and programming languages to read this file.
I use “sqlitebrowser“, which runs as a GUI on Windows, macOS, and Linux. Below is a screenshot. As you can see, the filename is the file we copied in the step above, the hash of the original name. I then click on “Browse Data” and select the table ZWAMEDIAITEM. I see a list of those URLs in the column ZMEDIAURL, and the corresponding decryption keys in the column ZMEDIAKEY.
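
If you’d rather skip the GUI, Python’s built-in sqlite3 module can pull the same columns straight from the copied file. A minimal sketch, using the table and column names described above:

import sqlite3

# The copied backup file -- the hashed name of ChatStorage.sqlite.
db = sqlite3.connect("7c7fba66680ef796b916b067077cc246adacf01d")

rows = db.execute(
    "SELECT ZMEDIAURL, ZMEDIAKEY FROM ZWAMEDIAITEM WHERE ZMEDIAURL IS NOT NULL")
for url, keyblob in rows:
    # ZMEDIAKEY is a binary blob (protobuf); print it as hex for now.
    print(url, keyblob.hex() if keyblob else None)

db.close()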

The media keys are “blobs” — “binary large objects”. If I click on one of those blobs I see the following as the mediakey:

This binary data is in a format called protobuf. The byte 0x0a means the first field is a variable length string. The next byte 0x20 means the string is 32-bytes long. The next 32-bytes is our encryption key, which I’ve highlighted. The next field (0x12 0x20) is a hash of the file. There are two more fields at the end, but I don’t understand what they are.
So in hex, our encryption key is:
4ca80d66c68402fb53ccd1207c3a9de5401d9a704d51c26d37b9b130aba700fc
Or if encoded in BASE64:
TKgNZsaEAvtTzNEgfDqd5UAdmnBNUcJtN7mxMKunAPw=
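
As a sanity check, you can convert between the two forms with a couple of lines of Python:

import base64, binascii

hexkey = "4ca80d66c68402fb53ccd1207c3a9de5401d9a704d51c26d37b9b130aba700fc"
key = binascii.unhexlify(hexkey)

print(len(key))                          # 32
print(base64.b64encode(key).decode())    # TKgNZsaEAvtTzNEgfDqd5UAdmnBNUcJtN7mxMKunAPw=
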
We now have the mediaurl and mediakey mentioned above. All we need to do is download the file and decrypt it.

How to decrypt a WhatsApp media file

Now we come to the meat of this blogpost: given a URL and a key, how do we decrypt it? The answer is “unsurprising crypto”. It’s one of the most important principles of cryptography that whatever you do should be boring and normal, as is the case here. If the crypto is surprising and interesting, it’s probably wrong.

Thus, the only question is which of the many standard ways WhatsApp chose.
Firstly, they chose AES-256, which is the most popular choice for such things these days. Its key is 256 bits, or 32 bytes. AES is a “block cipher”, which means it encrypts a block at a time. The block size is 16 bytes. When the final block of data is less than 16 bytes, it needs to be padded out to the full length.

But that’s not complete. In modern times we’ve come to realize that simple encryption like this is not enough. A good demonstration of this is the famous “ECB penguin” [1] [2] [3]. If two 16-byte blocks in the input have the same cleartext data, they’ll have the same encrypted data. This is bad, as it allows much to be deduced/reverse-engineered from the encrypted contents even if those contents can’t be decrypted.
Therefore, WhatsApp needs not only an encryption algorithm but also a mode to solve this problem. They chose CBC or “cipher block chaining”, which as the name implies, chains all the blocks together. This is also a common solution.
CBC mode solves the ECB penguin problem of two blocks encrypting the same way, but it still has the problem of two files encrypting the same way, when the first part of the files are the same. Everything up to the first difference will encrypt the same, after which they will be completely different.
This is fixed by adding what’s called an initialization vector or nonce to the start of the file, some random data that’s different for each file. This guarantees that even if you encrypt the same file twice with the same key, the encrypted data will still be completely different, unrelated. The IV/nonce is stripped out when the file is decrypted.
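
Both effects are easy to demonstrate with a few lines of Python using the pycryptodome package. This is an illustration of the general problem, not WhatsApp’s exact construction:

import os
from Crypto.Cipher import AES    # pip install pycryptodome

key = os.urandom(32)
block = b"ATTACK AT DAWN!!"      # exactly one 16-byte block
plaintext = block * 2            # the same block, twice

# ECB: identical plaintext blocks produce identical ciphertext blocks.
ecb = AES.new(key, AES.MODE_ECB).encrypt(plaintext)
print(ecb[:16] == ecb[16:])      # True -- the "ECB penguin" problem

# CBC with a random IV: the same data encrypted twice looks completely different.
c1 = AES.new(key, AES.MODE_CBC, iv=os.urandom(16)).encrypt(plaintext)
c2 = AES.new(key, AES.MODE_CBC, iv=os.urandom(16)).encrypt(plaintext)
print(c1 == c2)                  # False
print(c1[:16] == c1[16:])        # False -- chaining breaks the repetition
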
Finally, there is the problem that the encrypted file may be corrupted in transit — accidentally or maliciously. You need to check this with a hash or message authentication code (aka MAC). In the case of WhatsApp, this will be the last 10 bytes of the encrypted data, which we’ll have to strip off. This MAC is generated using a different key than the AES key. In other words, we need two keys: one to encrypt the file, and a second to verify that the contents haven’t been changed.
This explains why there was a 14-byte difference between the encrypted video and the unencrypted video. The encrypted data needed 4 bytes of padding plus 10 bytes for the MAC appended at the end.

The code

Here is the code that implements all the above stuff:
At the top of the file I’ve hard-coded the values for the mediaurl and mediakey to the ones I found above in my iPhone backup.
The mediakey is only 32-bytes, but we need more. We need 32-bytes for the AES-256 key, another 16-bytes for the initialization vector, and 32-bytes for the message authentication key.
This common problem is solved by using a special pseudo-randomization function to expand a small amount of data into a larger amount of data, in this case from 32 bytes to 112 bytes. The standard WhatsApp chose is the “HMAC-based Key Derivation Function” (HKDF). This is expressed in my code as the following, where I expand the key into the IV, cipherkey, and mackey:
mediaKeyExpanded=HKDF(base64.b64decode(mediaK),112,salt)
iv=mediaKeyExpanded[:16]
cipherKey= mediaKeyExpanded[16:48]
macKey=mediaKeyExpanded[48:80]
Then, I download the file from the URL. I have to strip the last 10 bytes from the file, which are the message authentication code.
mediaData= urllib2.urlopen(mediaurl).read()
file= mediaData[:-10]
mac= mediaData[-10:]
Then using the cipherkey from the first step, I decrypt the file. I have to strip the padding at the end of the file.
decryptor = AES.new(cipherKey, AES.MODE_CBC, iv)
imgdata=AESUnpad(decryptor.decrypt(file))
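
Putting the fragments together, here is a self-contained reconstruction of the whole program in Python 3 (my reconstruction, not the original screenshot). It assumes the pycryptodome package, a placeholder URL where you’d substitute the mediaurl from your own backup, and the HKDF “info” string for video files (“WhatsApp Video Keys”) that third-party decryption tools document; treat that string as an assumption:

import base64
import hashlib
import hmac
import urllib.request

from Crypto.Cipher import AES              # pip install pycryptodome
from Crypto.Hash import SHA256
from Crypto.Protocol.KDF import HKDF

mediaurl = "https://mmg.whatsapp.net/PLACEHOLDER.enc"       # substitute your ZMEDIAURL
mediakey = "TKgNZsaEAvtTzNEgfDqd5UAdmnBNUcJtN7mxMKunAPw="   # substitute your ZMEDIAKEY

# Expand the 32-byte media key into 112 bytes: IV, AES key, MAC key, and a reference key.
expanded = HKDF(base64.b64decode(mediakey), 112, salt=b"\x00" * 32,
                hashmod=SHA256, context=b"WhatsApp Video Keys")
iv, cipherkey, mackey = expanded[:16], expanded[16:48], expanded[48:80]

# Download the encrypted blob; the last 10 bytes are a truncated HMAC-SHA256.
blob = urllib.request.urlopen(mediaurl).read()
ciphertext, mac = blob[:-10], blob[-10:]

# Verify the MAC before decrypting (a wrong key or corrupted download fails here).
expected = hmac.new(mackey, iv + ciphertext, hashlib.sha256).digest()[:10]
assert hmac.compare_digest(mac, expected), "MAC mismatch"

# AES-256-CBC decrypt, then strip the PKCS#7 padding from the final block.
plaintext = AES.new(cipherkey, AES.MODE_CBC, iv=iv).decrypt(ciphertext)
plaintext = plaintext[:-plaintext[-1]]

with open("decrypted.mp4", "wb") as f:
    f.write(plaintext)
print("wrote", len(plaintext), "bytes to decrypted.mp4")

If the MAC check fails, the usual culprits are a stale URL (WhatsApp eventually removes media from its servers) or the wrong info string (third-party tools list separate strings for images, audio, and documents).
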
To download and decrypt the video, simply run the program.
I’m not going to link to the video myself. If you want to know what it contains, you are going to have to run the program yourself.

Remember that this example is a video a friend sent to me, and not the original video sent by MBS to Bezos. But the same principle applies. Simply look in that file in the backup, extract the URL and mediakey, insert into this program, and you’ll get that file decrypted.

Conclusion

The report from FTI doesn’t find evidence. Instead, it finds the unknown. It can’t decrypt the .enc file from WhatsApp. It therefore concludes that the file must contain some sort of evil malware hidden behind that encryption — encryption which they can’t break.

But this is nonsense. They can easily decrypt the file, and prove conclusively whether it contains malware or exploits.

They are reluctant to do this because then their entire report would fall apart. Their conclusion is based upon Bezos’s phone acting strange after receiving that video. If that video is decrypted and shown not to contain a hack of some sort, then the rest of the reasoning is invalid. Even if they find other evidence that Bezos’s phone was hacked, there would no longer be anything linking it to the Saudis.


So that tweet was misunderstood

Post Syndicated from Robert Graham original https://blog.erratasec.com/2019/12/when-tweets-are-taken-out-of-context.html

I’m currently experiencing the toxic hell that is a misunderstood tweet going viral. It’s a property of social media. The more they can deliberately misunderstand you, the more they can justify the toxicity of their response. Unfortunately, I had to delete it in order to stop all the toxic crud and threats of violence.

The context is how politicians distort everything. It’s like whenever they talk about sea level rise, it’s always about some city like Miami or New Orleans that is sinking into the ocean already, even without global warming’s help. Pointing this out isn’t a denial of global warming, it’s pointing out how we can’t talk about the issue without exaggeration. Mankind’s carbon emissions are indeed causing sea level to rise, but we should be talking about how this affects average cities, not dramatizing the issue with the worst cases.

The same it true of health care. It’s a flawed system that needs change. But we don’t discuss the people making the best of all bad choices. Instead, we cherry pick those who made the worst possible choice, and then blame the entire bad outcome on the system.

My tweet is in response to this Elizabeth Warren reference to a story where somebody chose the worst of several bad choices:


My tweet is widely misunderstood as saying “here’s a good alternative”, when I meant “here’s a less bad alternative”. Maybe I was wrong and it’s not “less bad”, but nobody has responded that way. All the toxic spew on Twitter has been based on their interpretation that I was asserting it was “good”.

And the reason I chose this particular response is because I thought it was a Democrat talking point. As Bernie Sanders (a 2020 presidential candidate) puts it:

“The original insulin patent expired 75 years ago. Instead of falling prices, as one might expect after decades of competition, three drugmakers who make different versions of insulin have continuously raised prices on this life-saving medication.”

This is called “evergreening”, as described in articles like this one that claim insulin makers have been making needless small improvements to keep their products patent-protected, so that they don’t have to compete against generics whose patents have expired.

It’s Democrats like Bernie who claim expensive insulin is little different than cheaper insulin, not me. If you disagree, go complain to him, not me.

Bernie is wrong, by the way. The more expensive “insulin analogs” result in dramatically improved blood sugar control for Type 1 diabetics. The results are life changing, especially when combined with glucose monitors and insulin pumps. Drug companies deserve to recoup the billions spent on these advances. My original point is still true that “cheap insulin” is better than “no insulin”, but it’s also true that it’s far worse than modern, more expensive insulin.

Anyway, I wasn’t really focused on that part of the argument but the other part, how list prices are an exaggeration. They are a fiction that nobody needs to pay, even those without insurance. They aren’t the result of price gouging by drug manufacturers, as Elizabeth Warren claims. But politicians like Warren continue to fixate on list prices even when they know they are inaccurate.

The culprit for high list prices isn’t the drug makers, but middlemen in the supply chain known as “pharmacy benefits managers” or “PBMs”. Serious politicians focus on PBMs and getting more transparency in the system, populist presidential candidates blame “Big Pharma”.

PBMs negotiate prices between insurers, pharmacies, and drug makers. Their incentive is to maximize the rebates coming back from drug manufacturers. As prices go up, so do rebates, leaving the actual price people pay, and the actual price drug makers earn, unchanged. You can see this in the drug makers’ SEC profit/loss filings. If drug makers are “price gouging”, it’s not showing up on their bottom line.

It’s PBMs that have the market power. The largest PBMs are bigger than the largest drug manufacturers, as the Wikipedia article explains. They are the ones with the most influence on prices.

PBM’s primary customer is insurance companies, but they’ll happily do business with the uninsured. Free drug discount cards are widely available. There’s also websites like GoodRX.com that do the same thing. You don’t need to pay them money, or even sign up with them. Simply go to the site, search for that expensive insulin you need, and print out a free coupon that gives you 50% to 80% off at your local pharmacy.

The story cited by Elizabeth Warren claims the drug in question cost $275, but according to GoodRX, it can be gotten for $68.

This coupon is good for buying lispro at Walgreens in Georgia, maybe elsewhere

Mentioning PBMs is really weird. People haven’t heard of them, don’t understand them, so when you mention them, people don’t hear you. They continue as if you’ve said nothing at all. Yet, they are the most important part of the debate over high drug prices in America.

The point wasn’t to argue drug policy. That’s the underlying misunderstanding here, that I’m arguing either a Democrat or Republican side of the health debate. Instead, I’m arguing against both Republicans and Democrats. I have little opinion on the issue other than I’d like to emulate well-run countries like Singapore or Switzerland. I’m simply pointing out that whenever I investigate politician’s statements, I find inaccuracies, exaggerations, and deliberate deceptions.

Maybe I’m wrong and Warren’s tweet wasn’t exaggerated, but that still doesn’t justify the toxic spew.

What’s interesting about this is how those who most decry toxic behavior on Twitter were among the most toxic in their response. Toxicity isn’t a property of what you do, but of which side you are on when you do it. Threats of violence are only bad when targeting “good” people, not when targeting bad people like me.

This is finally the year of the ARM server

Post Syndicated from Robert Graham original https://blog.erratasec.com/2019/12/this-is-finally-year-of-arm-server.html

“RISC” was an important architecture from the 1980s, when CPUs had fewer than 100,000 transistors. By simplifying the instruction set, designers freed up transistors for more registers and better pipelining. It meant executing more instructions, but this was more than made up for by executing them faster.

But once CPUs exceeded a million transistors around 1995, they moved to out-of-order (OoO), superscalar architectures. OoO supplants RISC by decoupling the front-end instruction set from the back-end execution. A “reduced instruction set” no longer matters; the back-end architecture differs little between Intel and competing RISC chips like ARM. Yet people have remained fixated on the instruction set. The reason is simply politics. Intel has been the dominant instruction set for the computers we use as servers, desktops, and laptops. Many instinctively resist whoever dominates. In addition, colleges indoctrinate students on the superiority of RISC. Much college computer science instruction is decades out of date.

For 10 years, the ignorant press has been championing the cause of ARM’s RISC processors in servers. The refrain has always been that RISC has some inherent power efficiency advantage, and that ARM processors with natural power efficiency from the mobile world will be more power efficient for the data center.

None of this is true. There are plenty of RISC alternatives to Intel, like SPARC, POWER, and MIPS, and none of them ended up having a power efficiency advantage.

Mobile chips aren’t actually power efficient. Yes, they consume less power, but because they are slower. ARM’s mobile chips have roughly the same computations-per-watt as Intel chips. When you scale them up to the same amount of computations as Intel’s server chips, they end up consuming just as much power.

People are essentially innumerate. They can’t do this math. The only factor they know is that ARM chips consume less power. They can’t factor into the equation that they are also doing fewer computations.
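
To make the point concrete, here’s the arithmetic with made-up round numbers (purely illustrative, not benchmarks of any real chip):

# Hypothetical numbers, purely to illustrate computations-per-watt.
mobile_ops_per_sec, mobile_watts = 1.0e10, 5.0     # a slower, low-power core
server_ops_per_sec, server_watts = 1.0e11, 50.0    # a 10x faster server core

print(mobile_ops_per_sec / mobile_watts)   # 2e9 ops per watt
print(server_ops_per_sec / server_watts)   # 2e9 ops per watt -- identical
print(10 * mobile_watts)                   # 50.0 watts to match the server's throughput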

There have been three attempts by chip makers to produce server chips to compete against Intel. The first attempt was the “flock of chickens” approach. Instead of one beefy OoO core, you make a chip with a bunch of wimpy traditional RISC cores.

That’s not a bad design for highly-parallel, large-memory workloads. Such workloads spread themselves efficiently across many CPUs, and spend a lot of time halted, waiting for data to be returned from memory.

But such chips didn’t succeed in the market. The basic reason was that interconnecting all the cores introduced so much complexity and power consumption that it wasn’t worth the effort.

The second attempt was multi-threaded chips. Intel’s chips support two threads per core, so that when one thread halts waiting for memory, the other thread can continue processing what’s already stored in cache and in registers. It’s a cheap way for processors to increase effective speed while adding few additional transistors to the chip. But it has decreasing marginal returns, which is why Intel only supports two threads. Vendors created chips with as many as 8 threads per core. Again, they were chasing the highly parallel workloads that waited on memory. Only with multithreaded chips, they could avoid all that interconnect nastiness.

This still didn’t work. The chips were quite good, but it turns out that these workloads are only a small portion of the market.

Finally, chip makers decided to compete head-to-head with Intel by creating server chips optimized for the same workloads as Intel, with fast single-threaded performance. A good example was Qualcomm, who created a server chip that CloudFlare promised to use. They announced this to much fanfare, then abandoned it a few months later as nobody adopted it.

The reason was simply that when you scaled to Intel-like performance, you have Intel-like liabilities. Your only customers are the innumerate who can’t do math, who believe like emperors that their clothes are made from the finest of fabrics. Techies who do the math won’t buy the chip, because any advantage is marginal. Moreover, it’s a risk. If they invest heavily in the platform, how do they know that it’ll continue to exist and keep up with Intel a year from now, two years, ten years? Even if for their workloads they can eke out 10% benefit today, it’s just not worth the trouble when it gets abandoned two years later.

Thus, ARM server processors can be summarized by this: the performance and power efficiencies aren’t there, and without them, there’s no way the market will accept them as competing chips to Intel.

This brings us to chips like Graviton2, and similar efforts at other companies like Apple and Microsoft. I’m pretty sure it is going to succeed.

The reason is the market, rather than the technology.

The old market was this: chip makers (Intel, AMD, etc.) sold to box makers (Dell, HP, etc.) who sold to Internet companies (Amazon, Rackspace, etc.).

However, this market has been obsolete for a while. The leading Internet companies long ago abandoned the box vendors and started making their own boxes, based on Intel chips.

Making their own chips, making the entire computer from the ground up to their specs, is the next logical evolution.

This has been going on for some time; we just didn't notice. Almost all of the largest tech companies have their own custom CPUs. Apple has a custom ARM chip in their iPhone. Samsung makes custom ARM chips for their phones. IBM has POWER and mainframe chips. Oracle has (or had) SPARC. Qualcomm makes custom ARM chips. And so on.

In the past, having your own CPU meant having your own design, your own instruction set, your own support infrastructure (like compilers), and your own fabs for making such chips. This is no longer true. You get CPU designs from ARM, then have a fab like TSMC manufacture the chip. Since it’s ARM, you get for free all the rest of the support infrastructure.

Amazon’s Graviton1 chip was the same CPU core (ARM Cortex A72) as found in the Raspberry Pi 4. Their second generation Graviton2 chip has the same CPU core (ARM Cortex A76) as found in Microsoft’s latest Windows Surface notebook computer.
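
If you're curious which core a given ARM Linux box actually has, you can read the part number out of /proc/cpuinfo. The sketch below assumes the part numbers I recall from ARM's documentation (0xd08 for the Cortex-A72, 0xd0b for the Cortex-A76), so verify them before relying on it:

```
# Minimal sketch: identify the ARM core on a Linux machine from /proc/cpuinfo.
# The part numbers are assumptions from memory of ARM's docs -- verify them.
KNOWN_PARTS = {
    "0xd08": "Cortex-A72",
    "0xd0b": "Cortex-A76",
}

def cpu_part():
    with open("/proc/cpuinfo") as f:
        for line in f:
            if line.startswith("CPU part"):
                return line.split(":")[1].strip()
    return None

part = cpu_part()
print(part, KNOWN_PARTS.get(part, "some other core"))
```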

Amazon doesn't care about the instruction set, or whether a chip is RISC. It cares about the rest of the features of the chip. For example, their chips support encrypted memory, a feature that you might want in a cloud environment that hosts content from many different customers.

Recently, Sony and Microsoft announced their next-gen consoles. Like their previous generation, these are based on custom AMD designs. Gaming consoles have long been the forerunners of this new market: they ship in high enough volumes that they can get a custom design for their chip. It's just that Amazon, through its cloud instances, is now at sufficient scale that it can sell as many instances as there are game consoles.

The upshot is that custom chips are becoming less and less a barrier, just like custom boxes became less of a barrier a decade ago. More and more often, the world’s top tech companies will have their own chip. Sometimes, this will be in partnership with AMD with an x86 chip. Most of the time, it’ll be the latest ARM design, manufactured on TSMC or Samsung fabs. IBM will still have POWER and mainframe chips for their legacy markets. Sometimes you’ll have small microcontroller designs, like Western Digital’s RISC-V chips. Intel’s chips are still very good, so their market isn’t disappearing. However, the market for companies like Dell and HP is clearly a legacy market, to be thought of in the same class as IBM’s still big mainframe market.

CrowdStrike-Ukraine Explained

Post Syndicated from Robert Graham original https://blog.erratasec.com/2019/09/crowdstrike-ukraine-explained.html

Trump’s conversation with the President of Ukraine mentions “CrowdStrike”. I thought I’d explain this.

What was said?

This is the text from the conversation in question:

“I would like you to find out what happened with this whole situation with Ukraine, they say Crowdstrike… I guess you have one of your wealthy people… The server, they say Ukraine has it.”

Personally, I occasionally interrupt myself while speaking, so I’m not sure I’d criticize Trump here for his incoherence. But at the same time, we aren’t quite sure what was meant. It’s only meaningful in the greater context. Trump has talked before about CrowdStrike’s investigation being wrong, a rich Ukrainian owning CrowdStrike, and a “server”. He’s talked a lot about these topics before.


Who is CrowdStrike?

They are a cybersecurity firm that, among other things, investigates hacker attacks. If you’ve been hacked by a nation state, then CrowdStrike is the sort of firm you’d hire to come and investigate what happened, and help prevent it from happening again.

Why is CrowdStrike mentioned?

Because they were the lead investigators in the DNC hack who came to the conclusion that Russia was responsible. The pro-Trump crowd believes this conclusion is false. If the conclusion is false, then it must mean CrowdStrike is part of the anti-Trump conspiracy.

Trump has had a thing for CrowdStrike ever since their first investigation. It's intensified since the Mueller report, which solidified the ties between Trump and Russia, and between Russia and the DNC hack.

Personally, I'm always suspicious of such investigations. Politics, either grand (on this scale) or small (internal company politics), seems to drive investigations, creating firm conclusions based on flimsy evidence. But CrowdStrike has made public some pretty solid information, such as Bitly accounts used both in the DNC hacks and against other (known) targets of state-sponsored Russian hackers. Likewise, the Mueller report had good data on Bitcoin accounts. I'm sure if I looked at all the evidence, I'd have more doubts, but at the same time, of the politicized hacking incidents out there, this one seems to have the best (public) support for its conclusion.

What’s the conspiracy?

The basis of the conspiracy is that the DNC hack was actually an inside job. Some former intelligence officials led by Bill Binney claim they looked at some data and found that the files were copied "locally" instead of across the Internet, and that therefore it was an insider who did it and not a remote hacker.

I debunk the claim here, but the short explanation is: of course the files were copied "locally", the hacker was inside the network. In my long experience investigating hacker intrusions, and performing them myself, I know this is how it's normally done. I mention my own experience because I'm technical and know these things, in contrast with Bill Binney and those other intelligence officials who have no experience with such things. He sounds impressive because he's formerly of the NSA, but he was a mid-level manager in charge of budgets. Binney has never performed a data breach investigation and has never performed a pentest.

There are other parts to the conspiracy. In the middle of all this, a DNC staffer was murdered on the street, possibly in a mugging. Naturally this gets included as part of the conspiracy: this guy ("Seth Rich") must've been the "insider" in the attack, and must've been murdered to cover it up.

What about this “server”?

Conspiracy theorists have become obsessed with servers. The anti-Trump crowd believes in a conspiracy involving a server in Trump Tower secretly communicating with a bank in Russia (which I’ve debunked multiple times). There’s also Hillary’s email server.

In this case, there's not really any particular server; the claim is that the servers in general were mishandled. Conspiracy theorists postulate that a server must exist that explains the "Truth" of what really happened, and that it's being covered up.

The pro-Trump conspiracy holds that it's illegitimate that CrowdStrike investigated the DNC hack and not the FBI, that the FBI only got involved after CrowdStrike, and that it relied mostly on CrowdStrike's investigations. This is bogus. CrowdStrike has way more competency here than the FBI, and access to more data. It's not that the FBI is useless, but if you were the victim of a nation-state hack, you'd want CrowdStrike leading the investigation and not the FBI.

The pro-Trump crowd believes the FBI should’ve physically grabbed the servers. That’s not how such investigations work. If you are a criminal, yes they take your computer. If you are the victim, then no — it just victimizes you twice, once when the criminal steals your data, and a second time when the FBI steals your computer.

Instead, servers are "imaged": investigators take a copy of what was in memory and on the disk. There's nothing investigators want more than an image. Indeed, when they take computers from suspected criminals, it's a subtle form of punishment and abuse (like "civil asset forfeiture") rather than a specific need.
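
For what it's worth, the heart of "imaging" is just a bit-for-bit copy plus a cryptographic hash to prove the copy wasn't altered. Here's a minimal, illustrative sketch; the device and file names are hypothetical, and real forensic work uses write-blockers and dedicated tools rather than a script like this:

```
import hashlib

# Minimal sketch of disk imaging: bit-for-bit copy plus an integrity hash.
# Device/output paths are hypothetical; real imaging uses write-blockers.
def image_disk(device="/dev/sdb", output="evidence.img", chunk=1024 * 1024):
    sha = hashlib.sha256()
    with open(device, "rb") as src, open(output, "wb") as dst:
        while True:
            block = src.read(chunk)
            if not block:
                break
            dst.write(block)
            sha.update(block)
    return sha.hexdigest()

if __name__ == "__main__":
    print(image_disk())  # record this hash alongside the image
```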

What’s the Ukraine connection?

Because Ukraine is ground zero in the world's cyberwar.

Russia officially occupies one part of Ukraine (Crimea) and unofficially occupies the eastern part of the country, which has strong Russian-speaking minorities. By "unofficially" I mean that it's largely a private occupation, with Russian oligarchs buying weapons for separatists in those areas. There's a big debate about how much Putin and the Russian government are involved.

Part of this armed conflict is the cyber conflict. Russian hackers are thoroughly hacking Ukraine. The NotPetya virus/worm that caused billions of dollars of damage a couple of years ago is just one part of this conflict.

There is occasional reporting of this in the mainstream media, such as NotPetya or the time Russian hackers successfully hacked the Ukrainian power grid, but if anything, the whole conflict is underreported. Russia's cyberwar with Ukraine is the most important thing going on in our field at the moment.

As such, all major cybersecurity firms are involved in working with Ukraine. That includes CrowdStrike. In particular, they came out with a report about Russians hacking an Android app used to control Ukrainian artillery.

Like many such reports, it appears to have had errors and to have overstated its case, and CrowdStrike got lots of criticism. This feeds into the conspiracy theories.

In any case, this means that CrowdStrike (like every big company) has ties to Ukraine that’ll get pulled into any conspiracy theory.

Who is this rich Ukrainian, and do they own CrowdStrike?

CrowdStrike is a public company with a long list of American venture-capital investors, including Google's investment arm. Nobody believes there's a single rich person who owns it.

But of course conspiracy theorists believe in conspiracies, that it’s all a front, and that there’s somebody secretly behind the scenes controlling what’s really going on. I point this out because I’ve read numerous articles trying to debunk this by proving who really does own CrowdStrike. This misses the point: it’s not about who actually does own the company, but who is secretly behind the scenes.

Both CrowdStrike cofounder Dmitri Alperovitch and Ukrainian oligarch Victor Pinchuk are involved with a think tank known as the Atlantic Council. As far as I can tell, that's about as much of a tie as anybody can come up with for the conspiracy.

Who are “they” and “everyone”?

When Trump talks about such things, he frequently cites unknown persons: "they say" or "everyone here is talking about".

Trump surrounds himself with yes-men, judged by their loyalty rather than their competence. He’s not at the forefront of spouting conspiracy theories of his own, but he certainly rewards others for their conspiracy theories — as long as they are on his side.

I mention this because, for example, Binney’s evidence of the “insider” is wholly and obviously bogus, but there’s no fighting it. It’s a rock solid part of Trump’s narrative and nothing I can say will ever convince conspiracy theorists otherwise.

If Trump gets impeached, or if he loses the 2020 election, it’ll be because illegitimate forces are out to get him. And he knows this because “everyone” around him agrees with him. Because if you disagreed, you wouldn’t be around him.

That outright conspiracy theories go all the way to the top is extremely troublesome.

Conclusion

The tl;dr is that CrowdStrike investigated the DNC hacking incident, Trump disagrees with their conclusion that Russia was responsible, and thus has a thing for CrowdStrike. Everything Trump hates is involved in the Grand Conspiracy against him. It’s really no more complicated than that.

Thread on the OSI model is a lie

Post Syndicated from Robert Graham original https://blog.erratasec.com/2019/08/thread-on-osi-model-is-lie.html

I had a Twitter thread on the OSI model. Below, it's compiled into one blogpost.

Yea, I’ve got 3 hours to kill here in this airport lounge waiting for the next leg of my flight, so let’s discuss the “OSI Model”. There’s no such thing. What they taught you is a lie, and they knew it was a lie, and they didn’t care, because they are jerks.
You know what REALLY happened when the kid pointed out the king was wearing no clothes? The kid was punished. Nobody cared. And the king went on wearing the same thing, which everyone agreed was made from the finest of cloth.
The OSI Model was created by an international standards organization for an alternative internet that was too complicated to ever work, which never worked, and which never came to pass.
Sure, when they created the OSI Model, the Internet layered model already existed, so they made sure to include today’s Internet as part of their model. But the focus and intent of the OSI’s efforts was on dumb networking concepts that worked differently from the Internet.
OSI wanted a “connection-oriented network layer”, one that worked like the telephone system, where every switch in between the ends knows about the connection. The Internet is based on a “connectionless network layer”.
Likewise, the big standards bodies wanted a slightly different way of how Ethernet should work, with an LLC layer on top of Ethernet. That never came to pass. Well, an LLC layer exists in WiFi packets, but as a vestigial stub like an appendix.
So layers 1 through 4 are at least a semblance of reality, incorporating Ethernet and TCP/IP, but it's layers 5 and 6 where it goes off the rails. There's no Session or Presentation Layer in modern networks.
Sure, the concepts exist, but not as layers, and not with the functionality those layers envisioned.
For example, the Session Layer wanted “synchronization points” to synchronize transactions. Their model never worked, and how synchronization happens on the Internet is vastly more complex, with pretty much everybody designing their own method.
For example, how Google does Paxos synchronization at scale is a big reason for their success. It’s an incredibly tough problem for which it’s impractical to create a standard. In any case, you wouldn’t want it as a “layer”.
Sure, HTTP has “session cookies” and SSL has a “session” concept, but that doesn’t make these “session layer” protocols.
The OSI Presentation Layer (layer 6) is even stupider. It was based on dumb terminals connected to mainframes. It was laughably out-of-date before it was even created. Back then, terminals needed to negotiate control codes and character sets.
It’s not simply “dumb terminals”, it’s the fact most everyone was still stuck on the concept that computer networks were for human-computer communications, rather than computer-computer communications.
The OSI Model they teach is a retconned (retroactive continuity) one that just teaches the TCP/IP model and calls it the OSI Model, and does major handwaving over the non-existent Session and Presentation layers.
Intermission: As a side note to this thread, a reader asked why the "Secure Socket Layer" was renamed "Transport Layer Security" in the new version published in 1999, yet most people still refer to it as "SSL". Let me answer this. It's because Netscape invented SSL, and Microsoft hated Netscape, so it forced the standards body to change the name to TLS.

It's the same reason the French insist that "ISO" stands for "International Organization for Standardization". I don't put up with that nonsense, because I'm a troll.

So back to our story. I suppose the "OSI Model" could be justified if everyone taught the same thing, if it were all based on the same specification. But it isn't. Everyone makes up their own version, like where to put SSL. (The correct answer is the Transport Layer, btw.)
As for the question “in which layer does encryption belong?”, the correct answer is “all the layers”. And then some.
A reader objected that there was the whole GOSIP stack that implemented OSI and was taken seriously, until the DoD mandated TCP/IP. This is a myth. The DoD mandated GOSIP; it never mandated TCP/IP. I mean, they did mandate working systems. Since GOSIP never worked, and TCP/IP was the only working alternative, that sorta mandated it.

What happened is that shipping systems came with an OSI stack that could sometimes get communication working between two systems from the same vendor, but also with TCP/IP for when things had to work.
You still see OSI nonsense in industrial control systems (port 102 = OSI Transport Layer on top of TCP). That’s because regulatory bodies are stronger in those areas, able to force bad ideas on people no matter how unworkable.
Morons call for “realpolitik”, that we could solve problems if only government had the will to overcome objections. But a government with enough political power to overcome objections is how we get bad ideas like OSI.
My first time pentesting a powerplant was sniffing traffic, finding TCP/102 …. and within an hour having an ASN.1 buffer overflow in a critical protocol that crossed firewalls.
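
For the curious, spotting that OSI-over-TCP service doesn't require anything fancy: just see whether TCP port 102 answers. A minimal sketch follows (the address is a hypothetical example from the documentation range, and obviously don't probe gear you don't own):

```
import socket

# Quick-and-dirty check for the OSI-transport-over-TCP service on port 102.
# Host address is a hypothetical example (RFC 5737 documentation range).
def port_102_open(host, timeout=2.0):
    try:
        with socket.create_connection((host, 102), timeout=timeout):
            return True
    except OSError:
        return False

print(port_102_open("192.0.2.10"))
```
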
A reader noted that this rant seems incomplete without some mention of The Directory and its surviving vestiges like X.509 and LDAP. So let's discuss X.509 and LDAP, which both technically descend from the OSI standards bodies. DAP was a typical bloated, unimplementable OSI protocol, which is why we have "Lightweight DAP", or LDAP.

X.509 was a typical OSI standard written to serve the interests of big vendors instead of customers, who wanted to charge lots of money for certificates, hindering the adoption of encryption until LetsEncrypt put a stop to that nonsense TWO DECADES later.
You millennials have no concept how freakin’ long two decades is, and how that’s an unreasonable amount of time to not have free certificates for websites.
Here's what you post-millennials/Gen-Z/whatever need to do. When you are in class and they start teaching the OSI model, stand up and shout "THIS IS BS" and walk out of the room. Better yet, organize your classmates to follow you.
A reader asked: "I did not understand everything, so please correct me if I'm wrong. Is the OSI model a bunch of poorly separated responsibilities?" Here's what the OSI Model really is: it's the fact that the local network is independent from the Internet, and the Internet is independent of the applications that run on top of it. It's the fact you can swap WiFi for Ethernet, or IPv6 for IPv4, or Signal for WhatsApp.

When we eventually move to IPv7, we won't need to upgrade Ethernet switches. Ethernet and WiFi have no clue what you are running on top of them. Ancient alternatives like XNS or Novell or NetBEUI also work fine on the latest 802.11ax/WiFi 6 router you just bought.
There are a few more subdivisions. Layer 1 (Physical) gets the raw bits transmitted on the wire (or into air). Layer 2 (Link) gets packets across your local network to the next router. Layer 3 (IPv4/IPv6) gets packets from one end of the Internet to the other.
Layer 4 (TCP/UDP) gets packets from one of many apps running on your machine to one of many apps running on the server. It may also retransmit lost packets. Layer 7 consists of a bunch of different protocols that service those apps.
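
If you want to see that layering as actual bits, a packet-crafting library makes it concrete. Here's a minimal sketch using scapy (a third-party Python library, not something from the original thread), stacking the link, internet, transport, and application layers:

```
# Minimal sketch using scapy (pip install scapy) to show the layering.
from scapy.all import Ether, IP, TCP, Raw

pkt = (
    Ether()                                # link layer: local network framing
    / IP(dst="192.0.2.1")                  # internet layer: end-to-end addressing
    / TCP(dport=80, sport=40000)           # transport layer: which app on each machine
    / Raw(load=b"GET / HTTP/1.1\r\n\r\n")  # application: the protocol the apps speak
)
pkt.show()  # prints each layer separately; swap IP() for IPv6() (or Ether()
            # for a WiFi header) and the other layers don't care
```
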
No, the OSI Model doesn’t have its place. You can teach how layered networking works without teaching the OSI version. The OSI version messes it up rather than clarifying things.

Layers existed before the OSI Model. They didn't invent the idea. They coopted and changed the idea. When you redefine it back again, you only confuse students. They can pass your test, but not some other test like the CISSP, because the answers won't match. Because it's made up.