All posts by Robert Graham

Review: Dune (2021)

Post Syndicated from Robert Graham original https://blog.erratasec.com/2021/10/review-dune.html

One of the most important classic sci-fi stories is the book “Dune” by Frank Herbert. It was recently made into a movie. I thought I’d write a quick review.

The summary is this: just read the book. It’s a classic for a good reason, and you’ll be missing a lot by not reading it.

But the movie Dune (2021) is very good. The most important thing to know is to see it in IMAX. IMAX is a huge screen technology that partly wraps around the viewer, accompanied by huge speakers that overwhelm you with sound. If you watch it in some other format, what was visually stunning becomes merely very pretty.

This is Villeneuve’s trademark, which you can see in his other works, like his sequel to Blade Runner. The purpose is to marvel at the visuals in every scene. The storytelling is just enough to hold the visuals together. I mean, he also seems to do a good job with the storytelling, but it’s just not the reason to go see the movie. (I can’t tell — I’ve read the book, so I see the story differently than those of you who haven’t.)

Beyond the story and visuals, many of the actors’ performances were phenomenal. Javier Bardem’s “Stilgar” character steals his scenes. Stellan Skarsgård exudes evil. The two character actors playing the mentats were each perfect. I found the lead character (Timothée Chalamet) a bit annoying, but simply because that’s who the character is at this point in the story.

Villeneuve splits the book into two parts; this movie is only the first. This presents a problem, because up to this point in the story, the main character is merely responding to events, not yet the hero who drives them. It doesn’t fit into the traditional Hollywood accounting model. I really want to see the second film, even if the first part, released in the post-pandemic turmoil of the movie industry, doesn’t perform well at the box office.

In short, if you haven’t read the books, I’m not sure how well you’ll follow the storytelling. But the visuals (seen at IMAX scale) and the characters are so great that I’m pretty sure most people will enjoy the movie. And go see it in IMAX in order to get the second movie made!!

Fact check: that "forensics" of the Mesa image is crazy

Post Syndicated from Robert Graham original https://blog.erratasec.com/2021/10/fact-check-that-forensics-of-mesa-image.html

Tina Peters, the elections clerk from Mesa County (Colorado) went rogue, creating a “disk-image” of the election server, and posting that image to the public Internet. Conspiracy theorists have been analyzing the disk-image trying to find anomalies supporting their conspiracy-theories. A recent example is this “forensics” report. In this blogpost, I debunk that report.

I suppose calling somebody a “conspiracy theorist” is insulting, but there are three objective ways we can identify them as such.

The first is when they use the logic “everything we can’t explain is proof of the conspiracy“. In other words, since there’s no other rational explanation, the only remaining explanation is the conspiracy-theory. But there can be other possible explanations — just ones unknown to the person because they aren’t smart enough to understand them. We see that here: the person writing this report doesn’t understand some basic concepts, like “airgapped” networks.

This leads to the second way to recognize a conspiracy-theory: when it demands this one thing that’ll clear things up. Here, it’s demanding that a manual audit/recount of Mesa County be performed. But it won’t satisfy them. The Maricopa audit in neighboring Arizona, whose recount found no fraud, didn’t clear anything up — it just found more anomalies demanding more explanation. It’s like Obama’s birth certificate. The reason he ignored demands to show it was that first, there was no serious question (even if born in Kenya, he’d still be a natural-born citizen — just like how Cruz was born in Canada and McCain in Panama), and second, showing the birth certificate wouldn’t change anything at all, as they’d just claim it was fake. There is no possibility of showing a birth certificate that can be proven isn’t fake.

The third way to objectively identify a conspiracy theory is when they repeat objectively crazy things. In this case, they keep demanding that the 2020 election be “decertified”. That’s not a thing. There is no regulation or law under which that can happen. The most you can hope for is to use this information to prosecute the fraudster, prosecute the elections clerk who didn’t follow procedure, or convince legislators to change the rules for the next election. But there’s just no way to change the results of the last election even if widespread fraud is now proven.

The document makes 6 individual claims. Let’s debunk them one-by-one.


#1 Data Integrity Violation

The report tracks some logs on how some votes were counted. It concludes:

If the reasons behind these findings cannot be adequately explained, then the county’s election results are indeterminate and must be decertified.

This neatly demonstrates two conditions I cited above. The analyst can’t explain the anomaly not because something bad happened, but because they don’t understand how Dominion’s voting software works. This demand for an explanation is a common attribute of conspiracy theories — the ignorant keep finding things they don’t understand and demand somebody else explain them.

Secondly, there’s the claim that the election results must be “decertified”. It’s something that Trump and his supporters believe is a thing, that somehow the courts will overturn the past election and reinstate Trump. This isn’t a rational claim. It’s not how the courts, the law, or the Constitution work.


#2 Intentional purging of Log Files

This is the issue that convinced Tina Peters to go rogue, that the normal Dominion software update gets rid of all the old system-log files. She leaked two disk-images, before and after the update, to show the disappearance of system-logs. She believes this violates the law demanding the “election records” be preserved. She claims because of this, the election can’t be audited.

Again, we are in crazy territory where they claim things that aren’t true. System-logs aren’t considered election records by any law or regulation. Moreover, they can’t be used to “audit” an election.

Currently, no state/county anywhere treats system-logs as election records (since they can’t be used for “audits”). Maybe this should be different. Maybe you can create a lawsuit where a judge rules that in future elections they must be treated as election records. Maybe you can convince legislatures to pass laws saying system-logs must be preserved. It’s not crazy to say this should be different in the future, it’s just crazy to say that past system-logs were covered under the rules.

And if you did change the rules, the way to preserve them wouldn’t be to let them sit on the C: boot-drive until they eventually rot and disappear (which will eventually happen no matter what). Instead, the process to preserve them would be to copy them elsewhere. The way Dominion works is that all election records that need to be preserved are copied over to the D: data drive.
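
As a concept, “preserving” such logs is nothing more than copying them off the boot drive, something like this sketch (my own illustration; the destination path is made up and this is not Dominion’s actual code):

    import shutil
    from pathlib import Path

    # Sketch: archive Windows event logs from the C: boot drive to the D: data
    # drive, where they'd survive a rebuild of the boot drive.
    SRC = Path(r"C:\Windows\System32\winevt\Logs")
    DST = Path(r"D:\Archive\SystemLogs")   # hypothetical archive location

    DST.mkdir(parents=True, exist_ok=True)
    for log in SRC.glob("*.evtx"):
        shutil.copy2(log, DST / log.name)  # copy2 preserves timestamps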

Which means, by the way, that this entire forensics report is bogus. The Mesa disk image was only of the C: boot-drive, not of the D: data drive. Thus, it’s unable to say which records/logs were preserved or not. Everyone knows that system-logs probably weren’t, because they aren’t auditable election records, so you can still make the claim “system-logs weren’t preserved”. It’s just that you couldn’t make that claim based on a forensics of the C: boot-drive alone. Again, we are in the crazy-statements territory that identifies something as a conspiracy-theory: weird claims about how reality works.

System-logs cannot be used to audit the vote. That’s confusing the word “audit” with “forensics”. The word “audit” implies you are looking for a definitive result, like whether the vote count was correct, or whether all procedures were followed. Forensics of system-logs can’t tell you that. Instead, they can only lead to indeterminate results.

That’s what you see here. This “forensics” report cannot make any definitive statement based upon the logs. It can find plenty of anomalies, meaning things the forensics investigator can’t understand. But none of that is positive proof of anything. If a hacker had flipped votes on this system, it’s unlikely we would have seen evidence in the log.

#3 Evidence of network connection

The report claims the computer was connected to a network. Of course this is true — it’s not a problem. The network was the one shown in the diagram below:

Specifically, this Mesa image was of the machine labeled “EMS Server” in the above diagram. From my forensics of the network logs, I can see that there are other computers on this network:

  1. Four ICC workstations (named ICC01 through ICC04)
  2. Two Adjudication Workstations (named ADJCLIENT01 and ADJCLIENT03; I don’t know what happened to number 2).
  3. Two EMS Workstations (named EMSCLIENT01 and EMSCLIENT02).
  4. A printer, model Dell E310dw.
The word “airgapped” doesn’t mean the EMS Server is airgapped from any network, but that this entire little network is airgapped from everything else. The security of this network is physical: the fact that nobody can enter the room who isn’t authorized.

I did my own forensics on the Mesa image and could find none of the normal signs that the server accessed the Internet, and pretty good evidence that most of the time, it was unconnected (it gets mad when it can’t find the Internet and produces logs stating this). This doesn’t mean I proved conclusively that no Internet connection was ever made. It’s possible that somebody will find some new thing in that image that shows an Internet connection. It’s just that currently, there’s no reason to believe the “airgap” guarantee of security was violated.

The claimed evidence about the “Microsoft Report Server” is wrong.

#4 Lack of Software Updates

This is just stupid. The cybersecurity community does have this weird fetish demanding that every software update be applied immediately, but there are good reasons why they aren’t, and ways of mitigating the security risk when they can’t be applied.

Software updates sometimes break things. In sensitive environments where computers must be absolutely predictable, they aren’t applied. This includes live financial systems, medical equipment, and industrial control systems.

This also includes elections. It’s simply not acceptable to cancel or delay an election because a software update broke the computer.

This is why Dominion does what they call a “Trusted Build” process that wipes out the boot-drive (deleting system-logs). To update software, they build an entire new boot image with all the software in a tested, known state. They then apply that boot image to all the county machines, which replaces everything on the C: boot-drive with a new version of Windows and all the software. This leaves the D: data drive untouched, where the records are preserved.

If you didn’t do things this way, then sometimes elections would fail.

This is also why having an “airgapped” network is important. The voting machines aren’t going to have software updates regularly applied, so they need to be protected. Firewalls are another mitigation strategy.

#5 Existence of SQL Server Management Studio

This is just a normal part of having an SQL server installed.

Yes, in theory it would make it easy for somebody to change records in the database. But at the same time, such a thing is pretty easy even without SSMS installed. One way is a short command-line script.
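
For example, a few lines of script are enough (a hypothetical sketch; the server, database, credentials, and schema are all made up, and pyodbc is just one common driver):

    import pyodbc

    # A few lines of script can edit the database as easily as SSMS can.
    # Everything named here (server, database, credentials, schema) is invented.
    conn = pyodbc.connect(
        "DRIVER={ODBC Driver 17 for SQL Server};"
        "SERVER=EMSSERVER;DATABASE=ElectionDB;UID=admin;PWD=secret"
    )
    cur = conn.cursor()
    cur.execute("UPDATE Results SET VoteCount = VoteCount + 100"
                " WHERE CandidateId = ?", 42)
    conn.commit()
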
#6 Referential Integrity

This “referential integrity” is a reliability concern, not an anti-hacking measure. It just means hackers would need only an extra step if they wanted to delete or change records.
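
Here’s a toy demonstration of why it’s only one extra step, using SQLite (purely conceptual; this has nothing to do with Dominion’s actual schema):

    import sqlite3

    # Referential integrity only blocks a delete until the dependent rows are
    # removed first: one extra step for an attacker, not a defense.
    db = sqlite3.connect(":memory:")
    db.execute("PRAGMA foreign_keys = ON")
    db.execute("CREATE TABLE batches (id INTEGER PRIMARY KEY)")
    db.execute("CREATE TABLE ballots (id INTEGER PRIMARY KEY,"
               " batch_id INTEGER REFERENCES batches(id))")
    db.execute("INSERT INTO batches VALUES (1)")
    db.execute("INSERT INTO ballots VALUES (100, 1)")

    try:
        db.execute("DELETE FROM batches WHERE id = 1")    # blocked by the constraint
    except sqlite3.IntegrityError as e:
        print("blocked:", e)

    db.execute("DELETE FROM ballots WHERE batch_id = 1")  # the "extra step"
    db.execute("DELETE FROM batches WHERE id = 1")        # now succeeds
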
Conclusion

Evidence is something that the expert understands. It’s something they can show, explain, and defend against challengers.

This report contained none of that. It contained instead anomalies the writer couldn’t explain.

Note that this doesn’t mean they weren’t an expert. Obviously, they needed enough expertise to get as far as they did. It’s just a consequence of conspiracy-theories: when searching for proof of your conspiracy-theory when there is none, you end up going off into the weeds, past your area of expertise.

Give that forensics image to any expert, and they’ll find anomalies they can’t explain. That includes me; I’ve posted some of them to Twitter and had other experts explain them to me. The difference is that I attributed the lack of an explanation to my own ignorance, not a conspiracy.

At some point, we have to call out conspiracy-theories for what they are. This isn’t defending the integrity of elections. If it were, it’d be proposing solutions for future elections. Instead, it’s an attack on the integrity of elections, fighting the peaceful transfer of power with unfounded conspiracy-theory claims.

And we can say this objectively. As I stated above, there are three objective tests. These are:
  • Anomalies that can’t be explained are claimed to be evidence — when in fact they come from simple ignorance.
  • Demands that something needs explaining, when it really doesn’t, and which won’t satisfy them anyway.
  • Statements of a world view (like that the election can be “decertified” or that system-logs are “election records”) that nobody agrees with.

100 terabyte home NAS

Post Syndicated from Robert Graham original https://blog.erratasec.com/2021/10/100-terabyte-home-nas.html

So, as a nerd, let’s say you need 100 terabytes of home storage. What do you do?

My solution would be a commercial NAS RAID, like from Synology, QNAP, or Asustor. I’m a nerd, and I have set up my own Linux systems with RAID, but I’d rather get a commercial product. When a disk fails, and a disk will always eventually fail, then I want something that will loudly beep at me and make it easy to replace the drive and repair the RAID.

Some choices you have are:

  • vendor (Synology, QNAP, and Asustor are the vendors I know and trust the most)
  • number of bays (you want 8 to 12)
  • redundancy (you want at least 2 if not 3 disks)
  • filesystem (btrfs or ZFS) [not btrfs-raid builtin, but btrfs on top of RAID]
  • drives (NAS optimized between $20/tb and $30/tb)
  • networking (at least 2-gbps bonded, but box probably can’t use all of 10gbps)
  • backup (big external USB drives)

The products I link above all have at least 8 drive bays. When you google “NAS”, you’ll get a list of smaller products. You don’t want them. You want somewhere between 8 and 12 drives.

The reason is that you want two-drive redundancy like RAID6 or RAIDZ2, meaning two additional drives. Everyone tells you one-disk redundancy (like RAID5) is enough; they are wrong. It’s just legacy thinking, because it was sufficient in the past when drives were small. Disks are so big nowadays that you really need two-drive redundancy. If you have a 4-bay unit, then half the drives are used for redundancy. If you have a 12-bay unit, then only 2 of the 12 drives are used for redundancy.
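
The arithmetic is simple (the drive sizes below are just examples):

    def usable_tb(bays: int, drive_tb: int, parity: int = 2) -> int:
        """Usable capacity of a two-drive-redundancy array (RAID6/RAIDZ2)."""
        return (bays - parity) * drive_tb

    print(usable_tb(4, 18))    # 36 TB: half of a 4-bay unit is lost to redundancy
    print(usable_tb(12, 10))   # 100 TB: only 2 of 12 drives go to redundancy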

The next decision is the filesystem. There are only two choices, btrfs and ZFS. The reason is that they both support self-healing and snapshots. Note btrfs here means btrfs-on-RAID6, not btrfs-RAID, which is broken. In other words, btrfs contains its own RAID feature that you don’t want to use.

Over long periods of time, errors creep into the filesystem. You want to scrub the data occasionally. This means reading the entire filesystem, checksumming the files, and repairing them if there’s a problem. That requires a filesystem that checksums each block of data.
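
Conceptually, a scrub is just this loop (a sketch of the idea, not btrfs’s or ZFS’s actual code; “rebuild_from_redundancy” stands in for reconstructing a good copy from parity or a mirror):

    import zlib

    # Walk every block, verify its stored checksum, repair from redundancy on mismatch.
    def scrub(blocks, stored_checksums, rebuild_from_redundancy):
        for i, block in enumerate(blocks):
            if zlib.crc32(block) != stored_checksums[i]:
                blocks[i] = rebuild_from_redundancy(i)  # fetch a good copy
                print(f"block {i}: checksum mismatch, repaired")

    # Example: three blocks where block 1 has silently rotted.
    blocks = [b"good", b"bXd!", b"also"]
    sums = [zlib.crc32(b"good"), zlib.crc32(b"bad!"), zlib.crc32(b"also")]
    scrub(blocks, sums, lambda i: b"bad!")  # pretend parity holds the good copy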

Another thing you want is snapshots, to guard against things like ransomware. A snapshot preserves a point-in-time copy of your files: even if a workstation later attempts to change or delete a file, the old version is still held on the disk.

QNAP uses ZFS while others like Synology and Asustor use btrfs. I really don’t know which is better.

It’s cheaper to buy the NAS diskless, then add your own disk drives. If you can’t do this, then you’ll be helpless when a drive fails and needs to be replaced.

Drives cost between $20/tb and $30/tb right now. This recent article has a good buying guide. You probably want to get a NAS-optimized hard drive. You probably want to double-check that it’s CMR (“conventional” magnetic recording) instead of SMR (“shingled”) — SMR is bad. There are only three hard drive makers (Seagate, Western Digital, and Toshiba), so there’s not a big selection.

Working with such large data sets over 1-gbps is painful. These units allow 802.3ad link aggregation as well as faster Ethernet. Some have 10gbe built-in, others allow a PCIe adapter to be plugged in.

However, due to the overhead of spinning disks, you are unlikely to get 10gbps speeds. I mention this because 10gbps copper Ethernet sucks, so it’s not necessarily a buying criterion. You may prefer multigig/NBASE-T that only does 5gbps but with relaxed cabling requirements and lower power consumption.
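
Some rough arithmetic (my own back-of-the-envelope numbers, ignoring protocol overhead) shows what’s at stake:

    # Time to copy 10 TB (one drive's worth) at various link speeds.
    bits = 10e12 * 8   # 10 TB expressed in bits
    for gbps in (1, 2.5, 5, 10):
        hours = bits / (gbps * 1e9) / 3600
        print(f"{gbps:>4} Gbps: {hours:5.1f} hours")
    # 1 Gbps takes ~22 hours; 10 Gbps takes ~2.2 hours (if the disks can keep up)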

This means that your NAS decision is going to be made with your home networking decision. I use a couple of these multigig switches as something that doesn’t cost too much for home networking.

Even though RAID is pretty darn reliable, you still need a backup solution. The way I do this is with external USB hard drives. I schedule the NAS to back up to those drives automatically. As a home user, tapes aren’t an effective solution, so you are stuck with USB drives.

In the end, this means that your total storage costs, with the NAS server, the drives, and the backup drives, is going to cost you 3x the price of the raw storage. Spinning drives fail often. If you plan on keeping your data around for the next decade, there’s no way to do this without 3x the cost for storage.

I choose Synology because I have the most familiarity with the software, and its software gets the best reviews. But QNAP and Asustor also have great reputations. 

Note that I’ve made the assumption here that you’ll want “desktop NAS” solutions. There are also rackmount solutions available.

Check: that Republican audit of Maricopa

Post Syndicated from Robert Graham original https://blog.erratasec.com/2021/09/check-that-republican-audit-of-maricopa.html

Author: Robert Graham (@erratarob)

Later today (Friday, September 24, 2021), Republican auditors release their final report on what they found with elections in Maricopa county. Draft copies of the report have already circulated online. In this blogpost, I write up my comments on the cybersecurity portions of their draft.

https://arizonaagenda.substack.com/p/we-got-the-senate-audit-report

The three main problems are:

  • They misapply cybersecurity principles that are meaningful for normal networks, but which don’t really apply to the “air gapped” networks we see here.
  • They make some errors about technology, especially networking.
  • They are overstretching themselves to find dirt, claiming the things they don’t understand are evidence of something bad.

In the parts below, I pick apart individual pieces from that document to demonstrate these criticisms. I focus on section 7, the cybersecurity section, and ignore the other parts of the document, where others are more qualified than I to opine.

In short, when corrected, section 7 is nearly empty of any content.

7.5.2.1.1 Software and Patch Management, part 1

They claim Dominion is defective at one of the best-known cyber-security issues: applying patches.

It’s not true. The systems are “air gapped”, disconnected from the typical sort of threat that exploits unpatched systems. The primary security of the system is physical. Frequent patching isn’t expected.

This is a standard in other industries with hard reliability constraints, like industrial or medical. Patches in those systems can destabilize computers and kill people, so these industries are risk averse and resist applying them. They prefer to mitigate the threat in other ways, such as with firewalls and air gaps.

Yes, this approach is controversial. There are some in the cybersecurity community who use lack of patches as a bludgeon with which to bully any who don’t apply every patch immediately. But this is because patching is more a political issue than a technical one. In the real, non-political world we live in, most things don’t get immediately patched all the time.

7.5.2.1.1 Software and Patch Management, part 2

The auditors claim new software executables were applied to the system, despite the rules against new software being applied. This isn’t necessarily true.

There are many reasons why Windows may create new software executables even when no new software is added. One reason is “Features on Demand” (FOD); you’ll see new executables appear in C:\Windows\WinSxS for these. Another reason is the .NET runtime, which generates native x86 executables from bytecode; you’ll see these in the C:\Windows\assembly directory.

The auditors simply counted the number of new executables, with no indication of which category they fell into. Maybe they are right; maybe new software was installed or old software updated. It’s just that merely counting executable files doesn’t show an understanding of these differences.
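
A hypothetical triage script shows what a more careful analysis would look like (the directory hints follow the explanation above; the classification labels are my own):

    from pathlib import Path

    # Classify newly-appeared executables by location, since location hints at
    # why they exist. A bare count treats all of these the same.
    BENIGN_HINTS = {
        "WinSxS": "Features on Demand / Windows servicing",
        "assembly": ".NET native images generated from bytecode",
    }

    def classify(path: Path) -> str:
        for part in path.parts:
            if part in BENIGN_HINTS:
                return BENIGN_HINTS[part]
        return "unexplained; needs investigation"

    for exe in Path(r"C:\Windows").rglob("*.exe"):
        print(exe, "->", classify(exe))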

7.5.2.1.2 Log Management

The auditors claim that a central log management system should be used.

This obviously wouldn’t apply to “air gapped” systems, because it would need a connection to an external network.

Dominion already designates their EMSERVER as the central log repository for their little air gapped network. Important files from C: are copied to D:, a RAID10 drive. This is a perfectly adequate solution; adding yet another computer to their little network would be overkill and add as many security problems as it solved.

One could argue more Windows logs need to be preserved, but that would simply mean archiving them from the C: drive onto the D: drive, not connecting to the Internet to centrally log files.

7.5.2.1.3 Credential Management

Like the other sections, this claim is out of place given the airgapped nature of the network.

Dominion simply uses “role-based security” instead of normal user accounts. It’s a well-known technique, and considered very appropriate for this sort of environment.

The auditors claim account passwords must “be changed every 90 days”. This is a well-known fallacy in cybersecurity. It took years to get NIST to remove it from their recommendations. If CISA still has it in their recommendations for election systems, then CISA is wrong.

Ideally, accounts wouldn’t be created until they were needed. In practice, system administrators aren’t available (again, it’s an airgapped system, so no remote administration). Dominion’s alternative is to create the accounts ahead of time, such as “adjuser09”, waiting for the 9th person you hire who might use that account.

They are all given the same default password to start, like “Arizona2019!!!”. Some customers choose to change the default password, but obviously Maricopa did not. This is weak – but not a big deal, since the primary security is from controlling physical access.

7.5.2.1.4 Lack of Baseline for Host and Network Activity

They claim some sort of baselining should be done. This is absurd. Baselines are always problematic, but would be especially so in this case.

The theory of baselines is that a network’s traffic is somewhat predictable on a day-to-day basis. This obviously doesn’t apply to election systems, which are highly variable day-to-day, especially on election day.

Baselining is the sort of thing you do with a dedicated threat hunting team. It’s incredibly inappropriate for a small installation like this.

7.5.3.1.1 Network Related Data

The auditors asked for unreasonable access to network data, in the worst way possible, triggering the refusal to hand it over. They didn’t ask for reasonable data. They blame Maricopa County for the conflict, but it’s really themselves who are to blame.

A reasonable request would take the MAC addresses from the election machines and ask for any matching records Maricopa might have in their Splunk, DHCP, or ARP logs. Matches shouldn’t be found, but if they were, the auditors should then ask for flow data for the associated IP addresses.
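
To illustrate what such a request would look like in practice, here’s a hypothetical sketch (the MAC addresses, file name, and log format are all made up):

    import re

    # Search the county's exported DHCP/ARP/Splunk logs for the MAC addresses of
    # the election machines. A match would suggest the airgap was violated.
    ELECTION_MACS = {"00:0c:29:ab:cd:ef", "00:0c:29:12:34:56"}  # made-up addresses

    mac_re = re.compile(r"(?:[0-9a-f]{2}:){5}[0-9a-f]{2}", re.IGNORECASE)

    with open("county_dhcp_export.log") as logfile:             # hypothetical export
        for line in logfile:
            for mac in mac_re.findall(line):
                if mac.lower() in ELECTION_MACS:
                    print("possible airgap violation:", line.strip())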

They are correct in identifying this as a very important issue. Dominion security depends upon an airgap. If auditors find a network connection, it’s bad. It’s not catastrophic, and sometimes machines are disconnected from one network and attached to another at times other than the election. But this would very much be a useful part of a report – if only they had made a reasonable request that didn’t demand Maricopa spend their entire yearly budget to comply.

7.5.3.1.? Other Devices Connected to the Election Network

The auditors complain they weren’t given access to the router identified by 192.168.100.1.

It probably doesn’t exist.

Routers aren’t needed by devices that are on the same local Ethernet. They wouldn’t exist on a single-segment air gapped network. But typical operating-system configuration demands one be configured anyway, so it’s common to put in a dummy router address even if it’s unused.

If you see messages in the logs complaining that this configured router can’t be reached, it means the router wasn’t there.

The auditors are right in identifying this as an important issue. If there were such a router, then this would cast doubt whether the network was “airgapped”.

Note that if such a router did exist, it would almost certainly be a NAT. This would still offer some firewall protection, just not as strong as an air gap.

7.5.4 Anonymous Logins

They see something in the security logs they don’t understand, and blame Maricopa’s lack of network data (“the routers”) for their inability to explain it.

This is an extraordinarily inappropriate claim, based not on expert understanding of what they see in the logs, but on complete ignorance. There’s no reason to believe that getting access to Maricopa County network logs would explain what’s going on here.

This demonstrates they are on a fishing expedition, and that everything they see that they can’t explain is used as evidence of a conspiracy, either by Maricopa to withhold data, or of election fraud.

The Dominion suite of applications and services is oddly constructed and will produce anomalies. Comparing against a general Windows server not running Dominion’s suite is meaningless.

7.5.5 Dual Boot System Discovered

The auditors claim something about “dual-homed hosts” or “jump-boxes”. That’s not how these terms are normally used. These terms normally refer to a box with access to two separate networks, not a box with two separate operating systems.

This requires no nefarious explanation. It is commonly seen in corporate networks, either because somebody added a new drive to re-install the operating-system, or repurposed an old drive from another system as a data drive and forgot to wipe it. The BIOS points to the one they intend to boot from, ignoring the fact that the other can also boot.

There are endless non-nefarious explanations for what is seen here. It’s not even clear it’s a failure of their build process, which focuses on what’s on the boot drive, not on what’s on other drives in the system.

7.5.6 EMS Operating System Logs Not Preserved

It is true the EMS operating-system logs are not preserved (well, generally not preserved). By this I refer to the generic Windows logs, the same logs that your own Windows desktop keeps.

The auditors claim that this violates the law. This is false. The “election records” laws don’t cover the operating-system. The laws are instead intended to preserve the records of the election software running on top of the operating-system, not those of the operating-system itself.

This issue has long been known. You don’t need an auditor’s report to tell you that these logs aren’t generally preserved – everyone has known this for a long time, including those who certified Dominion.

The subtext of this claim is the continued argument by Republicans that the fact they can’t find evidence for 2020 election fraud is because key data is missing. That’s the argument of Tina Peters, the former clerk of a county in the neighboring state of Colorado, who claims their elections cannot be audited because they don’t have the Windows operating-system logs.

It’s not true. System logs are as likely to cause confusion as clarity, as they do above with the “anonymous logins” issue. They are unlikely to provide proof of votes being flipped in a hack. If there were massive fraud, as detected by recounts of paper ballots, I’d certainly want such system logs to search for how it happened. But I wouldn’t use such logs in order to audit the vote.

Note that the description of “deleting” log entries by overfilling the logs is wrong. If it were important to preserve such logs, then they would be copied right after the election. They wouldn’t be left to rot on the boot drive for months afterwards.

As a forensics guy, I would certainly support the idea that Dominion should both enable more logs and preserve them after each election. They don’t require excessive storage and can be saved automatically in the last phase of an election. But their lack really isn’t all that important; they are mostly just full of junk.

Conclusion

We live in a pluralistic democracy, meaning there are many centers of power, each competing with each other. It’s inherently valid for one side to question and challenge the other side. But this can go too far, to the point where you are challenging the stability of our republic.

The Republican party is split. Some are upholding that principle of pluralism, wanting to make sure future elections are secure and fair. Others are attacking that principle, challenging the peaceful transfer of power in the last election with baseless accusations of fraud.

This split is seen in Arizona, where Republicans have demanded an audit by highly partisan auditors. An early draft of their report straddles that split, containing some reasonable attempts to create recommendations for future elections, while simultaneously providing fodder for the other side to believe the last election was stolen.

A common problem with auditors is that when they can’t find the clear evidence they were looking for, they fill their reports with things they don’t understand. I think I see that here. The auditors make technical errors in ways that call their competence into question, but incompetence is likely not the explanation. Instead, they kept searching past where they were strong into areas where they were weak, looking for as much dirt as possible. Thus, in this report, we see where they are technically weak.

Trumpists, meaning those attacking the peaceful transfer of power with baseless accusations of fraud, will certainly use this report to champion their cause, despite the headline portion that confirms the vote count. But for the rest of us, we should welcome this report. Elections do need to be fixed, and while it’s unlikely we’ll fix them in the ways suggested in this report, it will add visibility into the process which we can use to debate improvements.

This blogpost is only a first draft. While the technical bits in section 7 look fairly straightforward to me, I’m guessing that people who don’t understand them will come up with weird conspiracy-theories about them. Thus, I’m guessing I’ll have to write another blogpost in a week debunking some of the crazier ideas.

That Alfa-Trump Sussman indictment

Post Syndicated from Robert Graham original https://blog.erratasec.com/2021/09/that-alfa-trump-sussman-indictment.html

Five years ago, online magazine Slate broke a story about how DNS packets showed secret communications between Alfa Bank in Russia and the Trump Organization, proving a link that Trump denied. I was the only prominent tech expert that debunked this as just a conspiracy-theory[*][*][*].

Last week, I was vindicated by the indictment of a lawyer involved, a Michael Sussman. It tells a story of where this data came from, and some problems with it.

But we should first avoid reading too much into this indictment. It cherry picks data supporting its argument while excluding anything that disagrees with it. We see chat messages expressing doubt in the DNS data. If chat messages existed expressing confidence in the data, we wouldn’t see them in the indictment.

In addition, the indictment tries to make strong ties to the Hillary campaign and the Steele Dossier, but ultimately, they’re weak. It looks to me like an outsider trying to ingratiate themselves with the Hillary campaign rather than being part of a grand Clinton-led conspiracy against Trump.

With these caveats, we do see some important things about where the data came from.

We see how Tech-Executive-1 used his position at cyber-security companies to search private data (namely, private DNS logs) for anything that might link Trump to somebody nefarious, including Russian banks. In other words, a link between Trump and Alfa Bank wasn’t something they accidentally found; it was one of the many thousands of links they looked for.

Such a technique has long been known as a problem in science. If you cast the net wide enough, you are sure to find things that would otherwise be statistically unlikely. In other words, if you do hundreds of tests of hydroxychloroquine or ivermectin on Covid-19, you are sure to find results that are so statistically unlikely that they wouldn’t happen more than 1% of the time by chance.
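
A quick back-of-the-envelope calculation shows how strong this multiple-comparisons effect is:

    # If each individual test has only a 1% chance of a bogus "statistically
    # unlikely" result, then across 100 independent tests you probably get one:
    p_at_least_one = 1 - 0.99 ** 100
    print(f"{p_at_least_one:.0%}")   # ~63% chance of at least one false finding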

If you search world-wide DNS logs, you are certain to find weird anomalies that you can’t explain. Unexplained computer anomalies happen all the time, as every user of computers can tell you.

We’ve seen from the start that the data was highly manipulated. It’s likely that the data is real, that the DNS requests actually happened, but at the same time, it’s been stripped of everything that might cast doubt on it. In this indictment we see why: before the data was even found, the purpose was to smear Trump. The finders of the data don’t want people to come to the best explanation; they want only explanations that hurt Trump.

Trump had no control over the domain in question, trump-email.com. Instead, it was created by a hotel marketing firm they hired, Cendyn. It’s Cendyn who put Trump’s name in the domain. A broader collection of DNS information including Cendyn’s other clients would show whether this was normal or not.

In other words, a possible explanation of the data, of the hints of a Trump-Alfa connection, has always been the dishonesty of those who collected it. The above indictment confirms they were at this level of dishonesty. It doesn’t mean the DNS requests didn’t happen, but that their anomalous nature can be created by the deletion of explanatory data.

Lastly, we see in this indictment the problem with “experts”.

The hope is that such claims get vetted by neutral experts. Sadly, this didn’t happen. Even experts are biased. The original Slate story quoted Paul Vixie, who hates Trump, and who was willing to believe it rather than question it. It’s not necessarily Vixie’s fault: the Slate reporter gave the experts they quoted a brief taste of the data, then pretended their response was a full in-depth analysis rather than a quick hot-take. It’s not clear that Vixie really still stands behind the conclusions in the story.

But of the rest of the “experts” in the field, few really care. Most hate Trump, and therefore, wouldn’t challenge anything that hurts Trump. Experts who like Trump also wouldn’t put the work into it, because nobody would listen to them. Most people choose sides — they don’t care about the evidence.

This indictment vindicates my analysis in those blogposts linked above. My analysis shows convincingly that Trump had no real connection to the domain. I can’t explain the anomaly, why Alfa Bank is so interested in a domain containing the word “trump”, but I can show that conspiratorial communications is the least likely explanation.

How not to get caught in law-enforcement geofence requests

Post Syndicated from Robert Graham original https://blog.erratasec.com/2021/09/how-not-to-get-caught-in-law.html

I thought I’d write up a response to this question from well-known 4th Amendment and CFAA lawyer Orin Kerr:

First, let me address the second part of his tweet, whether I’m technically qualified to answer this. I’m not sure; I have only 80% confidence that I am. Hence, I’m writing this answer as a blogpost, hoping people will correct me if I’m wrong.

There is a simple answer and it’s this: just disable “Location” tracking in the settings on the phone. Both iPhone and Android have a one-click button to tap that disables everything.

The trick is knowing which thing to disable. On the iPhone it’s called “Location Services”. On the Android, it’s simply called “Location”.

If you do start googling around for answers, you’ll find articles upset that Google is still tracking them. That’s because they disabled “Location History” and not “Location”. This left “Location Services” and “Web and App Activity” still tracking them. Disabling “Location” on the phone disables all these things [*].

It’s that simple: one click and done, and Google won’t be able to report your location in a geofence request.

I’m pretty confident in this answer, despite what your googling around will tell you about Google’s pernicious ways. But I’m only 80% confident in my answer. Technology is complex and constantly changing.

Note that the answer is very different for mobile phone companies, like AT&T or T-Mobile. They have their own ways of knowing about your phone’s location independent of whatever Google or Apple do on the phone itself. Because of modern 4G/LTE, cell towers must estimate both your direction and distance from the tower. I’ve confirmed that they can know your location to within 50 feet. There are limitations to this, it depends upon whether you are simply in range of the tower or have an active phone call in progress. Thus, I think law enforcement prefers asking Google.

Another example is how my car uses Google Maps all the time, and doesn’t have privacy settings. I don’t know what it reports to Google. So when I rob a bank, my phone won’t betray me, but my car will.

Note that “disabling GPS” isn’t sufficient. The phone’s own settings note that it relies upon WiFi, Bluetooth, and cell tower info to also confirm your location. Tricking GPS will do little to stop your phone from knowing your location.

I only know about this from the phone side of things and not actual legal cases. I’d love to see the sort of geofence results the FBI gets. There might be some subtle thing that I missed about how Android works with mobile companies, such as this old story where Android phones reported cell tower information to Google (since removed). Or worse, there might be something completely obvious I should’ve known about that everyone seems to know, but for some reason I simply forgot.

Both Apple and Google are upfront about what private information they do and don’t track and how to disable it. Thus, while I think they may do something by accident hidden from view, I don’t think there’s anything going on that isn’t documented. And for this concern, what’s documented is that simply turning off the “Location” button is enough.



Update: Many comments note that Google does log the IP address of requests, and that IP addresses can sometimes be geolocated.

Well, yes and no. It’s not something companies log in that way. Thus, when given a geofence request for everything within a certain physical location, logs containing only IP addresses wouldn’t be covered by the request. The log would need a record of the physical location to be covered. Moreover, geolocation by IP address is incredibly inaccurate, often telling you only the city or neighborhood where the IP address is located. Even if Google logged a record of its best guess about location, I’m still not sure whether it would be an appropriate response to a geofence request.

In any event, this wouldn’t apply to mobile IP addresses. In America, consumer mobile phones don’t have public IP addresses but share the same pool of private addresses. Thus, the IP address from a mobile phone is meaningless for location purposes.

Now you can create a hypothetical situation like the following:

  • a Capitol Hill protestor logs onto a nearby WiFi (meaning: it’s not the mobile IP address in question, but the IP address of the WiFi hotspot)
  • the geolocation record of that WiFi hotspot is actually accurate
  • Google resolves that geolocation when it logs the IP address of the request
  • they give such IP/location logs in response to geofence request

Then, yes, my argument is defeated, a hypothetical geofence request might then get you.

Which I actually like. It’s a good demonstration of why I doubt myself at the top of the post. I don’t think this scenario is likely, and hence don’t consider it a reasonable rebuttal, but “unlikely” doesn’t mean “impossible”. I’m still pretty confident that a one-click disabling “Location” is all you need to defeat geofence warrants given to Google.

Note that the discussion of this blogpost is just about the “geofence request to Google”. This “Capitol Hill WiFi” hypothetical is unlikely to help with requests by location, but of course would for requests by IP address. Law enforcement could certainly ask Google for a list of users that came in via the Capitol Hill WiFi IP address.

Of course you can’t trust scientists on politics

Post Syndicated from Robert Graham original https://blog.erratasec.com/2021/07/of-course-you-cant-trust-scientists-on.html

Many people make the same claim as this tweet. It’s obviously wrong. Yes, the right-wing has a problem with science, but this isn’t it.

First of all, people trust airplanes because of their long track record of safety, not because of any claims made by scientists. Secondly, people distrust “scientists” when politics is involved because of course scientists are human and can get corrupted by their political (or religious) beliefs.

And thirdly, the concept of “trusting scientific authority” is wrong, since the bedrock principle of science is distrusting authority. What defines science is how often prevailing scientific beliefs are challenged.

Carl Sagan has many quotes along these lines that eloquently express this:

A central lesson of science is that to understand complex issues (or even simple ones), we must try to free our minds of dogma and to guarantee the freedom to publish, to contradict, and to experiment. Arguments from authority are unacceptable.

If you are “arguing from authority”, like Paul Graham is doing above, then you are fundamentally misunderstanding both the principles of science and its history.

We know where this controversy comes from: politics. The above tweet isn’t complaining about the $400 billion U.S. market for alternative medicines, a largely non-political example. It’s complaining about political issues like vaccines, global warming, and evolution.

The reason those on the right-wing resist these things isn’t that they are inherently anti-science; it’s the left-wing. The left has corrupted and politicized these topics. The “Green New Deal” contains very little that is “Green” and much that is “New Deal”, for example. The left goes from the fact “carbon dioxide absorbs infrared” to justify “we need to promote labor unions”.

Take Marjorie Taylor Greene’s (MTG) claim that she doesn’t believe in the Delta variant because she doesn’t believe in evolution. Her argument is laughably stupid, of course, but it starts with the way the left has politicized the term “evolution”.

The “Delta” variant didn’t arise from “evolution”, it arose because of “mutation” and “natural selection”. We know the “mutation” bit is true, because we can sequence the complete DNA and detect that changes happen. We know that “selection” happens, because we see some variants overtake others in how fast they spread.
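
The mechanism is simple enough to show in a few lines of code. A toy simulation (my own illustration; the growth factors are invented, not epidemiological data):

    # A variant that spreads faster ("selection") steadily takes over the population.
    counts = {"original": 1000.0, "variant": 10.0}
    spread = {"original": 1.10, "variant": 1.40}   # made-up growth factors per step

    for week in range(25):
        counts = {v: n * spread[v] for v, n in counts.items()}

    total = sum(counts.values())
    for v, n in counts.items():
        print(f"{v}: {n / total:.0%}")   # after 25 steps: original ~19%, variant ~81%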

Yes, “evolution” is synonymous with mutation plus selection, but it’s also a politically loaded term that means a lot of additional things. The public doesn’t understand mutation and natural-selection, because these concepts are not really taught in school. Schools don’t teach students to understand these things; they teach students to believe.

The focus of science education in school is indoctrinating students into believing in “evolution” rather than teaching the mechanisms of “mutation” and “natural-selection”. We see the conflict in things like describing the evolution of the eyeball, which Creationists “reasonably” believe is too complex to have evolved this way. I put “reasonably” in quotes here because it’s just the “God of the gaps” argument, which credits God for everything that science can’t explain, which isn’t very smart. But at the same time, science textbooks go too far, refusing to admit their gaps in knowledge here. The fossil record shows a lot of complexity arising over time through steady change — it just doesn’t show anything about eyeballs.

In other words, it’s possible for a kid to graduate high-school with a full understanding of science, including mutation, selection, and the fossil record, while believing God created the eyeball. This is anathema to educators, who would rather students “believe in evolution” than understand it.

Thus, “believing” in the “evolution” of the Delta variant becomes this horrible political debate because the left-wing has corrupted science. You have politicians like MTG virtue signaling their opposition to evolution in what should be a non-political, neutral science discussion.

The political debate over vaccines isn’t the vaccines themselves, but forcing people to become vaccinated.

The evidence is clear that the covid vaccines are in your own (and your kids’) best interest. If we left it there, few would be challenging the science. There is no inherent right-wing opposition to vaccines. Indeed, Trump championed the covid vaccines, trying to take credit for their development. 

But the left-wing chose a different argument, that covid vaccines are in the best interest of society, and therefore, that government must coerce/force people to become vaccinated. It’s at this point that political opposition appears on the right-wing. It’s the same whether you are describing the debate in the United States, Europe, or Asia.

We know the juvenile method by which people defend their political positions. Once people decide to oppose “forcible vaccination”, they then build a position that vaccines aren’t “good” anyway.

Thus, you’ll get these nonsense arguments from people who get their opinions from dodgy blogs/podcasts, like “these don’t even meet the definition of a vaccine”. They started from the political goal first, and then looked for things that might support it, no matter how intellectually vacuous. It’s frustrating trying to argue against the garbage arguments they’ll toss up.

But at the same time, the left is no better. The tweet above is equally a vacuous meme, one they repeat because it sounds good, not because they’ve put much thought into it. It’s simply an argument that strokes the prejudices of those who repeat it, rather than a robust argument that can change the minds of opponents. It’s obviously false: people trust planes because of their track record, not because of what scientists claim. They trust scientists and doctors on non-political things, but rightly distrust their pronouncements on politically-tainted issues. And lastly, the above argument is completely anti-scientific — science is all about questioning and doubting.

Risk analysis for DEF CON 2021

Post Syndicated from Robert Graham original https://blog.erratasec.com/2021/07/risk-analysis-for-def-con-2021.html

It’s the second year of the pandemic and the DEF CON hacker conference wasn’t canceled. However, the Delta variant is spreading. I thought I’d do a little bit of risk analysis. TL;DR: I’m not canceling my ticket, but I am changing my plans for what I do in Vegas during the convention.

First, a note about risk analysis. For many people, “risk” means something to avoid. They work in a binary world, labeling things as either “risky” (to be avoided) or “not risky”. But real risk analysis is about shades of gray, trying to quantify things.

The Delta variant is a mutation out of India that, at the moment, is particularly affecting the UK. Cases are nearly up to their pre-vaccination peaks in that country.

Note that the UK has already vaccinated nearly 70% of their population — more than the United States. In both the UK and US there are few preventive measures in place (no lockdowns, no masks) other than vaccines.

 

Thus, the UK graph is somewhat predictive of what will happen in the United States. If we time things from when the latest wave hit the same levels as peak of the first wave, then it looks like the USA is only about 1.5 months behind the UK.

It’s another interesting lesson about risk analysis. Most people experience these things as sudden changes. One moment, everything seems fine, and cases are decreasing. The next moment, we are experiencing a major new wave of infections. It’s especially jarring when the thing we are tracking is exponential. But we can compare the curves and see that things are totally predictable. In about another 1.5 months, the US will experience a wave that looks similar to the UK wave.

Sometimes the problem is that the change is inconceivable. We saw that recently with 1-in-100 year floods in Germany. Weather forecasters predicted 1-in-100 level of floods days in advance, but they still surprised many people.

Nevada is ahead of the curve in the US, probably because Vegas is such a hub of unvaccinated people going on vacation. Because of exponential growth, there’s a good chance that in 2 weeks, that peak will be triple where it is now. It may not look like “time to cancel your ticket” now, but it probably will in 2 weeks when the event takes place. In other words, the closer we get to the event, the more people will look at this graph and cancel their tickets.
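
For what it’s worth, here’s the arithmetic behind that guess (the doubling time is my assumption for illustration, not official data):

    # If cases double roughly every 9 days, then over the 2 weeks before the event
    # they grow by a factor of about 2^(14/9), i.e. roughly triple.
    growth = 2 ** (14 / 9)
    print(f"{growth:.1f}x")   # ~2.9x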

The risk is really high for the unvaccinated, but much less for the vaccinated. We see that in the death rates in the UK, which are still low, even accounting for the 2 week lag that you see between spikes in infections and spikes in deaths. This is partly due to the fact that while the new variant infects the vaccinated, it doesn’t cause much harm. Also, I suspect it’s due to how much better we are at treating infections if they do require a hospital visit.

But still, death isn’t the major concern. It appears the major concern is long-term lung (and other organ) damage caused by even mild cases. Thus, one should fear infection even if one believes they have no chance of dying.

So here’s my personal risk analysis: I’m not canceling my ticket. Instead, I’m changing my plans of what I do. For the most part, this means that wherever there’s a crowd, I’ll go someplace else.

It also means I’m going to take this opportunity to do things I’ve never had the opportunity to do before: go outside of Vegas. I plan on renting a car to go down to the Grand Canyon, Hoover Dam, and do hikes around the area (like along Lake Mead, up in the canyons, and so on). This means spending most of my time away from people.

During the pandemic, outdoor activities (without masks, socially distanced) are among the safest things you can do, especially considering the exercise and vitamin D you’ll be getting.

Also, airplanes aren’t much of a worry. They have great filtration and, as far as anybody can tell, haven’t resulted in superspreader events this entire pandemic.

The real point of this blogpost is the idea of “predictions”. This post predicts that US infection rates will be spiking in 1.5 months in a curve that looks similar to the UK’s, and that in 2 weeks during DEF CON, Nevada’s infection rates will be around 3 times higher. The biggest lesson about risk analysis is that it’s usually done in hindsight, judging what people should’ve known once the outcome is known. It’s much harder doing it the other way around, estimating what might happen in the future.

Ransomware: Quis custodiet ipsos custodes

Post Syndicated from Robert Graham original https://blog.erratasec.com/2021/07/ransomware-quis-custodiet-ipsos-custodes.html

Many claim that “ransomware” is due to cybersecurity failures. It’s not really true. We are adequately protecting users and computers. The failure is in the inability of cybersecurity guardians to protect themselves. Ransomware doesn’t make the news when it only accesses the files normal users have access to. The big ransomware news events happened because ransomware elevated itself to that of an “administrator” over the network, giving it access to all files, including online backups.

Generic improvements in cybersecurity will help only a little, because they don’t specifically address this problem. Likewise, blaming ransomware on how it breached perimeter defenses (phishing, patches, password reuse) will only produce marginal improvements. Ransomware solutions need to instead focus on looking at the typical human-operated ransomware killchain, identify how they typically achieve “administrator” credentials, and fix those problems. In particular, large organizations need to redesign how they handle Windows “domains” and “segment” networks.

I read a lot of lazy op-eds on ransomware. Most of them claim that the problem is due to some sort of moral weakness (laziness, stupidity, greed, slovenliness, lust). They suggest things like “taking cybersecurity more seriously” or “do better at basic cyber hygiene”. These are “unfalsifiable” — things that nobody would disagree with, meaning they are things the speaker doesn’t really have to defend. They don’t rest upon technical authority but moral authority: anybody, regardless of technical qualifications, can have an opinion on ransomware as long as they phrase it in such terms.

Another flaw of these “unfalsifiable” solutions is that they are not measurable. There’s no standard definition for “best practices” or “basic cyber hygiene”, so there’s no way to tell if you aren’t already doing such things, or the gap you need to overcome to reach this standard. Worse, some people point to the “NIST Cybersecurity Framework” as the “basics” — but that’s a framework for all cybersecurity practices. In other words, anything short of doing everything possible is considered a failure to follow the basics.

In this post, I try to focus on specifics, while at the same time, making sure things are broadly applicable. It’s detailed enough that people will disagree with my solutions.

The thesis of this blogpost is that we are failing to protect “administrative” accounts. The big ransomware attacks happen because the hackers got administrative control over the network, usually the Windows domain admin. It’s with administrative control that they are able to cause such devastation, able to reach all the files in the network, while also being able to delete backups.

The Kaseya attacks highlight this particularly well. The company produces a product that is in turn used by “Managed Security Providers” (MSPs) to administer the security of small and medium sized businesses. Hackers found and exploited a vulnerability in the product, which gave them administrative control of over 1000 small and medium sized businesses around the world.

The underlying problems start with the way their software gives indiscriminate administrative access over computers. Then, this software was written using standard software techniques, meaning, with the standard vulnerabilities that most software has (such as “SQL injection”). It wasn’t written in a paranoid, careful way that you’d hope for software that poses this much danger.

A good analogy is airplanes. A common joke refers to the “black box” flight-recorders that survive airplane crashes, that maybe we should make the entire airplane out of that material. The reason we can’t do this is that airplanes would be too heavy to fly. The same is true of software: airplane software is written with extreme paranoia knowing that bugs can lead to airplanes falling out of the sky. You wouldn’t want to write all software to that standard, because it’d be too costly.

This analogy tells us we can’t write all software to the highest possible standard. However, we should write administrative software (like Kaseya) to this sort of standard. Anything less invites something like the massive attack we saw in the last couple weeks.

Another illustrative example is the "PrintNightmare" bug. The federal government issued a directive telling everyone under its authority (executive branch, military) to disable the Print Spooler on "domain controllers". The issue here is that this service should never have been enabled on "domain controllers" in the first place.

Windows security works by putting all the security eggs into a single basket known as “Active Directory”, which is managed by several “Domain Controller” (AD DC) servers. Hacking a key DC gives the ransomware hacker full control over the network. Thus, we should be paranoid about protecting DCs. They should not be running any service other than those needed to fulfill their mission. The more additional services they provide, like “printing”, the larger the attack surface, the more likely they can get hacked, allowing hackers full control over the network. 

Yet, I rarely see Domain Controllers with this level of paranoid security. Instead, when an organization has a server, they load it up with lots of services, including those for managing domains. Microsoft's advice on securing domain controllers "recommends" a more paranoid attitude, but only as one of the many other things it "recommends".

When you look at detailed analysis of ransomware killchains, you'll find the most frequently used technique is "domain admin account hijacking". Once a hacker controls a desktop computer, they wait for an administrator to log in, then steal the administrator's credentials. There are various ways this happens, the most famous being "pass-the-hash" (which itself is outdated, but a good analogy for still-current techniques). Hijacking even restricted administrator accounts can lead to elevation to unrestricted administrator privileges over the entire network.

If you had to fix only one thing in your network, it would be this specific problem.

Unfortunately, I only know how to attack this problem as a pentester; I don't know how to defend against it. I feel that separating desktop admins and server/domain admins into separate, non-overlapping groups is the answer, but I don't know how to achieve this in practice. I don't have enough experience as a defender to know how to make reasonable tradeoffs.

In addition to attacking servers and accounts, ransomware attackers also target networks. Organizations focus on “perimeter security”, where the major security controls are between the public Internet and the internal organization. They also need an internal perimeter, between the organization’s network and the core servers.

There are lots of tools for doing this: VLANs, port-isolation, network segmentation, read-only Domain Controllers, and the like.

As an attacker, I see the lack of these techniques. I don't know why defenders don't use them more. There might be good reasons. I suspect the biggest problem is inertia: networks were designed back when these solutions were hard, and change would break things.

In summary, I see the major problem exploited by ransomware is that we don't protect "administrators" enough. We don't do enough to protect administrative software, servers, accounts, or network segments. When we look at ransomware, the big cases that get splashed across the news, it's not because they compromised a single desktop, but because they got administrative control over the entire network and thus were able to encrypt everything.

Sadly, as a person experienced in attack (red-team) and exploiting these problems, I can see the problem. However, I have little experience as a defender (blue-team), and while solutions look easy in theory, I'm not sure what can be done in practice to mitigate these threats.

I do know that general hand-waving, exhorting people to “take security seriously” and perform “cyber hygiene” is the least helpful answer to the problem.

Some quick notes on SDR

Post Syndicated from Robert Graham original https://blog.erratasec.com/2021/07/some-quick-notes-on-sdr.html

I’m trying to create perfect screen captures of SDR to explain the world of radio around us. In this blogpost, I’m going to discuss some of the imperfect captures I’m getting, specifically, some notes about WiFi and Bluetooth.

An SDR is a "software defined radio" which digitally samples radio waves and uses number crunching to decode the signal into data. Among the simplest things an SDR can do is look at a chunk of spectrum and see signal strength. This is shown below, where I'm monitoring part of the famous 2.4 GHz spectrum used by WiFi/Bluetooth/microwave-ovens:

There are two panes. The top shows the current signal strength as a graph. The bottom pane is the "waterfall" graph showing signal strength over time, displaying strength as colors: black means almost no signal, blue means some, and yellow means a strong signal.

The signal strength graph is a bowl shape, because we are actually sampling at a specific frequency of 2.42 GHz, and the further away from this “center”, the less accurate the analysis. Thus, the algorithms think there is more signal the further away from the center we are.
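
(To give a flavor of the number crunching involved, here's a rough Python/numpy sketch of how SDR software turns raw I/Q samples into this kind of signal-strength graph. The sample rate is an assumption, and the random samples are a stand-in for a real capture.)

import numpy as np

# Stand-ins for a real capture: 4096 complex I/Q samples at an assumed 40 MHz rate
sample_rate = 40e6
center_freq = 2.42e9
samples = np.random.randn(4096) + 1j * np.random.randn(4096)

# Power spectrum: FFT, shift DC to the center, convert to decibels
spectrum = np.fft.fftshift(np.fft.fft(samples))
power_db = 10 * np.log10(np.abs(spectrum) ** 2 + 1e-12)

# Map each FFT bin to an absolute frequency around the tuned center
freqs = center_freq + np.fft.fftshift(np.fft.fftfreq(len(samples), d=1/sample_rate))

# One row of the waterfall display is just power_db at one moment in time
print(f"{freqs[0]/1e9:.3f} GHz to {freqs[-1]/1e9:.3f} GHz")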

What we do see here is two peaks, at 2.402 GHz toward the left and 2.426 GHz toward the right (which I’ve marked with the red line). These are the “Bluetooth beacon” channels. I was able to capture the screen at the moment some packets were sent, showing signal at this point. Below in the waterfall chart, we see packets constantly being sent at these frequencies.

We are surrounded by devices giving off packets here: our phones, our watches, “tags” attached to devices, televisions, remote controls, speakers, computers, and so on. This is a picture from my home, showing only my devices and perhaps my neighbors. In a crowded area, these two bands are saturated with traffic.

The 2.4 GHz region also includes WiFi. So I connected to a WiFi access-point to watch the signal.

WiFi uses more bandwidth than Bluetooth. The term “bandwidth” is used today to mean “faster speeds”, but it comes from the world of radio where it quite literally means the width of the band. The width of the Bluetooth transmissions seen above is 2 MHz, the width of the WiFi band shown here is 20 MHz.

It took about 50 screenshots before getting these two. I had to hit the "capture" button right at the moment things were being transmitted. An easier way is a setting that graphs the current signal strength compared to the maximum recently seen as a separate line. That's shown below: the instant it was taken, there was no signal, but it shows the maximum of recent signals as a separate line:

You can see there is WiFi traffic on multiple channels. My traffic is on channel #1 at 2.412 GHz. My neighbor has traffic on channel #6 at 2.437 GHz. Another neighbor has traffic on channel #8 at 2.447 GHz. WiFi splits the spectrum assigned to it into 11 overlapping channels set 5 MHz apart.
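
That channel-to-center-frequency mapping is simple arithmetic, which a couple lines of Python can verify against the peaks in my capture:

# 2.4 GHz WiFi channel centers: 2407 MHz plus 5 MHz per channel number
for channel in (1, 6, 8, 11):
    print(f"channel {channel}: {2407 + 5 * channel} MHz")
# -> channel 1: 2412 MHz, channel 6: 2437 MHz, channel 8: 2447 MHz
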
Now the reason I wanted to take these pictures was to highlight the difference between old WiFi (802.11b) and new WiFi (802.11n). The newer standard uses the spectrum more efficiently. Notice in the picture above how signal strength for a WiFi channel is strongest in the center but gets weaker toward the edges. That means it’s not fully using all the band.
Newer WiFi uses a different scheme to encode data into radio waves, using all the band given to it. We can see the difference in shape below, when I change from 802.11b to 802.11n:

Instead of a curve it’s more of a square block. It fills its entire 20 MHz bandwidth instead of only using the center.
What we see here is the limit of math and physics, known as the Shannon Limit, that governs the maximum possible speed for something like WiFi (or mobile phone radios like LTE). It's simply the size of that box: its width times its height. The width is measured in frequency, 20 MHz wide. Its height is signal strength measured above the noise floor (which should be a straight line across the bottom of our graph, but as I mentioned before, is shown in this SDR by a curved line increasingly inaccurate near the edges).
As we move toward faster and faster speeds, we cannot exceed this theoretical limit.
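
The Shannon-Hartley formula is short enough to compute directly. Here's a sketch in Python; the 25 dB signal-to-noise ratio is an assumed figure for illustration, not a measurement from my capture:

import math

def shannon_capacity(bandwidth_hz, snr_db):
    # Shannon-Hartley: C = B * log2(1 + S/N), with S/N as a linear ratio
    return bandwidth_hz * math.log2(1 + 10 ** (snr_db / 10))

# A 20 MHz channel at an assumed 25 dB signal-to-noise ratio
print(round(shannon_capacity(20e6, 25) / 1e6), "Mbps")   # ~166 Mbps
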
One solution is directional antennas, such as the yagi antennas you see on top of houses or satellite dishes. A directional antenna or dish means getting a stronger signal with less noise — thus, increasing the “height” of the box.
The same effect can be achieved with something called "phased arrays", using multiple antennas that transmit/receive at (very) slightly different times, such that the waves they produce reinforce each other in one direction but cancel each other out in other directions. This is how SpaceX "Starlink" space-based Internet works. The low Earth orbit satellites whizzing by overhead travel too fast to keep an antenna pointed at them, so their antenna is a phased array instead. The antennas are fixed, but the timing is slightly altered to aim the beam toward the satellite.
What’s even more interesting is MIMO: receiving different signals on different antennas. With fancy circuits and math, doubling the number of antennas doubles the effective bandwidth.
The latest mobile phones and WiFi use MIMO and phased arrays to increase bandwidth.
But mostly, higher frequencies give more bandwidth. That’s why WiFi at 5 GHz is better — bands are a minimum of 40 MHz (instead of 20 MHz as in 2.4 GHz WiFi), are more commonly 80 MHz, and can go up to 160 MHz.
Anyway, these are more imperfect pictures I'm creating to explain WiFi and Bluetooth. At some point in the future, I'll be generating more perfect ones.

When we’ll get a 128-bit CPU

Post Syndicated from Robert Graham original https://blog.erratasec.com/2021/06/when-well-get-128-bit-cpu.html

On Hacker News, this article claiming "You won't live to see a 128-bit CPU" is trending. Sadly, it was non-technical, so it didn't really contain anything useful. I thought I'd write up some technical notes.

The issue isn’t the CPU, but memory. It’s not about the size of computations, but when CPUs will need more than 64-bits to address all the memory future computers will have. It’s a simple question of math and Moore’s Law.

Today, Intel’s server CPUs support 48-bit addresses, which is enough to address 256-terabytes of memory — in theory. In practice, Amazon’s AWS cloud servers are offered up to 24-terabytes, or 45-bit addresses, in the year 2020.

Doing the math, it means we have 19 bits or 38 years left before we exceed the 64-bit registers in modern processors. This means that by the year 2058, we'll exceed the current address size and need to move to 128 bits. Most people reading this blogpost will be alive to see that, though probably retired.
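
For those who want to check my arithmetic, here's the back-of-envelope calculation in Python, assuming the historical rate of about one extra address bit every two years:

# Assumes memory sizes double every 2 years, i.e. one extra address bit per 2 years
current_year, current_bits, target_bits = 2020, 45, 64

bits_left = target_bits - current_bits
years_left = bits_left * 2
print(f"{bits_left} bits, {years_left} years: around {current_year + years_left}")
# -> 19 bits, 38 years: around 2058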

There are lots of reasons to suspect that this event will come both sooner and later.

It could come sooner if storage merges with memory. We are moving away from rotating platters of rust toward solid-state storage like flash. There are post-flash technologies like Intel’s Optane that promise storage that can be accessed at speeds close to that of memory. We already have machines needing petabytes (at least 50-bits worth) of storage.

Addresses often contain not just the memory address, but also some sort of description of the memory. For many applications, 56 bits is the maximum, as they use the remaining 8 bits for tags.

Combining those two points, we may be only 12 years away from people starting to argue for 128-bit registers in the CPU.
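
To illustrate what such tagging looks like, here's a toy Python sketch that packs an 8-bit tag into the top byte of a 64-bit pointer. The addresses are made up:

TAG_SHIFT = 56
ADDR_MASK = (1 << TAG_SHIFT) - 1   # low 56 bits hold the actual address

def pack(addr, tag):
    # Stash an 8-bit tag in the top byte of a 64-bit "pointer"
    return (tag << TAG_SHIFT) | (addr & ADDR_MASK)

def unpack(ptr):
    return ptr & ADDR_MASK, ptr >> TAG_SHIFT

ptr = pack(0x7fffdeadbeef, 0x42)
print(hex(ptr))                    # 0x42007fffdeadbeef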

Or, it could come later because few applications need more than 64-bits, other than databases and file-systems.

Previous transitions were delayed for this reason, as the x86 history shows. The first Intel CPUs were 16-bits addressing 20-bits of memory, and the Pentium Pro was 32-bits addressing 36-bits worth of memory.

The few applications that needed the extra memory could deal with the pain of needing to use multiple numbers for addressing. Databases used Intel's address extensions; almost nobody else did. It took 20 years, from the initial release of the 64-bit MIPS R4000 in 1990 to 64-bit being standard in Intel's average desktop processor around 2010, for mainstream apps to need the larger addresses.

For the transition beyond 64-bits, it’ll likely take even longer, and might never happen. Working with large datasets needing more than 64-bit addresses will be such a specialized discipline that it’ll happen behind libraries or operating-systems anyway.

So let’s look at the internal cost of larger registers, if we expand registers to hold larger addresses.

We already have 512-bit CPUs — with registers that large. My laptop uses one. It supports AVX-512, a form of "SIMD" that packs multiple small numbers into one big register, so that it can perform identical computations on many numbers at once, in parallel, rather than sequentially. Indeed, even very low-end processors have been 128-bit for a long time — for "SIMD".

In other words, we can have a large register file with wide registers, and handle the bandwidth of shipping those registers around the CPU performing computations on them. Today’s processors already handle this for certain types of computations.

But just because we can do many 64-bit computations at once (“SIMD”) still doesn’t mean we can do a 128-bit computation (“scalar”). Simple problems like “carry” get difficult as numbers get larger. Just because SIMD can do multiple small computations doesn’t tell us what one large computation will cost. This was why it took an extra decade for Intel to make the transition — they added 64-bit MMX registers for SIMD a decade before they added 64-bit for normal computations.
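
To see why "carry" is the sticking point, here's a toy Python sketch of a 128-bit add built from two 64-bit halves. The carry out of the low half feeds into the high half, a sequential dependency that SIMD's independent lanes don't give you for free:

MASK64 = (1 << 64) - 1

def add128(a_hi, a_lo, b_hi, b_lo):
    # Add two 128-bit numbers held as pairs of 64-bit halves. The low
    # halves are added first; if they overflow, a carry must propagate
    # into the high halves before those can be finalized.
    lo = (a_lo + b_lo) & MASK64
    carry = 1 if a_lo + b_lo > MASK64 else 0
    hi = (a_hi + b_hi + carry) & MASK64
    return hi, lo

print(add128(0, MASK64, 0, 1))     # -> (1, 0): the carry ripples up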

The above discussion is about speed, but it's also a concern for power consumption. Mobile devices were a decade later (than desktops) adopting 64 bits, exceeding the 32-bit barrier just now. It's likely they'll be decades late getting to 128 bits. Even if you live to see supercomputers transition to 128 bits, you probably won't live to see your mobile device transition.

Now let’s look at the market. What the last 40 years has taught us is that old technology doesn’t really day, it’s that it stops growing — with all the growth happening in some new direction. 40 years ago, IBM dominated computing with their mainframes. Their mainframe business is as large as ever, it’s just that all the growth in the industry has been in other directions than the mainframe. The same thing happened to Microsoft’s business, Windows still dominates the desktop, but all the growth in the last 15 years has bypassed the desktop, moving to mobile devices and the cloud.

40 years from now, it won't be an issue of mainstream processors jumping from 64-bits to 128-bits, like the previous transitions. I'm pretty sure we'll have ossified into some 64-bit standard like ARM. Instead, I think 128-bit systems will come with a bunch of other radical changes. It'll happen off to the side of mainstream computing, much like how GPUs evolved separately from mainstream CPUs and became increasingly integrated into them.

Anatomy of how you get pwned

Post Syndicated from Robert Graham original https://blog.erratasec.com/2021/04/anatomy-of-how-you-get-pwned.html

Today, somebody had a problem: they kept seeing a popup on their screen, an obvious scam trying to sell them McAfee anti-virus. Where was this coming from?

In this blogpost, I follow this rabbit hole on down. It starts with “search engine optimization” links and leads to an entire industry of tricks, scams, exploiting popups, trying to infect your machine with viruses, and stealing emails or credit card numbers.

Evidence of the attack first appeared with occasional popups like the following. The popup isn’t part of any webpage.

This is obviously a trick. But from where? How did it “get on the machine”?

There’s lots of possible answers. But the most obvious answer (to most people), that your machine is infected with a virus, is likely wrong. Viruses are generally silent, doing evil things in the background. When you see something like this, you aren’t infected … yet.

Instead, things popping up with warnings is almost entirely due to evil websites. But that's confusing, since this popup doesn't appear within a web page. It's off to one side of the screen, nowhere near the web browser.

Moreover, we spent some time diagnosing this. We restarted the web browser in "troubleshooting mode" with all extensions disabled and went to a clean website like Twitter. The popup still kept happening.

As it turns out, he had another window with Firefox running under a different profile. So while he cleaned out everything in this one profile, he wasn't aware the other one was still running.

This happens a lot in investigations. We first rule out the obvious things, and then struggle to find the less obvious explanation — when it was the obvious thing all along.

In this case, the reason the popup wasn't attached to a browser window is because it's a new type of popup notification that's supposed to act more like an app and less like a web page. It has a hidden web page underneath called a "service worker", so the popups keep happening when you think the webpage is closed.

Once we figured out the mistake of the other Firefox profile, we quickly tracked this down and saw that indeed, it was in the Notification list with Permissions set to Allow. Simply changing this solved the problem.

Note that the above picture of the popup has a little wheel in the lower right. We are taught not to click on dangerous things, so the user in this case was avoiding it. However, had the user clicked on it, it would've led him straight to the solution. I can't recommend you click on such a thing and trust it, because that means in the future, malicious tricks will contain such safe-looking icons that aren't so safe.

Anyway, the next question is: which website did this come from?

The answer is Google.

In the news today was the story of the Michigan guys who tried to kidnap the governor. The user googled "attempted kidnap sentencing guidelines". This search produced a page with the following top result:

Google labels this a "featured snippet". This isn't an advertisement, nor a "promoted" result. But it's a link that Google's algorithms think is somehow more worthy than the rest.

This happened because hackers tricked Google’s algorithms. It’s been a constant cat and mouse game for 20 years, in an industry known as “search engine optimization” or SEO. People are always trying to trick google into placing their content highest, both legitimate companies and the quasi-illegitimate that we see here. In this case, they seem to have succeeded.
The way this trick works is that the hackers posted a PDF instead of a webpage containing the desired text. Since PDF documents are much less useful for SEO purposes, google apparently trusts them more.
But the hackers have found a way to make PDFs more useful. They designed it to appear like a webpage with a standard CAPTCHA. You click anywhere on the page, such as on "I'm not a robot", and it takes you to the real website.

But where is the text I was promised in Google's search result? It's there, behind the image. PDF files have layers. You can put images on top that hide the text underneath. Humans only see the top layer, but Google's indexing spiders see all the layers, and will index the hidden text. You can verify this by downloading the PDF and using tools to examine the raw text:
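
For example, here's a sketch using the third-party pypdf library; the library choice is mine and the filename is hypothetical, but any PDF text extractor will show the same thing:

from pypdf import PdfReader   # third-party: pip install pypdf

reader = PdfReader("seo-bait.pdf")   # hypothetical filename
for page in reader.pages:
    # extract_text() reads the text layer, ignoring images layered on top
    print(page.extract_text())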

If you click on the “I am not robot” in the fake PDF, it takes you to a page like the following:

Here’s where the “hack” happened. The user misclicked on “Allow” instead of “Block” — accidentally. Once they did that, popups started happening, even when this window appeared to go away.

The lesson here is that “misclicks happen”. Even the most knowledgeable users, the smartest of cybersecurity experts, will eventually misclick themselves.

As described above, once we identified this problem, we were able to safely turn off the popups by going to Firefox’s “Notification Permissions”.

Note that the screenshots above are a mixture of Firefox images from the original user, and pictures of Chrome where I tried to replicate the attack in one of my browsers. I didn’t succeed — I still haven’t been able to get any popups appearing on my computer.

So I tried a bunch of different browsers: Firefox, Chrome, and Brave on both Windows and macOS.

Each browser produced a different result, a sort of A/B testing based on the User-Agent (the string sent to webservers that identifies which browser you are using). Sometimes following the hostile link from that PDF attempted to install a popup script, as in our original example, but sometimes it tried something else.

For example, on my Firefox, it tried to download a ZIP file containing a virus:

When I attempt to download, Firefox tells me it’s a virus — probably because Firefox knows the site where it came from is evil.

However, Microsoft’s free anti-virus didn’t catch it. One reason is that it comes as an encrypted zip file. In order to open the file, you have to first read the unencrypted text file to get the password — something humans can do but anti-virus products aren’t able to do (or at least, not well).

So I opened the password file to get the password (“257048169”) and extracted the virus. This is mostly safe — as long as I don’t run it. Viruses are harmless sitting on your machine as long as they aren’t running. I say “mostly” because even for experts, “misclicks happen”, and if I’m not careful, I may infect my machine.
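
For the record, the extraction can be scripted. A sketch in Python, with a hypothetical filename; note that the standard zipfile module only handles the legacy ZipCrypto scheme, so an AES-encrypted zip would need a third-party library like pyzipper:

import zipfile

# The password came from the accompanying text file
with zipfile.ZipFile("download.zip") as z:
    z.extractall(path="quarantine", pwd=b"257048169")
# The extracted file is inert until executed -- don't run it.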

Anyway, I want to see what the virus actually is. The easiest way to do that is upload it to VirusTotal, a website that runs all the known anti-virus programs on a submission to see what triggers what. It tells me that somebody else uploaded the same sample 2 hours ago, and that a bunch of anti-virus vendors detect it, with the following names:
With VirusTotal, you can investigate why anti-virus products think it may be a virus.
For example, anti-virus companies will run viruses to see what they do. They run them in "emulated" machines that are a lot slower, but safer. If viruses find themselves running in an emulated environment, then they stop doing all the bad behaviors the anti-virus programs might detect. So they repeatedly check the timestamp to see how fast they are running — if too slow, they assume emulation.
But this itself is a bad behavior. This timestamp detection is one of the behaviors the anti-virus programs triggered on as suspicious.

You can go investigate on VirusTotal other things it found with this virus.
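
You can also script the lookup instead of using the website. Here's a sketch against VirusTotal's v3 API; the API key and filename are placeholders, and you should double-check the response fields against their documentation:

import hashlib
import requests

# Hash the sample locally, then look that hash up on VirusTotal
sha256 = hashlib.sha256(open("sample.bin", "rb").read()).hexdigest()

resp = requests.get(
    f"https://www.virustotal.com/api/v3/files/{sha256}",
    headers={"x-apikey": "YOUR_API_KEY"},
)
print(resp.json()["data"]["attributes"]["last_analysis_stats"])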

Viruses and disconnected popups weren't the only tricks. In yet another attempt with web browsers, the hostile site attempted to open lots and lots of windows full of advertising. This is a direct way they earn money — hacking the advertising companies rather than hacking you.

In yet another attempt with another browser, this time from my MacBook Air, it asked for an email address:

I happily obliged, giving it a fake address.

At this point, the hackers are going to try to use the same email and password to log into Gmail, into a few banks, and so on. It's one of the top hacks these days (if not the most important hack) — since most people reuse the same password for everything, even though it's not asking you for your Gmail or bank password, most of the time people will simply reuse them anyway. (This is why you need to keep important passwords separate from unimportant ones — and write down your passwords or use a password manager).
Anyway, I now get the next webpage. This is a straight up attempt to steal my credit card — maybe. 

This is a website called “AppCine.net” that promises streaming movies, for free signup, but requires a credit card.

This may be a quasi-legitimate website. I say "quasi" because their goal isn't outright credit card fraud, but a "dark pattern" whereby they make it easy to sign up for the first month free with a credit card, and then make it nearly impossible to stop the service, where they continue to bill you month after month. As long as the charges are small each month, most people won't bother going through all the effort canceling the service. And since it's not actually fraud, people won't call their credit card company and reverse the charges, since they actually did sign up for the service and haven't canceled it.
It’s a slimy thing the Trump campaign did in the last election. Their website asked for one time donations but tricked people into unwittingly making it a regular donation. This caused a lot of “chargebacks” as people complained to their credit card company.
In truth, everyone follows the same pattern: make it easy to sign up, get you to sign up for more than you realize, and then make it hard to cancel. I thought I'd canceled an AT&T phone but found out they'd kept billing me for 3 years, despite the phone no longer existing or using their network.
They probably have a rewards program. In other words, they aren't out there doing the SEO hacking of Google themselves. Instead, they pay others to do it for them, and then give a percentage of the profit, perhaps for incoming links, but more probably for "conversions": money whenever somebody actually enters their credit card number and signs up.
Those people are in turn different middlemen. It probably goes like this:
  • somebody skilled at SEO optimization, who sends links to a broker
  • a broker who then forwards those links to other middlemen
  • middlemen who then deliver those links to sites like AppCine.net that actually ask for an email address or credit card
There’s probably even more layers — like any fine tuned industry, there are lots of specialists who focus on doing their job well.
Okay, I’ll play along, and I enter a credit card number to see what happens (I have bunch of used debit cards to play this game). This leads to an error message saying the website is down and they can’t deliver videos for me, but then pops up another box asking for my email, from yet another movie website:

This leads to yet another site:

It’s an endless series. Once a site “converts” you, it then simply sells the link back to another middleman, who then forwards you on to the next. I could probably sit there all day with fake email addresses and credit cards and still not come to the end of it all.

Summary

So here’s what we found.
First, there was a “search engine optimization” hacker who specializes in getting their content at the top of search results for random terms.
Second, they pass hits off to a broker who distributes the hits to various hackers who pay them. These hackers will try to exploit you with:
  • popups pretending to be anti-virus warnings that show up outside the browser
  • actual virus downloads in encrypted zips that try to evade anti-virus, but not well
  • endless new windows selling you advertising
  • attempts to steal your email address and password, hoping that you've simply reused one from legitimate websites, like Gmail or your bank
  • signups for free movie websites that try to get your credit card and charge you legally
Even experts get confused. I had trouble helping this user track down exactly where the popup was coming from. Also, any expert can misclick and make the wrong thing happen — this user had been clicking the right thing “Block” for years and accidentally hit “Allow” this one time.

Ethics: University of Minnesota’s hostile patches

Post Syndicated from Robert Graham original https://blog.erratasec.com/2021/04/ethics-university-of-minnesotas-hostile.html

The University of Minnesota (UMN) got into trouble this week for doing a study where they have submitted deliberately vulnerable patches into open-source projects, in order to test whether hostile actors can do this to hack things. After a UMN researcher submitted a crappy patch to the Linux Kernel, kernel maintainers decided to rip out all recent UMN patches.

Both things can be true:

  • Their study was an important contribution to the field of cybersecurity.
  • Their study was unethical.
It’s like Nazi medical research on victims in concentration camps, or U.S. military research on unwitting soldiers. The research can simultaneously be wildly unethical but at the same time produce useful knowledge.
I’d agree that their paper is useful. I would not be able to immediately recognize their patches as adding a vulnerability — and I’m an expert at such things.
In addition, the sorts of bugs it exploits show a way forward in the evolution of programming languages. It's not clear that a "safe" language like Rust would be the answer. Linux kernel programming requires tracking resources in ways that Rust would consider inherently "unsafe". Instead, the C language needs to evolve with better safety features and better static analysis. Specifically, we need to be able to annotate the parameters and return statements from functions. For example, if a pointer can't be NULL, then it needs to be documented as a non-nullable pointer. (Imagine if pointers could be "signed" and "unsigned", meaning they can sometimes be NULL or can never be NULL.)
So I’m glad this paper exists. As a researcher, I’ll likely cite it in the future. As a programmer, I’ll be more vigilant in the future. In my own open-source projects, I should probably review some previous pull requests that I’ve accepted, since many of them have been the same crappy quality of simply adding a (probably) unnecessary NULL-pointer check.
The next question is whether this is ethical. Well, the paper claims to have sign-off from their university’s IRB — their Institutional Review Board that reviews the ethics of experiments. Universities created IRBs to deal with the fact that many medical experiments were done on either unwilling or unwitting subjects, such as the Tuskegee Syphilis Study. All medical research must have IRB sign-off these days.
However, I think IRB sign-off for computer security research is stupid. Things like masscanning of the entire Internet are undecidable with traditional ethics. I regularly scan every device on the IPv4 Internet, including your own home router. If you paid attention to the packets your firewall drops, some of them would be from me. Some consider this a gross violation of basic ethics and get very upset that I'm scanning their computer. Others consider this to be the expected consequence of the end-to-end nature of the public Internet, that there's an inherent social contract that you must be prepared to receive any packet from anywhere. Kerckhoffs's Principle from the 1800s suggests that the core ethic of cybersecurity is exposure to such things rather than trying to cover them up.
The point isn’t to argue whether masscanning is ethical. The point is to argue that it’s undecided, and that your IRB isn’t going to be able to answer the question better than anybody else.
But here’s the thing about masscanning: I’m honest and transparent about it. My very first scan of the entire Internet came with a tweet “BTW, this is me scanning the entire Internet”.
A lot of ethical questions in other fields come down to honesty. If you have to lie about it or cover it up, then there's a good chance it's unethical.
For example, the west suffers a lot of cyberattacks from Russia and China. Therefore, as a lone wolf actor capable of hacking them back, is it ethical to do so? The easy answer is that when discovered, would you say “yes, I did that, and I’m proud of it”, or would you lie about it? I admit this is a difficult question, because it’s posed in terms of whether you’d want to evade the disapproval from other people, when the reality is that you might not want to get novichoked by Putin.
The above research is based on a lie. Lying has consequences.
The natural consequence here is that now that UMN did that study, none of the patches they submit can be trusted. It's not just this one submitted patch. The kernel maintainers are taking a scorched-earth response, reverting all recent patches from the university and banning future patches from them. It may be a little hysterical, but at the same time, this is a new situation that no existing policy covers.
I partly disagree with the kernel maintainer’s conclusion that the patches “obviously were _NOT_ created by a static analysis tool”. This is exactly the sort of noise static analyzers have produced in the past. I reviewed the source file for how a static analyzer might come to this conclusion, and found it’s exactly the sort of thing it might produce.
But at the same time, it’s obviously noise and bad output. If the researcher were developing a static analyzer tool, they should understand that this is crap noise and bad output from the static analyzer. They should not be submitting low-quality patches like this one. The main concern that researchers need to focus on for static analysis isn’t increasing detection of vulns, but decreasing noise.
In other words, the debate here is whether the researcher is incompetent or dishonest. Given that UMN has practiced dishonesty in the past, it’s legitimate to believe they are doing so again. Indeed, “static analysis” research might also include research in automated ways to find subversive bugs. One might create a static analyzer to search code for ways to insert a NULL pointer check to add a vuln.
Now incompetence is actually a fine thing. That's the point of research: to learn things. Starting fresh without all the preconceptions of old work is also useful. That researcher has problems today, but a year or two from now they'll be an ultra-competent expert in their field. That's how one achieves competence — making mistakes, lots of them.
But either way, the Linux kernel maintainer response of "we are not part of your research project" is a valid one. These patches are crap, regardless of which research project they are pursuing (static analyzer or malicious patch submissions).
Conclusion

I think the UMN research into bad-faith patches is useful to the community. I reject the idea that their IRB, which is focused on biomedical ethics rather than cybersecurity ethics, would be useful here. Indeed, it’s done the reverse: IRB approval has tainted the entire university with the problem rather than limiting the fallout to just the researchers that could’ve been disavowed.
The natural consequence of being dishonest is that people can't trust you. In cybersecurity, trust is hard to win and easy to lose — and UMN lost it. The researchers should have understood that "dishonesty" was going to be a problem.
I’m not sure there is a way to ethically be dishonest, so I’m not sure how such useful research can be done without the researchers or sponsors being tainted by it. I just know that “dishonesty” is an easily recognizable issue in cybersecurity that needs to be avoided. If anybody knows how to be ethically dishonest, I’d like to hear it.
Update: This person proposes a way this research could be conducted to ethically be dishonest:

A quick FAQ about NFTs

Post Syndicated from Robert Graham original https://blog.erratasec.com/2021/03/a-quick-faq-about-nfts.html

I thought I’d write up 4 technical questions about NFTs. They may not be the ones you ask, but they are the ones you should be asking. The questions:

  • What does the token look like?
  • How does it contain the artwork? (or, where is the artwork contained?)
  • How are tokens traded? (How do they get paid? How do they get from one account to another?)
  • What does the link from token to artwork mean? Does it give copyrights?
I’m going to use 4 sample tokens that have been sold for outrageous prices as examples.

#1 What does the token look like?

An NFT token has a unique number, analogous to:

  • your social security number (SSN#)
  • your credit card number
  • the VIN# on your car
  • the serial number on a dollar bill
  • etc.

This unique number is composed of two things:

  • the contract number, identifying the contract that manages the token
  • the unique token identifier within that contract
Here are some example tokens, listing the contract number (the long string) and token ID (short number), as well as a link to a story on how much it sold for recently.

With these two numbers, you can go find the token on the blockchain, and read the code to determine what the token contains, how it’s traded, its current owner, and so on.
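
For example, here's a sketch using the web3.py library (recent versions) to ask a contract who currently owns a token. The node URL is a placeholder; the contract address and token ID are the ones from the $69 million Beeple sale deconstructed in the next post:

from web3 import Web3

# Placeholder RPC endpoint: any Ethereum node will do
w3 = Web3(Web3.HTTPProvider("https://YOUR-ETH-NODE.example"))

# Minimal ABI: just the standard ERC721 ownerOf() call
abi = [{
    "name": "ownerOf", "type": "function", "stateMutability": "view",
    "inputs": [{"name": "tokenId", "type": "uint256"}],
    "outputs": [{"name": "owner", "type": "address"}],
}]

contract = w3.eth.contract(
    address=Web3.to_checksum_address("0x2a46f2ffd99e19a89476e2f62270e0a35bbf0756"),
    abi=abi,
)
print(contract.functions.ownerOf(40913).call())   # the current owner's address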

#2 How do NFTs contain artwork? or, where is artwork contained?

Tokens can’t*** contain artwork — art is too big to fit on the blockchain. That Beeple piece is 300-megabytes in size. Therefore, tokens point to artwork that is located somewhere else than the blockchain.

*** (footnote) This isn't actually true. It's just that it's very expensive to put artwork on the blockchain. That Beeple artwork would cost about $5 million to put onto the blockchain. Yes, this is less than a tenth of the purchase price of $69 million, but when you account for all the artwork for which people have created NFTs, the total cost would exceed the prices paid for all NFTs.

So if artwork isn’t on the blockchain, where is it located? and how do the NFTs link to it?

Our four examples of NFT mentioned above show four different answers to this question. Some are smart, others are stupid — and by “stupid” I mean “tantamount to fraud”.

The correct way to link a token with a piece of digital art is through a hash, which can be used with the decentralized darknet.

A hash is a unique cryptographic "key" (sic) generated from the file contents. No two files with different contents (or different lengths) will generate the same hash. A hacker can't create a different file that generates the same hash. Therefore, the hash becomes the identity of the file — if you have a hash and a file, you can independently verify the two match.
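
Computing such a hash takes just a couple lines of Python (the filename is a placeholder). NFT platforms typically display the same digest in a base58-encoded "multihash" form, the Qm… strings you'll see below:

import hashlib

with open("artwork.jpg", "rb") as f:
    print(hashlib.sha256(f.read()).hexdigest())
# Change even a single byte of the file and the digest comes out completely different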

The hash (and therefore unique identity) of the Beeple file is the following string:

QmXkxpwAHCtDXbbZHUwqtFucG1RMS6T87vi1CdvadfL7qA

With the hash, it doesn’t matter where the file is located right now in cyberspace. It only matters that at some point in the future, when the owner of the NFT wants to sell it, they can produce the file which provably matches the hash.

To repeat: because of the magic of cryptographic hashes, the artwork in question doesn’t have to be located anywhere in particular.

However, people do like having a live copy of the file available in a well known location. One way of doing this is with the darknet, which is essentially a decentralized version of the web. In much the same way the blockchain provides decentralized transactions, darknet services provide decentralized file sharing. The most famous of such services is BitTorrent. The most popular for use with NFTs is known as IPFS (InterPlanetary File System). A hash contained within an NFT token often links to the IPFS system.

In the $69million Beeple NFT, this link is:

ipfs://ipfs/QmPAg1mjxcEQPPtqsLoEcauVedaeMH81WXDPvPx3VC5zUz

Sharp eyed readers will notice the hash of the artwork (above) doesn’t match the hash in this IPFS link.

That’s because the NFT token points to a metadata file that contains the real hash, along with other information about the artwork. The QmPAg…. hash points to metadata that contains the QmXkx… hash.

But a chain of hashes in this manner is still just as secure as a single hash — indeed, that’s what the “blockchain” is — a hash chain. In the future, when the owner sells this NFT, they’ll need to provide both files, the metadata and the artwork, to conclusively transfer ownership.

Thus, in answer to the question of where the artwork is located (in the NFT? on the web?), the answer is often that the NFT token contains a hash pointing to the darknet.

Let’s look at another token on our list, the $180k AP artwork. The NFT links to the following URL:

https://ap-nft.everipedia.org/api/presidential-2020/1

Like the above example with Beeple, this too points to a metadata file, with a link to the eventual artwork (here). However, this chain is broken in the middle with that URL — it isn’t decentralized, and there’s no guarantee in the future that it’ll exist. The company “Everipedia” could go out of business tomorrow, or simply decide to stop sharing the file to the web, or decide to provide a different file at that location. In these cases, the thing the NFT points to disappears.

In other words, 50 years from now, after WW III and we’ve all moved to the off-world colonies, the owner of Beeple’s NFT will still be able to sell it, providing the two additional files. The owner of this AP NFT probably won’t — the link will probably have disappeared from the web — they won’t be able to prove that the NFT they control points to the indicated artwork.

I would call this tantamount to fraud — almost. The information is all there for the buyer to check, so they know the problems with this NFT. They obviously didn’t care — maybe they plan on being able to offload the NFT onto another buyer before the URL disappears.

Now let’s look at the CryptoPunks #7804 NFT. The contract points to the same hash of an image file that contains all 10,000 possible token images. That hash is the following. Click on it to see the file it maps to:

ac39af4793119ee46bbff351d8cb6b5f23da60222126add4268e261199a2921b

The token ID in question is #7804. If you look in that file for the 7804th face, you’ll see which one the token matches.

Unfortunately, the original contract doesn’t actually explain how we arrive at the 7804th sub-image. Do we go left to right? Top down? or some other method? Currently, there exists a website that does the translation using one algorithm, but in the future, there’s no hard proof which token maps to which face inside that massive image.

Now let’s look at the CryptoKitty #896775 . In this case, there’s no hashes involved, and no image. Instead, each kitty is expressed as a pattern of “genes”, with contracts that specify how to two kittens can breed together to create a new kitty’s genes. The above token contains the gene sequence:

235340506405654824796728975308592110924822688777991068596785613937685997

There are other contracts on the blockchain that can interact with this. 

The CryptoKitty images we see are generated by an algorithm that reads the gene sequence. Thus, there is no image file, no hash of a file. The algorithm that does this is located off-chain, so again we have the problem that in the future, the owner of the token may not be able to prove ownership of the correct image.

So what we see in these examples is one case where there’s a robust hash chain linking the NFT with the corresponding image file, and three examples where the link is problematic — ranging from slightly broken to almost fraudulent.

#3 How are tokens traded?

There are two ways you can sell your NFTs:

  • off the blockchain
  • on the blockchain

The Beeple artwork was sold through Christie's — meaning off blockchain. Christie's conducted the bidding and collected the payment, took its cut, and gave the rest to the artist. The artist then transferred the NFT. We can see this on the blockchain where Beeple transferred the NFT for $0, but we can't see the flow of money off blockchain.

This is the exception. The rule is that NFTs are supposed to be traded on blockchain.

NFT contracts don’t have auction or selling capabilities themselves. Instead, they follow a standard (known as ERC721) that allows them to be managed by other contracts. A person controlling a token selects some other auction/selling contract that matches the terms they want, and gives control to that contract.

Because contracts are code, both sides know what the terms are, and can be confident they won't be defrauded by the other side.

For example, a contract’s terms might be to provide for bids over 5 days, transfer the NFT from the owner to the buyer, and transfer coins from the buyer to the previous owner.

This is really why NFTs are so popular: not ownership of artwork, but on blockchain buying and selling of tokens. It’s the ability to conduct such commerce where the rules are dictated by code rather than by humans, where such transfers happen in a decentralized manner rather than through a central authority that can commit fraud.

So the upshot is that if you own an NFT, you can use the Transfer() function to transfer it to some other owner, or you can authorize some other contract to do the selling for you, which will eventually call this Transfer() function when the deal is done. Such a contract will likely also transfer coins in the other direction, paying you for your token.
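
As a sketch of what that transfer step looks like from the outside: the ERC721 standard names the function transferFrom(), and with the web3.py library (recent versions) you'd build the transaction roughly as follows. The node URL and buyer address are placeholders, the contract and owner addresses are the real ones from the Beeple sale, and signing/gas details are omitted:

from web3 import Web3

w3 = Web3(Web3.HTTPProvider("https://YOUR-ETH-NODE.example"))   # placeholder node

# Minimal ABI for the standard ERC721 transfer function
abi = [{
    "name": "transferFrom", "type": "function", "stateMutability": "nonpayable",
    "inputs": [
        {"name": "from", "type": "address"},
        {"name": "to", "type": "address"},
        {"name": "tokenId", "type": "uint256"},
    ],
    "outputs": [],
}]

nft = w3.eth.contract(
    address=Web3.to_checksum_address("0x2a46f2ffd99e19a89476e2f62270e0a35bbf0756"),
    abi=abi,
)
owner = Web3.to_checksum_address("0xc6b0562605D35eE710138402B878ffe6F2E23807")
buyer = Web3.to_checksum_address("0x0000000000000000000000000000000000000001")  # placeholder

tx = nft.functions.transferFrom(owner, buyer, 40913).build_transaction({
    "from": owner,
    "nonce": w3.eth.get_transaction_count(owner),
})
# 'tx' would then be signed with the owner's private key and broadcast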

#4 What does this all mean?

If you break into the Louvre Museum and steal the Mona Lisa, you will control the artwork. But you won’t own it. The word “ownership” is defined to mean your legal rights over the object. If the legal authorities catch up with you, they’ll stick you in jail and transfer control of the artwork back to the rightful legal owner.

We keep talking about “ownership” of NFTs, but this is fiction. Instead, all that you get when you acquire an NFT is “control” — control of just the token even, and not of the underlying artwork. Much of what happens in blockchain/cryptocurrencies isn’t covered by the law. Therefore, you can’t really “own” tokens. But you certainly control them (with the private key in your wallet that matches the public key of your account/address on the blockchain).

This is why NFTs are problematic: people are paying attention to the fiction ("ownership") and not the technical details ("control"). We see that in the AP artwork above, which simply links to a URL instead of a hash, missing a crucial step. They weren't paying attention to the details.

There are other missing steps. For example, I can create my own NFTs representing all these artworks and sell them (maybe covered in a future blogpost). It’s a fiction that one of these is valid and my copy NFTs are invalid.

On the other hand, this criticism can go too far. Some people claim the entire blockchain/cryptocurrency market is complete fiction. This isn’t true — there’s lots of obvious value in transactions that are carried out by code rather than by humans.

For example, an oil company might sell tokens for oil futures, allowing people to trade such futures on the blockchain. Ultimately, though, the value of such tokens comes down to faith in the original issuer that they’ll deliver on the promise — that the controller of the token will eventually get something in the real world. There are lots of companies being successful with this sort of thing, such as the BAT token used in the “Brave” web browser that provides websites with micropayment revenue instead of advertising revenue.

Thus, the difference here is that cryptocurrencies are part fiction, part real — tied to real world things. But NFTs representing artwork are pretty much completely fiction. They confer no control over the artwork in the real world. Whatever tie a token has to the artwork is purely in your imagination.

Deconstructing that $69million NFT

Post Syndicated from Robert Graham original https://blog.erratasec.com/2021/03/deconstructing-that-69million-nft.html

“NFTs” have hit the mainstream news with the sale of an NFT based digital artwork for $69 million. I thought I’d write up an explainer. Specifically, I deconstruct that huge purchase and show what actually was exchanged, down to the raw code. (The answer: almost nothing).

The reason for this post is that every other description of NFTs describes what they pretend to be. In this blogpost, I drill down on what they actually are.

Note that this example is about “NFT artwork”, the thing that’s been in the news. There are other uses of NFTs, which work very differently than what’s shown here.

tl;dr

I have a long bit of text explaining things. Here is the short form, which allows you to drill down to the individual pieces.

  • Beeple created a piece of art in a file
  • He created a hash that uniquely, and unhackably, identified that file
  • He created a metadata file that included the hash to the artwork
  • He created a hash to the metadata file
  • He uploaded both files (metadata and artwork) to the IPFS darknet decentralized file sharing service
  • He created, or minted, a token governed by the MakersTokenV2 smart contract on the Ethereum blockchain
  • Christies created an auction for this token
  • The auction was concluded with a payment of $69 million worth of Ether cryptocurrency. However, nobody has been able to find this payment on the Ethereum blockchain; the money was probably transferred through some private means.
  • Beeple transferred the token to the winner, who transferred it again to this final Metakovan account
Each of the links above allows you to drill down to exactly what's happening on the blockchain. The rest of this post discusses things in long form.

Why do I care?

Well, you don’t. It makes you feel stupid that you haven’t heard about it, when everyone is suddenly talking about it as if it’s been a thing for a long time. But the reality, they didn’t know what it was a month ago, either. Here is the Google Trends graph to prove this point — interest has only exploded in the last couple months:

The same applies to me. I’ve been aware of them (since the CryptoKitties craze from a couple years ago) but haven’t invested time reading source code until now. Much of this blogpost is written as notes as I discover for myself exactly what was purchased for $69 million, reading the actual transactions.

So what is it?

My definition: “Something new that can be traded on a blockchain that isn’t a fungible cryptocurrency”.
In this post, I’m going to explain in technical details. Before this, you might want to pause and see what everyone else is saying about it. You can look on Wikipedia to answer that question, or look at the following definition from CNN (the first result when I google it):

Non-fungible tokens, or NFTs, are pieces of digital content linked to the blockchain, the digital database underpinning cryptocurrencies such as bitcoin and ethereum. Unlike NFTs, those assets are fungible, meaning they can be replaced or exchanged with another identical one of the same value, much like a dollar bill.

You can also get a list of common NFT systems here. While this list of NFT systems contains a lot of things related to artwork (as described in this blogpost), a lot aren’t. For example, CryptoKitties is an online game, not artwork (though it too allows ties to pictures of the kitties).

What is fungible?

Let’s define the word fungible first. The word refers to goods you purchase that can be replaced by an identical good, like a pound of sugar, an ounce of gold, a barrel of West Texas Intermediate crude oil. When you buy one, you don’t care which one you get.

In contrast, an automobile is a non-fungible good — if you order a Tesla Model 3, you won’t be satisfied with just any car that comes out of the factory, but one that matches the color and trim that you ordered. Art work is a well known non-fungible asset — there’s only one Mona Lisa painting in the world, for example.

Dollar bills and coins are fungible tokens — they represent the value printed on the currency. You can pay your bar bill with any dollars. 

Cryptocurrencies like Bitcoin, ZCash, and Ethereum are also “fungible tokens”. That’s where they get their value, from their fungibility.

NFTs, or non-fungible tokens, are the idea of trading something unique (non-fungible, not the same as anything else) on the blockchain. You can trade them, but each is unique, like a painting, a trading card, a rare coin, and so on.

This is a token  — it represents a thing. You aren’t trading an artwork itself on the blockchain, but a token that represents the artwork. I mention this because most descriptions about NFTs are that you are buying artwork — you aren’t. Instead, you are buying a token that points to the artwork.

The best real world example is a receipt for purchase. Let’s say you go to the Louvre and buy the Mona Lisa painting, and they give you a receipt attesting to the authenticity of the transaction. The receipt is not the artwork itself, but something that represents the artwork. It’s proof you legitimately purchased it — that you didn’t steal it. If you ever resell the painting, you’ll probably need something like this proving the provenance of the piece.

Show me an example!

So let’s look an at an example NFT, the technical details, to see how it works. We might as well use this massive $69 million purchase as our example. Some news reports describing the purchase are here: [1] [2] [3].

None of these stories say what actually happened. They say the “artwork was purchased”, but what does that actually mean? We are going to deconstruct that here. (The answer is: the artwork wasn’t actually purchased).


What was the artwork?

It’s a piece created by an artist named “Beeple” (Mike Winkelmann), called “Everydays: The First 5000 Days“. It’s a 500-megapixel image, which is about 300-megabytes in size. A thumbnail of this work is shown below.

So the obvious question is where is this artwork? Is it somewhere on the blockchain? Well, no, the file is 300-megabytes in size, much too large to put on the blockchain. Instead, the file exists somewhere out in cyberspace (described below).
What exists on the blockchain is a unique fingerprint linking to the file, known as a hash.
What is a hash?

It’s at this point we need to discuss cryptography: it’s not just about encryption, but also random numbers, public keys, and hashing.

A “hash” passes all the bytes of a file through an algorithm to generate a short signature or fingerprint unique to that file. No two files with different contents can have the same hash. The most popular algorithm is SHA-256, which produces a 256-bit hash.

We call it a cryptographic hash to differentiate it from weaker algorithms. With a strong algorithm, it’s essentially impossible for a hacker to create a different file that has the same hash — even if the hacker tried really hard.

Thus, the hash is the identity of the file. The identity of the artwork in question is not the title of the piece mentioned above, other pieces of art can also be given that title. Instead, the identity of the artwork is its hash. Other pieces of artwork cannot have the same hash.

For this artwork, that 300-megabyte file is hashed, producing a 256-bit value. Written in hex, this value is:

6314b55cc6ff34f67a18e1ccc977234b803f7a5497b94f1f994ac9d1b896a017

Hexadecimal results in long strings. There are shorter ways of representing hashes. One is a format called MultiHash. Its value is shown below. This refers to the same 256 bits, and thus the two forms are equivalent; they are simply displayed in different ways.

QmXkxpwAHCtDXbbZHUwqtFucG1RMS6T87vi1CdvadfL7qA

This is the identity of the artwork. If you want to download the entire 300-megabyte file, simply copy and paste that into google, and it’ll lead you to someplace in cyberspace where you can download it. Once you download it, you can verify the hash, such as with the command-line tool OpenSSL:

$ openssl dgst -sha256 everydays5000.jfif

SHA256(everydays5000.jfif)= 6314b55cc6ff34f67a18e1ccc977234b803f7a5497b94f1f994ac9d1b896a017

The above is exactly what I’ve done — I downloaded the file from cyberspace, named it “everydays5000.jfif”, and then calculated the hash to see if it matches. As you can tell by looking at my result with the above hash, they do match, so I know I have an exact copy of the artwork.


Where to download the image from cyberspace?

Above, I downloaded the file in order to demonstrate calculating the hash. It doesn’t live on the blockchain, so where does it live?

There’s two answers. The first answer is potentially anywhere in cyberspace. Thousands of people have downloaded the file onto the personal computers, so obviously it exists on their machines — you just can’t get at it. If you ever do come across it somewhere, you can always verify it’s the exact copy by looking at the hash.

The second answer is somewhere on the darknet. The term “darknet” refers to various systems on the Internet other than the web. Remember, the “web” is not the “Internet”, but simply one of many services on the Internet.

The most popular darknet services are decentralized file sharing systems like BitTorrent and IPFS. In much the same way that blockchains are decentralized transaction services, these two systems are decentralized file services. When something is too big to live on the blockchain, it often lives on the darknet, usually via IPFS.

The way these services identify files is through their hashes. If you know their hash, you can stick it into one of these services and find it. Thus, if you want to find this file on IPFS, download some IPFS aware software, and plug in the hash.

There’s an alternative privacy-focused browser called “Brave” that includes darknet features (TOR, BitTorrent, and IPFS). To download this file using Brave, simply use the following URL:

ipfs://QmXkxpwAHCtDXbbZHUwqtFucG1RMS6T87vi1CdvadfL7qA

But an easier way is to use one of the many IPFS gateways. These are web servers that will copy a file off the darknet and make it available to you. Here is a URL using one of those gateways:

https://ipfsgateway.makersplace.com/ipfs/QmXkxpwAHCtDXbbZHUwqtFucG1RMS6T87vi1CdvadfL7qA

If you click on this link within your browser, you’ll download the 300-megabyte file from the IPFS darknet. It’ll take a while; the service is slow. Once you get it, you can verify the hashes match. But since the URL is based on the hash, of course they should match, unless there was some error in transmission.

So this hash is on the blockchain?

Well, it could’ve been, but it wasn’t. Instead, the hash that’s on the blockchain points to a file containing metadata — and it’s the metadata that points to the artwork’s hash.

In other words, it’s a chain of hashes. The hash on the blockchain (as we’ll see below) is this one here (I’ve made it a link so you can click on it to see the raw data):

QmPAg1mjxcEQPPtqsLoEcauVedaeMH81WXDPvPx3VC5zUz

When you click on this, you see a bunch of JSON data. Below, I’ve stripped away the uninteresting stuff to show the meaningful bits:

title:”EVERYDAYS: THE FIRST 5000 DAYS” 

description:”I made a picture from start to finish every single day from May 1st, 2007 – January 7th, 2021.  This is every motherfucking one of those pictures.” 

digital_media_signature:”6314b55cc6ff34f67a18e1ccc977234b803f7a5497b94f1f994ac9d1b896a017” 

raw_media_file:”https://ipfsgateway.makersplace.com/ipfs/QmXkxpwAHCtDXbbZHUwqtFucG1RMS6T87vi1CdvadfL7qA
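You can check this step of the chain yourself with a few lines of Python. This is a sketch using only the standard library; the gateway below is just one example of many, and it assumes the metadata is flat JSON with the keys shown above:

import json
import urllib.request

GATEWAY = "https://ipfs.io/ipfs/"   # any public IPFS gateway should work
METADATA_CID = "QmPAg1mjxcEQPPtqsLoEcauVedaeMH81WXDPvPx3VC5zUz"

# Fetch the metadata file that the token points to.
with urllib.request.urlopen(GATEWAY + METADATA_CID) as resp:
    meta = json.load(resp)

# The metadata in turn carries the artwork's hash.
print(meta.get("title"))
print(meta.get("digital_media_signature"))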

Now remember that due to the magic of cryptographic hashes, this chain can’t be broken. One hash leads to the next, such that changing any single bit breaks the chain. Indeed, that’s what a “blockchain” is — a hash chain. Changing any bit of information anywhere on the Bitcoin blockchain is immediately detectable, because it throws off the hash calculations.

So we have a chain: 

hash -> metadata -> hash -> artwork

So if you own the root, you own the entire chain.
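In miniature, the chain works like this (a toy sketch, not the real on-chain encoding):

import hashlib

def h(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

artwork = b"...stand-in for the 300-megabyte image file..."
metadata = ('{"digital_media_signature": "%s"}' % h(artwork)).encode()
root = h(metadata)   # the kind of value that gets recorded on the blockchain

# Flip any byte of the artwork and h(artwork) changes, which changes the
# metadata, which changes the root -- the tampering is immediately detectable.
print(root)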

Note that this chain seems unbreakable here, in this $69 million NFT token. However, in a lot of other tokens, it’s not. I mean, the hash chain itself doesn’t promise much (it simply points at the artwork, giving no control over it), but other NFTs promise even less.


So what, exactly, is the NFT that was bought and sold?

Here’s what Christie’s sold. Here’s how Christie’s describes it:

Beeple (b. 1981)
EVERYDAYS: THE FIRST 5000 DAYS
token ID: 40913
wallet address: 0xc6b0562605D35eE710138402B878ffe6F2E23807
smart contract address: 0x2a46f2ffd99e19a89476e2f62270e0a35bbf0756
non-fungible token (jpg)
21,069 x 21,069 pixels (319,168,313 bytes)
Minted on 16 February 2021. This work is unique.

The seller is the artist Beeple. The artist created the token (shown below) and assigned their wallet address as the owner. This is their wallet address:

0xc6b0562605D35eE710138402B878ffe6F2E23807

When Beeple created the token, he did so using a smart contract that governs the rules for the token. Such smart contracts are what make Ethereum different from Bitcoin, allowing things other than simple currency transfers to be created and managed on the blockchain. Contracts have addresses on the blockchain, too, but no person controls them — they are rules for the decentralized transfer of things, with nobody (other than the code) in control.

There are many smart contracts that can manage NFTs. The one Beeple chose is known as MakersTokenV2. This contract has the following address:

0x2a46f2ffd99e19a89476e2f62270e0a35bbf0756

Note that if you browse this link, you’ll eventually get to the code so that you can read the smart contract and see how it works. It’s a derivation of something known as ERC721 that defines the properties of a certain class of non-fungible tokens.

Finally, we get to the actual token being sold here. It is:

#40913

In other words, it’s the 40,913th token created and managed by the MakersTokenV2 contract. The full description of what Christie’s is selling is this token number, governed by the named contract, on the Ethereum blockchain:

Ethereum -> 0x2a46f2ffd99e19a89476e2f62270e0a35bbf0756 -> 40913
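If you want to interrogate the token yourself, here’s a sketch using the third-party web3.py library (v6 naming). The RPC endpoint is a placeholder you’d fill in with any Ethereum mainnet node, and the ABI is pared down from the full ERC-721 interface to the two read-only calls we need:

from web3 import Web3   # pip install web3

RPC_URL = "https://YOUR-ETHEREUM-RPC-ENDPOINT"   # placeholder
CONTRACT = "0x2a46f2ffd99e19a89476e2f62270e0a35bbf0756"
TOKEN_ID = 40913

# Minimal ERC-721 ABI: just ownerOf() and tokenURI().
ABI = [
    {"name": "ownerOf", "type": "function", "stateMutability": "view",
     "inputs": [{"name": "tokenId", "type": "uint256"}],
     "outputs": [{"name": "", "type": "address"}]},
    {"name": "tokenURI", "type": "function", "stateMutability": "view",
     "inputs": [{"name": "tokenId", "type": "uint256"}],
     "outputs": [{"name": "", "type": "string"}]},
]

w3 = Web3(Web3.HTTPProvider(RPC_URL))
nft = w3.eth.contract(address=Web3.to_checksum_address(CONTRACT), abi=ABI)

print(nft.functions.ownerOf(TOKEN_ID).call())    # current owner's wallet address
print(nft.functions.tokenURI(TOKEN_ID).call())   # the ipfs:// pointer to the metadata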

We have to search the blockchain in order to find the transaction that created this token. The transaction is identified by the hash:

0x84760768c527794ede901f97973385bfc1bf2e297f7ed16f523f75412ae772b3

The smart contract is code, so in the above transaction, Beeple calls functions within the contract to create a new token, assign digital media to it (the hash), and assign himself owner of the newly created token.

After doing this, the token #40913 now contains the following information:

creator : 0xc6b0562605d35ee710138402b878ffe6f2e23807

metadataPath : QmPAg1mjxcEQPPtqsLoEcauVedaeMH81WXDPvPx3VC5zUz

tokenURI : ipfs://ipfs/QmPAg1mjxcEQPPtqsLoEcauVedaeMH81WXDPvPx3VC5zUz

This is the thing that Christie’s auction house sold. As you can see in their description above, it all points to this token on the blockchain.

Now after the auction, the next step is to transfer the token to the new owner. Again, the contract is code, so the transfer means calling the transfer function in that code (ERC-721 contracts call it “transferFrom()”, and it logs a “Transfer” event). Beeple is the only person who can do this transfer, because only he knows the private key that controls his wallet. This transfer is done in the transaction below:

0xa342e9de61c34900883218fe52bc9931daa1a10b6f48c506f2253c279b15e5bf 

token : 40913
from : 0xc6b0562605d35ee710138402b878ffe6f2e23807
to : 0x58bf1fbeac9596fc20d87d346423d7d108c5361a

That’s not the current owner. Instead, it was soon transferred again in the following transaction:

0x01d0967faaaf95f3e19164803a1cf1a2f96644ebfababb2b810d41a72f502d49 

token : 40913
from : 0x58bf1fbeac9596fc20d87d346423d7d108c5361a
to : 0x8bb37fb0f0462bb3fc8995cf17721f8e4a399629

That final address is known to belong to a person named “Metakovan”, whom the press has identified as the buyer of the piece. I don’t know who was behind the intermediary address between Beeple and Metakovan, but it’s common in the cryptocurrency world for people to have many accounts that they transfer things between, so I bet it also belongs to Metakovan.

How are things transferred?

Like everything on the blockchain, control is transferred via public/private keys. Your wallet address is a hash of your public key, which everyone knows. Anybody can transfer something to your public address without you being involved.

But every public key has a matching private key. Both are generated together, because they are mathematically related. Only somebody who knows the private key that matches the wallet address can transfer something out of the wallet to another person.
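To make that concrete, here’s a sketch of generating a keypair and deriving a wallet address from it, using the third-party ecdsa and pycryptodome packages (an illustration, not production key management):

from ecdsa import SigningKey, SECP256k1    # pip install ecdsa
from Crypto.Hash import keccak             # pip install pycryptodome

# Generate a random private key on secp256k1, the curve Ethereum uses.
private_key = SigningKey.generate(curve=SECP256k1)
public_key = private_key.get_verifying_key().to_string()   # 64 raw bytes

# An Ethereum wallet address is the last 20 bytes of the Keccak-256 hash
# of the public key.
digest = keccak.new(digest_bits=256, data=public_key).digest()
print("0x" + digest[-20:].hex())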

Thus, Beeple’s account has the following public address. But we don’t know his private key, which he has stored in a computer file somewhere.

0xc6b0562605D35eE710138402B878ffe6F2E23807

To summarize what was bought and sold

So that’s it. To summarize:

  • Beeple created a piece of art in a file
  • He created a hash that uniquely, and unhackably, identified that file
  • He created a metadata file that included the hash of the artwork
  • He created a hash of the metadata file
  • He uploaded both files (metadata and artwork) to the IPFS darknet decentralized file sharing service
  • He created, or “minted”, a token governed by the MakersTokenV2 smart contract on the Ethereum blockchain
  • Christie’s created an auction for this token
  • The auction concluded with a payment of $69 million worth of Ether cryptocurrency. However, nobody has been able to find this payment on the Ethereum blockchain; the money was probably transferred through some private means.
  • Beeple transferred the token to the winner, who transferred it again to this final Metakovan account
And that’s it.

Okay, I understand. But I have a question. WHAT IS AN NFT????

So if you’ve been paying attention, and understood everything I’ve said, then you should still be completely confused. What exactly was purchased that was worth $69 million?

If we are asking what Metakovan purchased for his $69 million, it comes down to this: the ability to transfer MakersTokenV2 #40913 to somebody else.

That’s it. That’s everything he purchased. He didn’t purchase the artwork, he didn’t purchase the copyrights, he didn’t purchase anything more than the ability to transfer that token. Even saying he “owns” the token is a misnomer, since the token lives on the blockchain. Instead, since only Metakovan knows the private key that controls his wallet, all that he possesses is the ability to transfer the token to the control of another private key.

It’s not even as unique as people claim. Beeple can mint another token for the same artwork. Anybody else can mint a token for Beeple’s artwork. Insignificant changes can be made to that artwork, and tokens can be minted for that, too. There’s nothing hard and fast controlled by the code — the relationship is in people’s minds.

If you are coming here asking why somebody thinks this is worth $69 million, I have no answer for you.

The conclusion

I think there are two things that are clear here:
  • This token is not going to be meaningful to most of us: who cares if the token points to a hash that eventually points to a file freely available on the Internet?
  • This token is meaningful to those in the “crypto” (meaning “cryptocurrency”) community, but the meaning is in their minds, rather than something hard and fast controlled by code or cryptography.

In other words, the work didn’t sell for $69 million of real money.

For one thing, it’s not the work that was traded, or rights or control over that work. It’s simply a token that points to the work.

For another thing, it was sold for 42329.453 ETH, not $dollars. Early adopters with lots of cryptocurrency are likely to believe the idea that the token is meaningful, whereas outsiders with $dollars don’t.
An NFT is ultimately like those plaques you see next to paintings in a museum telling people about the donor or philanthropist involved — only this plaque is somewhere where pretty much nobody will see it.

We are living in 1984 (ETERNALBLUE)

Post Syndicated from Robert Graham original https://blog.erratasec.com/2021/02/we-are-living-in-1984-eternalblue.html

In the book 1984, the protagonist questions his sanity, because his memory differs from what appears to be everybody else’s memory.

The Party said that Oceania had never been in alliance with Eurasia. He, Winston Smith, knew that Oceania had been in alliance with Eurasia as short a time as four years ago. But where did that knowledge exist? Only in his own consciousness, which in any case must soon be annihilated. And if all others accepted the lie which the Party imposed—if all records told the same tale—then the lie passed into history and became truth. ‘Who controls the past,’ ran the Party slogan, ‘controls the future: who controls the present controls the past.’ And yet the past, though of its nature alterable, never had been altered. Whatever was true now was true from everlasting to everlasting. It was quite simple. All that was needed was an unending series of victories over your own memory. ‘Reality control’, they called it: in Newspeak, ‘doublethink’.

I know that EternalBlue didn’t cause the Baltimore ransomware attack. When the attack happened, the entire cybersecurity community agreed that EternalBlue wasn’t responsible.

But this New York Times article said otherwise, blaming the Baltimore attack on EternalBlue. And there are hundreds of other news articles [eg] that agree, citing the New York Times. There are no news articles that dispute this.

In a recent book, the author of that article admits it’s not true, that EternalBlue didn’t cause the ransomware to spread. But they defend the error as essentially true: EternalBlue is responsible for a lot of bad things, even if, technically, not in this case. Such errors are justified, they argue, as generalizations and simplifications needed for a mass audience.

So we are left with the situation Orwell describes: all records tell the same tale — when the lie passes into history, it becomes the truth.

Orwell continues:

He wondered, as he had many times wondered before, whether he himself was a lunatic. Perhaps a lunatic was simply a minority of one. At one time it had been a sign of madness to believe that the earth goes round the sun; today, to believe that the past is inalterable. He might be ALONE in holding that belief, and if alone, then a lunatic. But the thought of being a lunatic did not greatly trouble him: the horror was that he might also be wrong.

I’m definitely a lunatic, alone in my beliefs. I sure hope I’m not wrong.


Update: Other lunatics document their struggles with Minitrue:

Review: Perlroth’s book on the cyberarms market

Post Syndicated from Robert Graham original https://blog.erratasec.com/2021/02/review-perlroths-book-on-cyberarms.html

New York Times reporter Nicole Perlroth has written a book on zero-days and nation-state hacking entitled “This Is How They Tell Me The World Ends”. Here is my review.

I’m not sure what the book intends to be. The blurbs from the publisher imply a work of investigative journalism, in which case it’s full of unforgivable factual errors. However, it reads more like a memoir, in which case errors are to be expected and forgivable, with content often from memory rather than from rigorously fact-checked notes.

But even with this more lenient interpretation, there are important flaws that should be pointed out. For example, the book claims the Saudis hacked Bezos with a zero-day; I claim that’s bunk. The book claims zero-days are “God mode” compared to other hacking techniques; I claim they are no better than the alternatives, usually worse, and rarely used.

But I can’t really list all the things I disagree with. It’s no use. She’s a New York Times reporter, impervious to disagreement.

If this were written by a tech journalist, then criticism would be the expected norm. Tech is full of factual truths, such as whether 2+2=5, where it’s possible for a thing to be conclusively known. All journalists make errors — tech journalists are constantly making small revisions correcting their errors after publication.

The best example of this is Ars Technica. They pride themselves on their reader forums, where readers comment, opine, criticize, and correct stories. Sometimes readers add more interesting information to the story, providing free content to other readers. Sometimes they fix errors.

It’s often unpleasant for the journalists, who steel themselves after hitting “Submit…”. They get a lot of practice defending or correcting every assertion they make, against both legitimate and illegitimate criticism. This makes them astoundingly good journalists — the mistakes editors miss, readers don’t. They get trained fast to deal with criticism.

The mainstream press doesn’t have this tradition. To be fair, it couldn’t. Tech forums have techies with knowledge and experience, while the mainstream press has ignorant readers with opinions. Regardless of the story’s original content, it’ll devolve into people arguing about whether Epstein was murdered (for example).

Nicole Perlroth is a mainstream reporter on a techy beat. So you see a conflict here between the expectations both sides have for each other. Techies expect a tech journalist who’ll respond to factual errors; she doesn’t expect all this criticism. She doesn’t see techie critics for what they are — subject-matter experts who would be useful sources to make her stories better. She sees them as enemies that must be ignored. This makes her stories sloppy by technical standards. I hate that this sounds like a personal attack when it’s really more a NYTimes problem — most of their cyber stories struggle with technical details, regardless of author.

This problem is made worse by the fact that the New York Times doesn’t have “news stories” so much as “narratives”. They don’t have neutral stories reporting what happened, but narratives explaining a larger point.

A good example is this story that blames the Baltimore ransomware attack on the NSA’s EternalBlue. The narrative is that EternalBlue is to blame for damage all over the place, and it uses the Baltimore ransomware as an example. However, EternalBlue wasn’t responsible for that particular ransomware — as techies point out.

Perlroth doesn’t fix the story. In her book, she instead criticizes techies for focusing on “the technical detail that in this particular case, the ransomware attack had not spread with EternalBlue”, and that techies don’t acknowledge “the wreckage from EternalBlue in towns and cities across the country”.

It’s a bizarre response from a journalist, refusing to fix a falsehood in a story because the rest of the narrative is true.

Some of the book is correct, telling you some real details about the zero-day market. I can’t say it won’t be useful to some readers, though the useful bits are buried in a lot of non-useful stuff. But most of the book is wrong about the zero-day market, a slave to the narrative that zero-days are going to end the world. I mean, I should say, I disagree with the narrative and her political policy ideas — I guess it’s up to you to decide for yourself if it’s “wrong”. Apart from inaccuracies, a lot is missing — for example, you really can’t understand what a “zero-day” is without also understanding the 40 year history of vuln-disclosure.

I could go on a long spree of corrections, and others have their own long list of inaccuracies, but there’s really no point. She’s already defended her book as being more of a memoir than a work of journalistic integrity, so her subjective point of view is what it’s about, not facts. Her fundamental narrative of the Big Bad Cyberarms Market is a political one, so any discussion of accuracy will be in service of political sides rather than the side of truth.

Moreover, she’ll just attack me for my “bruised male ego”, as she has already done to other expert critics.


No, 1,000 engineers were not needed for SolarWinds

Post Syndicated from Robert Graham original https://blog.erratasec.com/2021/02/no-1000-engineers-were-not-needed-for.html

Microsoft estimates it would take 1,000 engineers to carry out the famous SolarWinds hacker attacks. What this means in reality is that it was probably fewer than 100 skilled engineers. I base this claim on the following Tweet:

Yes, it would take Microsoft 1,000 engineers to replicate the attacks. But it takes a large company like Microsoft 10-times the effort to replicate anything. This is partly because Microsoft is a big, stodgy corporation. But this is mostly because this is a fundamental property of software engineering, where replicating something takes 10-times the effort of creating the original thing.

It’s like painting. The effort to produce a work is often less than the effort to reproduce it. I can throw some random paint strokes on canvas with almost no effort. It would take you an immense amount of work to replicate those same strokes — even to figure out the exact color of paint that I randomly mixed together.

Software Engineering

The process of software engineering is about creating software that meets a certain set of requirements, or a specification. It is an extremely costly process to verify that the specification is correct. It’s like building a bridge: forget a piece, and the entire bridge collapses.

But code slinging by hackers and open-source programmers works differently. They aren’t building toward a spec. They are building whatever they can and whatever they want. It takes a tenth, or even a hundredth of the effort of software engineering. Yes, it usually builds things that few people (other than the original programmer) want to use. But sometimes it produces gems that lots of people use.

Take my most popular code-slinging effort, masscan. I’ve spent about 6 months of total effort writing it at this point. But if you run code-analysis tools on it, they’ll tell you that it would take several million dollars to replicate the amount of code I’ve written. And that’s just measuring the bulk code, not the numerous clever capabilities and innovations in the code.
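Tools like SLOCCount make that dollar estimate with the COCOMO cost model. Here’s a minimal sketch of the basic formula, with purely hypothetical numbers plugged in:

def cocomo_person_months(kloc: float) -> float:
    # Basic COCOMO, "organic" mode: effort = 2.4 * KLOC^1.05 person-months.
    return 2.4 * kloc ** 1.05

kloc = 100                 # hypothetical size of a masscan-scale codebase
months = cocomo_person_months(kloc)
cost = months * 15_000     # hypothetical fully-loaded cost per person-month

print(f"{months:.0f} person-months, roughly ${cost:,.0f}")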

According to these metrics, I’m either a 100x engineer (a hundred times better than the average engineer) or my claim is true that “code slinging” is a fraction of the effort of “software engineering”.

The same is true of everything the SolarWinds hackers produced. They didn’t have to software engineer code according to Microsoft’s processes. They only had to sling code to satisfy their own needs. They don’t have to train or hire engineers with the skills necessary to meet a specification; they can write the specification according to what their own engineers can produce. They can do whatever they want with the code because they don’t have to satisfy somebody else’s needs.

Hacking

Something is similarly true with hacking. Hacking a specific target, a specific way, is very hard. Hacking any target, any way, is easy.

Like most well-known hackers, I regularly get those emails asking me to hack somebody’s Facebook account. This is very hard. I can try a lot of things, and in the end, chances are I cannot succeed. On the other hand, if you ask me to hack anybody’s Facebook account, I can do that in seconds. I can download one of the many hacker dumps of email addresses, then try to log into Facebook with every email address using the password “Password1234”. Eventually I’ll find somebody who has that password — I just don’t know who.

Hacking is overwhelmingly opportunistic. Hackers go into it not being sure who they’ll hack, or how they’ll hack. They just try a bunch of things against a bunch of targets and see what works. No two hacks are the same. You can’t look at one hack and reproduce it exactly against another target.

Well, you reproduce things a bit. Some limited techniques have become “operationalized”. A good example is “phishing”, sending emails tricking people into running software or divulging a password. But that’s usually only the start of a complete attack, getting the initial foothold into a target, rather than the full hack itself.

In other words, hacking is based a lot on luck. You can create luck for yourself by trying lots of things. But it’s hard reproducing luck.

This principle of hacking is why Stuxnet is such an incredible achievement. It wasn’t opportunistic hacking. It had a very narrow target that could only be hacked in a very narrow way, jumping across an “airgap” to infect the controllers in order to subtly destabilize the uranium centrifuges. With my lifetime of experience with hacking, I’m amazed at Stuxnet.

But SolarWinds was no Stuxnet. Instead, it shows a steady effort over a number of years, capitalizing on the lucky result of one step to then move ahead to the next step. Replicating that chain of luck would be nearly impossible.

Business

Now let’s talk about big companies vs. startups. Every month, big companies like Apple, Microsoft, Cisco, etc. are acquiring yet another small startup that has done something that a big company cannot do. These companies often have small (but growing) market share, so it’s rarely for the market share alone that big companies acquire small ones.

Instead, it’s for the thing that the startup produced. The reason big companies acquire outsiders is again because of the difficulty that insiders would have in reproducing the work. The engineering managers are asked how much it would cost insiders to reproduce the work of the outsiders, the potential acquisition candidate. The answer is almost always “at least 10-times more than what the small company invested in building the thing”.

This is reflected by the purchase price, which is often 10-times what the original investors put into the company to build the thing. In other words, Microsoft regularly buys a company for 10-times all the money the original investors put into it — meaning much more than 10-times the effort it would take for their own engineers to replicate the product in question.

Thus, the question people should ask Brad Smith of Microsoft is not simply how many skilled Microsoft engineers it would take to reproduce SolarWinds, but also how many skilled Microsoft engineers it would take to reproduce the engineering effort of their last 10 acquisitions.

Conclusion

I’ve looked at the problem three different ways, from the point of view of software engineering, hacking, or business. If it takes 1,000 Microsoft engineers to reproduce the SolarWinds hacks, then that means there’s fewer than 100 skilled engineers involved in the actual hacks.

SolarWinds is probably the most consequential hack of the last decade. There are many eager to exaggerate things to serve their own agenda, and those types have been pushing this “1,000 engineer” claim. I’m an expert in all three of these areas: software engineering, hacking, and business. I’ve written millions of lines of code, I’m well known for my hacking, and I’ve sold startups. I can assure you: Microsoft’s estimate means that likely fewer than 100 skilled engineers were involved.

The deal with DMCA 1201 reform

Post Syndicated from Robert Graham original https://blog.erratasec.com/2020/12/the-deal-with-dmca-1201-reform.html

There are two fights in Congress now against the DMCA, the “Digital Millennium Copyright Act”. One is over Section 512 covering “takedowns” on the web. The other is over Section 1201 covering “reverse engineering”, which weakens cybersecurity.

Even before digital computers, since the 1880s, an important principle of cybersecurity has been openness and transparency (“Kerckhoffs’s Principle”). Only by making details public can security flaws be found, discussed, and fixed. This includes reverse-engineering to search for flaws.

Cybersecurity experts have long struggled against the ignorant who hold the naive belief that we should instead cover up information, so that evildoers cannot find and exploit flaws. Surely, they believe, giving just anybody access to critical details of our security weakens it. The ignorant have little faith in technology, that it can be made secure. They have more faith in government’s ability to control information.

Technologists believe this information coverup hinders well-meaning people and protects the incompetent from embarrassment. When you hide information about how something works, you prevent people on your own side from discovering and fixing flaws. It also means you can’t hold anyone accountable for their security, since it’s impossible to notice security flaws until after they’ve been exploited. At the same time, the information coverup does not do much to stop evildoers. Technology can work, it can be perfected — but only if we can search for flaws.

It seems counterintuitive that revealing your encryption algorithms to your enemy is the best way to secure them, but history has proven time and again that this is indeed true. Encryption algorithms your enemy cannot see are insecure. The same is true of the rest of cybersecurity.

Today, I’m composing and posting this blogpost securely from a public WiFi hotspot because the technology is secure. It’s secure because of two decades of security researchers finding flaws in WiFi, publishing them, and getting them fixed.

Yet in the year 1998, ignorance prevailed with the “Digital Millennium Copyright Act”. Section 1201 makes reverse-engineering illegal. It attempts to secure copyright not through strong technological means, but by the heavy hand of government punishment.

The law was not completely ignorant. It includes an exception allowing what it calls “security testing” — in theory. But that exception does not work in practice, imposing too many conditions on such research to be workable.

The U.S. Copyright Office has authority under the law to add its own exemptions every 3 years. It has repeatedly added exceptions for security research, but the process is unsatisfactory. It’s a protracted political battle every 3 years to get the exception back on the list, and each time it can change slightly. These exemptions are still less than what we want. This causes a chilling effect on permissible research. It would be better if such exceptions were put directly into the law.

You can understand the nature of the debate by looking at those on each side.

Those lobbying for the exceptions are those trying to make technology more secure, such as Rapid7, Bugcrowd, Duo Security, Luta Security, and HackerOne. These organizations have no interest in violating copyright — their only concern is cybersecurity, finding and fixing flaws.

The opposing side includes the copyright industry, as you’d expect, such as the “DVD” association who doesn’t want hackers breaking the DRM on DVDs.

However, much of the opposing side has nothing to do with copyright as such.

This notably includes the three major voting machine suppliers in the United States: Dominion Voting, ES&S, and Hart InterCivic. Security professionals have been pointing out security flaws in their equipment for the past several years. These vendors are explicitly trying to coverup their security flaws by using the law to silence critics.

This goes back to the struggle mentioned at the top of this post. The ignorant and naive believe that we need to coverup information, so that hackers can’t discover flaws. This is expressed in their filing opposing the latest 3-year exemption:

The proponents are wrong and misguided in their argument that the Register’s allowing independent hackers unfettered access to election software is a necessary – or even appropriate – way to address the national security issues raised by election system security. The federal government already has ways of ensuring election system security through programs conducted by the EAC and DHS. These programs, in combination with testing done in partnership between system providers, independent voting system test labs and election officials, provide a high degree of confidence that election systems are secure and can be used to run fair and accurate elections. Giving anonymous hackers a license to attack critical infrastructure would not serve the public interest. 

Not only does this blatantly violate Kerckhoffs’s Principle stated above, it was proven a fallacy at the last two DEF CON cybersecurity conferences, where organizers bought voting machines off eBay and presented them for anybody to hack. Widespread and typical vulnerabilities were found. These systems had been certified as secure by state and federal governments, yet teenagers were able to trivially bypass their security.

The danger these companies are afraid of is not a nation-state actor playing with these systems, but teenagers playing with them at DEF CON, embarrassing the companies by pointing out their laughable security. This proves Kerckhoffs’s Principle.

That’s why the leading technology firms take the opposite approach to security from election-systems vendors. This includes Apple, Amazon, Microsoft, Google, and so on. They’ve gotten over their embarrassment. They are every bit as critical to modern infrastructure as election systems or the power grid. They publish their flaws roughly every month, along with a patch that fixes them. That’s why you end up having to patch your software every month. Far from trying to cover up flaws and punish researchers, they publicly praise researchers, and in many cases, offer “bug bounties” to encourage them to find more bugs.

It’s important to understand that the “security research” we are talking about is always “ad hoc” rather than formal.

These companies already do “formal” research and development. They invest billions of dollars in securing their technology. But no matter how much formal research they do, informal poking around by users, hobbyists, and hackers still finds unexpected things.

One reason is simply a corollary to the Infinite Monkey Theorem that states that an infinite number of monkeys banging on an infinite number of typewriters will eventually reproduce the exact works of William Shakespeare. A large number of monkeys banging on your product will eventually find security flaws.

A common example is a parent who brings their kid to work, who then plays around with a product doing things that no reasonable person would ever conceive of, and accidentally breaks into the computer. Formal research and development focuses on known threats, but has trouble imagining unknown threats.

Another reason informal research is successful is how the modern technology stack works. Whether it’s a mobile phone, a WiFi enabled teddy bear for the kids, a connected pacemaker jolting the grandparent’s heart, or an industrial control computer controlling manufacturing equipment, all modern products share a common base of code.

Somebody can be an expert in an individual piece of code used in all these products without understanding anything about these products.

I experience this effect myself. I regularly scan the entire Internet looking for a particular flaw. All I see is the flaw itself, exposed to the Internet, but not anything else about the system I’ve probed. Maybe it’s a robot. Maybe it’s a car. Maybe it’s somebody’s television. Maybe it’s any one of the billions of IoT (“Internet of Things”) devices attached to the Internet. I’m clueless about the products — but an expert about the flaw.

A company, even as big as Apple or Microsoft, cannot hire enough people to be experts in every piece of technology they use. Instead, they can offer bounties encouraging those who are experts in obscure bits of technology to come forward and examine their products.

This ad hoc nature is important when looking at the solution to the problem. Many think this can be formalized, such as with the requirement of contacting a company asking for permission to look at their product before doing any reverse-engineering.

This doesn’t work. A security researcher will buy a bunch of used products off eBay to test out a theory. They don’t know enough about the products or the original vendor to know who they should contact for permission. This would take more effort to resolve than the research itself.

It’s solely informal and ad hoc “research” that needs protection. It’s the same as with everything else that preaches openness and transparency. Imagine if we had freedom of the press, but only for journalists who first were licensed by the government. Imagine if it were freedom of religion, but only for churches officially designated by the government.

Those companies selling voting systems they promise as being “secure” will never give permission. It’s only through ad hoc and informal security research, hostile to the interests of those companies, that the public interest will be advanced.

The current exemptions have a number of “gotchas” that seem reasonable, but which create an unacceptable chilling effect.

For example, they allow informal security research “as long as no other laws are violated”. That sounds reasonable, but with so many laws and regulations, it’s usually possible to argue that researchers violated some obscure and meaningless law in the course of their work. It means a security researcher is now threatened with years in jail for violating a regulation that would otherwise have resulted in a $10 fine.

Exceptions to the DMCA need to be clear and unambiguous that finding security bugs is not a crime. If the researcher commits some other crime during research, then prosecute them for that crime, not for violating the DMCA.

The strongest opposition to a “security research exemption” in the DMCA is going to come from the copyright industry itself — those companies who depend upon copyright for their existence, such as movies, television, music, books, and so on.

The United States’ position in the world is driven by intellectual property. Hollywood is not simply the center of the American film industry, but of the world’s film industry. Congress has an enormous incentive to protect these industries. Industry organizations like the RIAA and MPAA have enormous influence on Congress.

Many of us in tech believe copyright is already too strong. Congress has made a mockery of the Constitution’s statement that copyrights are for a “limited time”, which now means works copyrighted decades before you were born will still be under copyright decades after you die. Section 512 takedown notices are widely abused to silence speech.

Yet the copyright-protected industries perceive themselves as too weak. Once a copyrighted work is posted to the Internet for anybody to download, it becomes virtually impossible to remove (like removing pee from a pool). Takedown notices only remove content from the major websites, like YouTube. They do nothing to remove content from the “dark web”.

Thus, they jealously defend against any attempt that would weaken their position. This includes “security research exemptions”, which threatens “DRM” technologies that prevent copying.

One fear is of security researchers themselves: that in the process of doing legitimate research, they’ll find and disclose other secrets, such as the encryption keys that protect DVDs from being copied, which are built into every DVD player on the market. There is some truth to that, as security researchers have indeed published information that the industries didn’t want published, such as the DVD encryption algorithm.

The bigger fear is that evildoers trying to break DRM will be free to do so, claiming their activities are just “security research”. They would be free to openly collaborate with each other, because it’s simply research, while privately pirating content.

But these fears are overblown. Commercial piracy is already forbidden by other laws, and underground piracy happens regardless of the law.

This law has little impact on whether reverse-engineering happens so much as impact whether the fruits of research are published. And that’s the key point: we call it “security research”, but all that’s meaningful is “published security research”.

In other words, we are talking about a minor cost to copyright compared with a huge cost to cybersecurity. The cybersecurity of voting machines is a prime example: voting security is bad, and it’s not going to improve until we can publicly challenge it. But we can’t easily challenge voting security without being prosecuted under the DMCA.

Conclusion

The only credible encryption algorithms are public ones. The only cybersecurity we trust is cybersecurity that we can probe and test, where most details are publicly available. That such transparency is necessary to security has been recognized since the 1880s with Kerckhoffs’s Principle. Yet, the naive still believe in coverups. As the election industry claimed in their brief: “Giving anonymous hackers a license to attack critical infrastructure would not serve the public interest”. Giving anonymous hackers ad hoc, informal access to probe critical infrastructure like voting machines not only serves the public interest, but is necessary to the public interest. As has already been proven, voting machines have cybersecurity weaknesses that their vendors are covering up, which can only be revealed by anonymous hackers.

This research needs to be ad hoc and informal. Attempts at reforming the DMCA, and the Copyright Office’s attempts at exemptions, get modified into exemptions for formal research only. This ends up having the same chilling effect on research while claiming to allow it.

Copyright, like other forms of intellectual property, is important, and it’s proper for government to protect it. Even radical anarchists in our industry want government to protect “copyleft”, the use of copyright to keep open-source code open.

But it’s not so important that it should be abused to silence security research. Transparency and ad hoc testing are critical to research, and they are more and more often being silenced using copyright law.

Why Biden: Principle over Party

Post Syndicated from Robert Graham original https://blog.erratasec.com/2020/10/why-biden-principle-over-party.html

There exist many #NeverTrump Republicans who agree that while Trump would best achieve their Party’s policies, he must nonetheless be opposed on Principle. The Principle at question isn’t about character flaws, such as being a liar, a misogynist, or a racist. The Principle isn’t about political policies, such as how to handle the coronavirus pandemic, or the policies Democrats want. Instead, the Principle is that he’s a populist autocrat who is eroding our liberal institutions (“liberal” as in the classic sense).

Countries don’t fail when there’s a leftward shift in government policies. Many prosperous, peaceful European countries are to the left of Biden. What makes prosperous countries fail is when civic institutions break down, when a party or dear leader starts ruling by decree, such as in the European countries of Russia or Hungary.

Our system of government is like football. While the teams (parties) compete vigorously against each other, they largely respect the rules of the game, both written and unwritten traditions. They respect each other — while doing their best to win (according to the rules), they nonetheless shake hands at the end of the match, and agree that their opponents are legitimate.

The rules of the sport we are playing are described in the Wikipedia page on “liberal democracy”.

Sport matches can be enjoyable even if you don’t understand the rules. The same is true of liberal democracy: there’s little civic education in this country, so most don’t know the rules of the game. Most are unaware even that there are rules.

You see that in action with the concern over Trump conceding the election, his unwillingness to commit to a “peaceful transfer of power”. His supporters widely believe this is a made-up controversy, a “principle” created on the spot as just another way to criticize Trump.

But it’s not a new principle. A “peaceful transfer of power” is the #1 bedrock principle from which everything else derives. It’s the first way we measure whether a country is actually the “liberal democracy” it claims to be. For example, the fact that Putin has been in power for 20 years makes us doubt that Russia is really the “liberal democracy” it claims. The reason you haven’t heard of this principle, the reason it isn’t discussed much, is that it’s so unthinkable that a politician would reject it the way Trump has.

The historic importance of this principle can be seen when you go back and read the concession speeches of Hillary, McCain, Gore, Bush Sr., and Carter: all of them stressed the legitimacy of their opponent’s win, and a commitment to a peaceful transfer of power. (It goes back further than that, to the founding of our country, but I can’t link every speech). The following quote from Hillary’s concession to Trump demonstrates this principle:

But I still believe in America and I always will. And if you do, then we must accept this result and then look to the future. Donald Trump is going to be our president. We owe him an open mind and the chance to lead.

Our constitutional democracy enshrines the peaceful transfer of power and we don’t just respect that, we cherish it. It also enshrines other things; the rule of law, the principle that we are all equal in rights and dignity, freedom of worship and expression. We respect and cherish these values too and we must defend them.

If this were Trump’s only failure, then we could excuse it and work around it. As long as he defended all the other liberal institutions, then we could accept one aberration.

The problem is that he’s attacking every institution. He’s doing his best to act like a populist autocrat we see in non-democratic nations. Our commitment to liberal institutions is keeping him in check — but less and less well as time goes on. For example, when Jeff Sessions refused to politicize the DoJ, Trump replaced him with Barr, who notoriously has corrupted the DoJ to serve Trump’s political interests. I mean this only as yet another example — a complete enumeration of his long train of abuses and usurpations would take many more pages than I intend for this blogpost.

Four more years of Trump means four more years of erosion of our liberal democratic institutions.

The problem isn’t just what Trump can get away with, but the precedent he sets for his successor.

The strength of our liberal institutions to hold the opposing Party in check comes only from our defense of those institutions when our own Party is in power. When we cross the line, it means the opposing party will feel justified in likewise crossing the line when they get power.

We see that with the continual erosion of the Supreme Court over the last several decades. It’s easy to blame the other Party for this, but the reality is that both parties have been going back and forth corrupting this institution. The Republicans’ refusal to confirm Garland and their eagerness to confirm Barrett is egregious, but justified by the Democrats’ use of the nuclear option when they were in power. When Biden gets power, he’s going to try to pack the court, which historically has been taught to school children as a breakdown of liberal democratic institutions, but which will be justified by the Republicans’ bad behavior in eroding those institutions. We might be able to avert court packing if Biden gets into power now, but we won’t after four more years of Trump court appointments.

It’s not just the politicization of the Supreme Court, it’s the destruction of all our institutions. Somebody is going to have to stand for Principle over Party and put a stop to this. That is the commitment of the #NeverTrump. The Democrats are going to be bad when they get into power, but stopping them means putting our own house in order first.

This post makes it look like I’m trying to convince fellow Republicans why they should vote against Trump, and I suppose it is. However, my real purpose is to communicate with Democrats. My Twitter feed is full of leftists who oppose liberal democratic institutions even more than Trump. I want evidence to prove that I actually stand for Principle, and not just Party.