All posts by Robert Graham

California’s bad IoT law

Post Syndicated from Robert Graham original https://blog.erratasec.com/2018/09/californias-bad-iot-law.html

California has passed an IoT security bill, awaiting the governor’s signature/veto. It’s a typically bad bill based on a superficial understanding of cybersecurity/hacking that will do little to improve security, while doing a lot to impose costs and harm innovation.


It’s based on the misconception of adding security features. It’s like dieting, where people insist you should eat more kale, which does little to address the problem that you are pigging out on potato chips. The key to dieting is not eating more but eating less. The same is true of cybersecurity, where the point is not to add “security features” but to remove “insecure features”. For IoT devices, that means removing listening ports and cross-site/injection issues in web management. Adding features is typical “magic pill” or “silver bullet” thinking that we spend much of our time in infosec fighting against.

We don’t want arbitrary features like firewalls and anti-virus added to these products. It’ll just increase the attack surface, making things worse. The one possible exception to this is “patchability”: some IoT devices can’t be patched, and that is a problem. But even here, it’s complicated. Even if IoT devices are patchable in theory, there is no guarantee vendors will supply such patches, or worse, that users will apply them. Users overwhelmingly forget about devices once they are installed. These devices aren’t like phones/laptops, which notify users about patching.

You might think a good solution to this is automated patching, but only if you ignore history. Many rate “NotPetya” as the worst, most costly, cyberattack ever. That was launched by subverting an automated patch. Most IoT devices exist behind firewalls, and are thus very difficult to hack. Automated patching gets beyond firewalls; it makes it much more likely mass infections will result from hackers targeting the vendor. The Mirai worm infected fewer than 200,000 devices. A hack of a tiny IoT vendor can gain control of more devices than that in one fell swoop.

The bill does target one insecure feature that should be removed: hardcoded passwords. But it gets the language wrong. A device doesn’t have a single password, but many things that may or may not be called passwords. A typical IoT device has one system for creating accounts on the web management interface, a wholly separate authentication system for services like Telnet (based on /etc/passwd), and yet another wholly separate system for things like debugging interfaces. Just because a device does the prescribed thing of using a unique or user-generated password in the user interface doesn’t mean it doesn’t also have a bug in Telnet.

That was the problem with devices infected by Mirai. The description that these were hardcoded passwords is only a superficial understanding of the problem. The real problem was that there were different authentication systems in the web interface and in other services like Telnet. Most of the devices vulnerable to Mirai did the right thing on the web interfaces (meeting the language of this law) requiring the user to create new passwords before operating. They just did the wrong thing elsewhere.

People aren’t really paying attention to what happened with Mirai. They look at the 20 billion new IoT devices that are going to be connected to the Internet by 2020 and believe Mirai is just the tip of the iceberg. But it isn’t. The IPv4 Internet has only 4 billion addresses, which are pretty much already used up. This means those 20 billion won’t be exposed to the public Internet like Mirai devices, but hidden behind firewalls that translate addresses. Thus, rather than Mirai presaging the future, it represents the last gasp of the past that is unlikely to come again.

This law is backwards looking rather than forward looking. Looking forward, by far the most important thing that will protect IoT in the future is “isolation” mode on the WiFi access-point, which prevents devices from talking to each other (or infecting each other). This prevents “cross site” attacks in the home. It prevents infected laptops/desktops (which are much more under threat than IoT) from spreading to IoT. But lawmakers don’t think in terms of what will lead to the most protection, they think in terms of who can be blamed. Blaming IoT devices for the moral weakness of not doing “reasonable” things is satisfying, regardless of whether it’s effective.

The law makes the vague requirement that devices have “reasonable” and “appropriate” security features. It’s impossible for any company to know what these words mean, impossible to know if they are compliant with the law. Like other laws that use these terms, it’ll have to be worked out in the courts. But security is not like other things. Rather than something static that can be worked out once, it’s always changing. This is especially true since the adversary isn’t something static like wear and tear on car parts, but dynamic: as defenders improve security, attackers change tactics, so what’s “reasonable” is constantly changing. Security struggles with hindsight bias, so what’s “reasonable” and “appropriate” seems more obvious after bad things occur than before. Finally, you are asking the lay public to judge reasonableness, so a jury can easily be convinced that “anti-virus” would be a reasonable addition to IoT devices despite experts believing it would be unreasonable and bad.

The intent is for the law to make some small static improvement, like making sure IoT products are patchable, after a brief period of litigation. The reality is that the issue is going to constantly be before the courts as attackers change tactics, causing enormous costs. It’s going to saddle IoT devices with encryption and anti-virus features that the public believe are reasonable but that make security worse.

Lastly, Mirai was only 200k devices that were primarily outside the United States. This law fails to address this threat because it only applies to California devices, not the devices purchased in Vietnam and Ukraine that, once they become infected, would flood California targets. If somehow the law influenced general improvement of the industry, you’d still be introducing unnecessary costs to 20 billion devices in an attempt to clean up 0.001% of those devices.

In summary, this law is based upon an obviously superficial understanding of the problem. It in no way addresses the real threats, but at the same time, introduces vast costs to consumers and innovation. Because of the changing technology with IPv4 vs. IPv6 and WiFi vs. 5G, such laws are unneeded: IoT of the future is inherently going to be much more secure than the Mirai-style security of the past.


Update: This tweet demonstrates the points I make above. It’s about how Tesla used an obviously unreasonable 40-bit key in its keyfobs.

It’s obviously unreasonable and they should’ve known about the weakness of 40-bit keys, but here’s the thing: every flaw looks this way in hindsight. There never has been a complex product ever created that didn’t have similarly obvious flaws.

On the other hand, what Tesla does have, better than any other car maker, is the proper programs whereby they can be notified of such flaws in order to fix them in a timely manner. Better yet, they offer bug bounties. This isn’t a “security feature” in the product, yet it is absolutely the #1 most important thing a company can have, more so than any security feature. What we are seeing with the IoT marketplace in general is that companies lack such notification/disclosure programs: companies can be compliant with the California law while still lacking such programs.
Finally, Tesla cars are “Internet connected devices” according to the law, so they can be sued under that law for this flaw, even though it represents no threat the law was intended to handle.
Again, the law wholly misses the point. A law demanding IoT companies have a disclosure program would actually be far more effective at improving security than this current law, while not imposing the punitive costs the current law does.

Debunking Trump’s claim of Google’s SOTU bias

Post Syndicated from Robert Graham original https://blog.erratasec.com/2018/08/debunking-trumps-claim-of-googles-sotu.html

Today, Trump posted this video claiming that Google promoted all of Obama’s “State of the Union” (SotU) speeches but none of his own. In this post, I debunk this claim. The short answer is this: it’s not Google’s fault but Trump’s, for not having a sophisticated social media team.

The evidence still exists at the Internet Archive (aka. “Wayback Machine”) that archives copies of websites. That was probably how that Trump video was created, by using that website. We can indeed see that for Obama’s SotU speeches, Google promoted them, such as this example of his January 12, 2016 speech:
And indeed, if we check for Trump’s January 30, 2018 speech, there’s no such promotion on Google’s homepage:
But wait a minute, Google claims they did promote it, and there’s even a screenshot on Reddit proving Google is telling the truth. Doesn’t this disprove Trump?
No, it actually doesn’t, at least not yet. It’s comparing two different things. In the Obama example, Google promoted hours ahead of time that there was an upcoming event. In the Trump example, they didn’t do that. Only once the event went live did they mention it.
I failed to notice this in my examples above because the Wayback Machine uses GMT timestamps. At 9pm EST when Trump gave his speech, it was 2am the next day in GMT. So picking the Wayback page from January 31st we do indeed see the promotion of the live event.
Thus, Trump still seems to have a point: Google promoted Obama’s speech better. They promoted his speeches hours ahead of time, but Trump’s only after they went live.
But hold on a moment, there’s another layer to this whole thing. Let’s look at those YouTube URLs. For the Obama speech, we have this URL:
For the Trump speech, we have this URL:
I show you the complete URLs to show you the difference. The first video is from the White House itself, whereas the second isn’t (it’s from the NBC livestream).
So here’s the thing, and I can’t stress this enough: Google can’t promote a link that doesn’t exist. They can’t say “Click Here” if there is no “here” there. Somebody has to create a link ahead of time. And that “somebody” isn’t YouTube: they don’t have cameras to create videos, they simply publish videos created by others.
So what happened here is simply that Obama had a savvy media team that knew how to create YouTube live events and make sure they got promoted, while Trump doesn’t have such a team. Trump relied upon the media (which he hates so much) to show the video live, making no effort himself to do so. We can see this for ourselves: while the above link clearly shows the Obama White House having created his live video, the current White House channel has no such video for Trump.
So clearly the fault is Trump’s, not Google’s.

But wait, there’s more to the saga. After Trump’s speech, Google promoted the Democrat response:

Casually looking back through the Obama years, I don’t see any equivalent Republican response. Is this evidence of bias?

Maybe. Or again, maybe it’s still that the Democrats are more media savvy than the Republicans. Indeed, what came after Obama’s speech on YouTube in some years was a question-and-answer session with Obama himself, which of course is vastly more desirable for YouTube (personal interaction!!) and is going to push any competing item into obscurity.

If Trump wants Google’s attention next January, it’s quite clear what he has to do. First, set up a live event the day before so that Google can link to it. Second, set up a post-speech interactive question event that will, of course, smother the heck out of any Democrat response — and probably crash YouTube in the process.

Buzzfeed quotes Google PR saying:

On January 30 2018, we highlighted the livestream of President Trump’s State of the Union on the google.com homepage. We have historically not promoted the first address to Congress by a new President, which is technically not a State of the Union address. As a result, we didn’t include a promotion on google.com for this address in either 2009 or 2017.

This is also bunk. It ignores the difference between promoting upcoming and live events. I can’t see that they promoted any of Bush’s speeches (like in 2008) or even Obama’s first SotU in 2010, though they did promote a question/answer session with Obama after the 2010 speech. Thus, the claimed trend rests on only a single data point.

My explanation is better: Obama had a media savvy team that reached out to them, whereas Trump didn’t. But you see the problem for a PR flack: while they know they have no corporate policy to be biased against Trump, at the same time, they don’t necessarily have an explanation, either. They can point to data, such as the live promotion page, but they can’t necessarily explain why. An explanation like mine is harder for them to reach.

Provisioning a headless Raspberry Pi

Post Syndicated from Robert Graham original https://blog.erratasec.com/2018/08/provisioning-headless-raspberry-pi.html

The typical way of installing a fresh Raspberry Pi is to attach power, keyboard, mouse, and an HDMI monitor. This is a pain, especially for the diminutive RPi Zero. This blogpost describes a number of options for doing headless setup: Ethernet, Ethernet gadget, WiFi, and serial connection. These examples use a MacBook; maybe I’ll get around to a blogpost describing this from Windows.

Burning micro SD card

We are going to edit the SD card before booting, so for completeness, I thought I’d describe the process of burning an SD card.

We are going to download the latest “raspbian” operating system. I download the “lite” version because I’m not using the desktop features. It comes as a compressed .zip file which we need to extract into an .img file. Just double-click on the .zip on Windows or Mac.

The next step is to burn the image to an SD card. On Windows I use Win32DiskImager. On Mac I use the following command-line steps:

$ sudo -s
# mount
# diskutil unmount /dev/disk2s1
# dd bs=1m if=~/Downloads/2018-06-27-raspbian-stretch-lite.img of=/dev/disk2 conv=sync

First, I need a root prompt. I then use the mount command to find out where the micro SD card is mounted in the file system. It’s usually /dev/disk2s1, but could be disk3 or disk4 depending upon other things that may already be mounted on my Mac, such as USB drives or dmg files. It’s important to know the correct drive because the dd utility is unforgiving of mistakes and can wipe out your entire drive. For gosh’s sake, don’t use disk1!!!! Remember dd stands for danger-danger (well, many claim it stands for disk-dump, but seriously, it’s dangerous).
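If you’re not sure which disk the card is, you can also run diskutil list before and after inserting it and see which disk appears:

$ diskutil list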

The next step is to unmount the drive. Instead of the Unix umount utility use the diskutil unmount macOS tool.

Now we use good ol’ dd to copy the image over. The above example is my recently downloaded Raspbian image, which is two months old. When you do this, it’ll be a newer version with a different file name, so look in your ~/Downloads folder for the correct name.

This takes a while to write to the SD card. You can type [ctrl-T] to see progress if you want.

When we are done writing, don’t eject the card. We are going to edit the contents as described below before we stick it into our Raspberry Pi. After running dd, the card gets automatically mounted on your Mac; on mine it comes up as /Volumes/boot. When I say “root directory of the SD card” in the instructions below, I mean that directory.

Troubleshooting: If you get the “Resource busy” error when running dd, it means you didn’t unmount the drive. Go back and run diskutil unmount /dev/disk2s1 (or equivalent for whatever drive mount tells you the SD card is using).

You can use the “raw” disk instead of normal disk, such as /dev/rdisk2. I don’t know what the tradeoffs are.

Ethernet

The RPi B comes with Ethernet built-in. You simply need to hook up the Ethernet cable to your network to automatically get an IP address. Or, you can directly connect the Ethernet to your laptop, though that’ll require some additional steps.
For an RPi Zero, you can attach a USB Ethernet adapter via an OTG converter to accomplish the same goal. However, in the next section, we’ll describe using an OTG gadget instead, which is better.
We want to use Ethernet to ssh into the device, but there’s a problem: the ssh service is not enabled by default in Raspbian. To enable it, just create a file ssh (or ssh.txt) in the root directory of the SD card. On my Macbook, it looks like:
$ touch /Volumes/boot/ssh
Eject the SD card, stick it into your Raspberry Pi, and boot it. After the device has booted, you’ll need to discover its IP address. On the local network from your Macbook, with the “Bonjour” service, you can just use the hostname “raspberrypi.local” (you can install Bonjour on Windows with iTunes, or the avahi service on Linux). Or, you can sniff the network with tcpdump. Or, you can scan for port 22 on the network with nmap or masscan. Or, you can look on your router’s DHCP status page to see what was assigned.
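For example, a quick nmap scan for open SSH ports looks like the following (adjust the address range to match your own network):

$ nmap -p22 --open 192.168.1.0/24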
When there is a direct Ethernet-to-Ethernet connection to your laptop, the RPi won’t get an IP address because there is no DHCP service running on your laptop. In that case, the RPi will have an address in the range 169.254.x.x, or a link-local IPv6 address. You can discover which one via sniffing, or again, via Bonjour using raspberrypi.local.
Or, you can turn on “connection sharing” in Windows or macOS. This sets up your laptop to NAT the Ethernet out through your laptop’s other network connection (such as WiFi). This also provides DHCP to the device. On my macBook, it assigns the RPi an address like 192.168.2.2 or 192.168.2.3.
System preferences -> Sharing
Allowing Ethernet devices to share WiFi connection to Internet
In the above example, the RPi is attached via my Thunderbolt Ethernet cable. I could also have used a USB Ethernet, or RNDIS Ethernet (described below).
The default login for Raspbian is username pi and password raspberry. To ssh to the device, use a command line like:
$ ssh pi@raspberrypi.local
or
$ ssh pi@192.168.2.2
Some troubleshooting tips. If you get an error “connection refused”, that means the remote SSH service isn’t running. It has to generate a new, random SSH key the first time it runs, so startup can take a while. I just waited another minute and tried again, and everything worked. If you get an error “connection closed”, then it borked generating a key on the first startup. The service is running and accepting the connection, then closing it because it has no key to use. There’s no hope for things at this point other than reflashing the SD card and starting over from scratch, or logging in some other way and fixing the SSH installation manually. I had this problem happen once, I don’t know why, and ended up just starting over from scratch.

RPi Zero OTG Ether Gadget

The Raspberry PI Zero (not the other models) supports OTG (On-The-Go) USB. That means it can be something on either end of a USB cable, either a host or a device. Among the devices it can emulate is an Ethernet adapter, thus allowing a USB cable to act as a virtual Ethernet connection. This is useful because the same USB cable can also power the RPi Zero. Just be sure to plug the cable into the port labeled “USB” instead of “PWR IN”.
I had to mess with these instructions twice. I haven’t troubleshot why, but I suspect that things failed the first time around while setting up the RNDIS drivers and Internet sharing. Once I got those configured correctly to work automatically, I reflashed the SD card and started again from scratch, and things worked slick.
As described above for Ethernet, after flashing the Raspbian image to the SD card, do “touch ssh” in its root directory to tell it to enable the SSH service on bootup.
Also in that root directory you’ll find a file config.txt. Edit that file and add the line “dtoverlay=dwc2” to the bottom.
The dwc2 is a driver for the OTG port that auto-detects if the port should be in host mode (where you attach devices to the RPi like flash drives), or device mode (such when emulating Ethernet, serial ports, and so forth).
Also in the root directory you’ll find cmdline.txt. Edit that file. It has only one very long line of text (that’ll wrap in the terminal). Move the cursor to just after rootwait and add the text “modules-load=dwc2,g_ether”.
These are the Linux command-line boot parameters. This is telling Linux to load the dwc2 driver, and configure that driver for emulating an Ethernet adapter.
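For illustration only, the end of the edited line might look something like this; the exact parameters vary between Raspbian releases, so edit your own file rather than copying this:

... fsck.repair=yes rootwait modules-load=dwc2,g_ether quiet init=/usr/lib/raspi-config/init_resize.sh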
Now cleanly eject the SD card, stick it in the RPi zero, and connect the USB cable. Remember to plug into the USB port on the RPi Zero not the PWR IN port.
On macOS, go to the Network System Preferences. Wait a couple minutes for the RPi Zero to boot and you should see an “RNDIS” Ethernet device appear. I’ve given mine a manual IP address, though I don’t think it matters, because I’m going to use “Internet sharing” to share the connection anyway.
RPi0 should now appear as RNDIS device
By the way, “RNDIS” is the name Microsoft gave this virtual Ethernet adapter, based on the NDIS name for Ethernet drivers Microsoft first created in the 1980s. It’s the name we use on macOS, Linux, BSD, Android, etc.
I struggled getting the proper IP address on this thing and ended up using Internet sharing, as described above, for this. The only change was to share the RNDIS Ethernet instead of Thunderbolt Ethernet.
Use NAT/DHCP to allow RPi0 to share my laptop’s WiFi
As described above, now do “ssh pi@raspberrypi.local” or “ssh pi@192.168.2.2” (IP address as appropriate), with password “raspberry”.
As I mentioned above, I had to do this twice to get it to work the first time, I suspect that configuring macOS for the first time screwed things up.
The Ethernet interface will come up with the name usb0.

WiFi

For the devices that support WiFi, we can use WiFi instead of Ethernet.

To start with, we again create the ssh file to tell it to start the service:

$ touch /Volumes/boot/ssh

Now we create a file in the SD root directory called “wpa_supplicant.conf” with contents that look like the following:

ctrl_interface=DIR=/var/run/wpa_supplicant GROUP=netdev
update_config=1
country=US

network={
    ssid="YOURSSID"
    psk="YOURPASSWORD"
    scan_ssid=1
}

You need to change the SSID and password to conform to your WiFi network, as I do in the screenshot below (this is actually the local bar’s network):

Safely eject the card, insert into RPi, and power it up. In the following screenshot, I’m using a battery to power the RPi, to show there is no other connection (Ethernet, USB, etc.).

I then log in from my laptop that is on the same WiFi. Again, you can use either raspberrypi.local (on a laptop using Apple’s Bonjour service), or use the raw IP address, as in the following example:

One time it didn’t work: everything was configured right, but for some reason it didn’t find the WiFi network. Restarting the device fixed the problem. I’m not sure why this happened.

Enabling serial cable

The old-school way is to use a serial cable.

The first step is to go to Adafruit and buy a serial cable for $10, this device for $7, or for $6 from Amazon, and install the drivers as documented here. The cable I got requires the “SiLabs CP210X” drivers.

The next step is to edit config.txt on the SD card and add the line "enable_uart=1" at the end.

Now we are ready to cleanly eject the SD card and stick it in the Raspberry Pi.

First, let’s hook the serial cable to the Raspberry Pi. NOTE: don’t plug in the USB end into the computer yet!!! The guide at Adafruit shows which colored wires to connect to which GPIO pins.

From Adafruit

Basically, the order is (red) [blank] [black] [white] [green] from the outer edge. It’s the same configuration for Pi Zeroes, but you may get yours without pins. You either have to solder on some jumper wires or use alligator clips.

You have two options for how to power the board. You can either connect the red wire to the first pin (as I do in the picture below) or you can connect power as normal, such as to a second USB port on your laptop. I chose to power my Raspberry Pi 3 Model B+ from the serial cable. I got occasional messages complaining about “undervoltage”, but everything worked without corrupting the SD card (SD card corruption is often what happens with power problems).

Once you’ve got the serial cable attached to the Pi, plug it into the USB port on the laptop. The Pi should start booting up.

On Windows you can use Putty, and on Linux you can use /dev/ttyUSB0, but on the Macbook we are going to use an outgoing serial device. The first thing is to find the device, such as doing “ls /dev/cu.*” to see which devices are available. On my Macbook, I get “/dev/cu.SLAB_USBtoUART” as the one to use, plus some other possibilities (from Bluetooth and my iPhone) that I’m not interested in:

/dev/cu.Bluetooth-Incoming-Port
/dev/cu.SLAB_USBtoUART
/dev/cu.iPhone-WirelessiAPv2

The command to run to connect to the Pi is:

$ sudo screen /dev/cu.SLAB_USBtoUART 115200

You’ll have to hit the return key a couple times for it to know you’ve connected, at which point it’ll give you a command prompt.

(I’ve renamed the system from ‘raspberrypi’ in the screenshot to ‘pippen’).

Note that with some jumper wires you can simply connect the UART from one Raspberry Pi to another.

Conclusion

So here you have the various ways:
  • Raspberry Pi 3 Model B/B+ – Ethernet
  • Raspberry Pi 3 Model B/B+ – WiFi
  • Raspberry Pi 3 Model B/B+ – Serial
  • Raspberry Pi Zero/Zero W – USB Ethernet dongle
  • Raspberry Pi Zero/Zero W – OTG Ethernet gadget
  • Raspberry Pi Zero/Zero W – Serial
  • Raspberry Pi Zero W – WiFi
It seems we should also be able to get Bluetooth serial working, but there’s no support for that yet in the Raspbian build. A serial OTG gadget (that works like the Ethernet gadget) should also work in theory, but apparently it needs an extra configuration step after bootup, so can’t be configured completely headless.

DeGrasse Tyson: Make Truth Great Again

Post Syndicated from Robert Graham original https://blog.erratasec.com/2018/08/degrasse-tyson-make-truth-great-again.html

Neil deGrasse Tyson tweets the following:

When people make comparisons with Orwell’s “Ministry of Truth”, he obtusely persists:
Given that Orwellian dystopias were the theme of this summer’s DEF CON hacker conference, let’s explore what’s wrong with this idea.

Truth vs. “Truth”

I work in a corrupted industry, variously known as the “infosec” community or “cybersecurity” industry. It’s a great example of how truth is corrupted into “Truth”.
At a recent government policy meeting, I pointed out how vendors often downplay the risk of bugs (vulnerabilities that can be exploited by hackers). When vendors are notified of these bugs and release a patch to fix them, they often give a risk rating. These ratings are often too low, in order to protect the corporate reputation. The representative from Oracle claimed that they didn’t do that, and that indeed, they’ll often overestimate the risk. Other vendors chimed in, also claiming they rated the risk higher than it really was.
In a neutral world, deliberately overestimating the risk would be the same falsehood as deliberately underestimating it. But we live in a non-neutral world, where only one side is a lie, the middle is truth, and the other side is “Truth”. Lying in the name of the “Truth” is somehow acceptable.
Moreover, Oracle is famous for having downplayed the risk of significant bugs in the past, and is well-known in the industry as being the least trustworthy vendor as far as the security of their products is concerned. Much of their policy efforts in Washington D.C. are focused on preventing their dirty laundry from being exposed. They aren’t simply another vendor promoting “Truth”, but one deliberately exploiting “Truth” to corrupt ends.
That we should exaggerate the risks of cybersecurity, deliberately lie to people for their own good, is the uncontroversial consensus of our infosec/cybersec community. Most do it, few think this is wrong. Security is a moral imperative that justifies “Truth”.

The National Academy of Sciences

So are we getting the truth or “Truth” from organizations like the National Academy of Sciences?
The question here isn’t global warming. That mankind’s carbon emissions warm the climate is truth. We have a good understanding of how greenhouse gases work, as well as many measures of the climate showing that warming is occurring. The Arctic is steadily losing ice each summer.
Instead, the question is “Global Warming”, the claims made by politicians on the subject. Do politicians on the left fairly represent the truth, or are they the “Truth”?
Which side is the National Academy of Sciences on? Are they committed to the truth, or (like the infosec/cybersec community) are they pursuing “Truth”? Is global warming a moral imperative that justifies playing loose with the facts?
Googling “national academy of sciences climate change” quickly leads to this document: “Climate Change: Evidence and Causes“. Let’s skip past the basics and go directly to “hurricanes”. It’s a favorite topic among politicians, where every hurricane season they blame the latest damage on climate change. Is such blame warranted?
The answer is “no”. There is not sufficient evidence to conclude hurricanes have gotten worse. There is good reason to believe they might get worse, since warmer oceans lead to more energy, but as far as we can tell, it hasn’t happened yet. Moreover, when it does happen, the best theories point to hurricanes only becoming slightly worse. It’s certainly worth adding to future estimates of the costs of climate change, but it’s not going to be catastrophic.
The above scientific document, though, punts on this answer, as shown in the below screenshot:
The document is clearly a political one. Its content is intended to refute any scientific claims made by Republicans, but not to offend Democrats. It is on the side of “Truth”, not truth. If Obama blames hurricane damage on the oil companies, the National Academy of Sciences is going to politely dance around the issue.
Whenever I point out in conversation that the science is against somebody’s claim about hurricanes, people ask me to cite my sources. This is exactly the source I would cite, but it’s difficult. Its non-answer on hurricanes should be sufficient (after all, science), but since they prevaricate without being explicit on the issue, few accept this source.

Why this matters

Last year in the state of Washington, the Republicans put a carbon tax bill on the ballot in order to combat climate change. The Democrats shot it down.
The reason (for both actions) is that the tax was revenue neutral, meaning the added revenue from the carbon tax was offset by reduction in other taxes (namely, the sales tax). This matches the Republican ideology: they have no particular dispute with climate change as such, they just oppose expansion of government. If you can address climate change without increasing taxes or regulation, they have no principled reason to oppose it. Thus, revenue neutral carbon taxes are something Republicans will easily agree with. Even if they don’t believe in global warming, they have no real opposition to replacing one tax with another.
Conversely, the Democrats don’t care about solving climate change. Instead, their goal is to expand government, increasing taxes and regulation. They will reject any proposal to address climate change that doesn’t match their ideological goals.
This description of what happened is extreme, of course. Things are invariably more nuanced than this. But there’s still a kernel of truth here. This idea that one side is being ideological (denying climate change) and other side scientific is false. Both sides are equally ideological/scientific, just in different directions.
It’s therefore not just Republican ideology here that is the sticking point, but also Democrat. As long as Democrats believe they don’t have to compromise, because the “Truth” is on their side, they won’t. Instead of agreeing on revenue neutral carbon taxes, they’ll insist on that extra revenue subsidizing photovoltaic panels (or some such that increases total government taxes/spending). The National Academy of Sciences defending “Truth” is not helping the situation.

Conclusion

I believe in global-warming/climate-change, that mankind’s carbon emissions are increasing temperatures and that we must do something about this. I drive an electric car, but more importantly, use carbon offsets in order to be completely carbon neutral. I want a large carbon tax, albeit one that is revenue neutral. This blogpost shouldn’t be interpreted in any way as “denying climate change”.
Instead, the point is about “Truth”. I see the facile corruption of “Truth” in my own industry. It’s incredibly Orwellian. I’m disappointed how those like Neil deGrasse Tyson haven’t learned the lessons of history and 1984 about “Truth”.

That XKCD on voting machine software is wrong

Post Syndicated from Robert Graham original https://blog.erratasec.com/2018/08/that-xkcd-on-voting-machine-software-is.html

The latest XKCD comic on voting machine software is wrong, profoundly so. It’s the sort of thing that appeals to our prejudices, but mistakes the details.

Accidents vs. attack

The biggest flaw is that the comic confuses accidents vs. intentional attack. Airplanes and elevators are designed to avoid accidental failures. If that’s the measure, then voting machine software is fine and perfectly trustworthy. Such machines are no more likely to accidentally record a wrong vote than the paper voting systems they replaced — indeed less likely. The reason we have electronic voting machines in the first place was due to the “hanging chad” problem in the Bush v. Gore election of the year 2000. After that election, a wave of new, software-based, voting machines replaced the older inaccurate paper machines.
The question is whether software voting machines can be attacked. Well, if that’s the measure, then airplanes aren’t safe at all. Security against human attack consists of the entire infrastructure outside the plane, from the TSA forcing us to take off our shoes to trade restrictions preventing the proliferation of Stinger missiles.
Confusing the two, accidents vs. attack, is used here because it makes the reader feel superior. We get to mock and feel superior to those stupid software engineers for not living up to what’s essentially a fictional standard of reliability.
To repeat: software is better than the mechanical machines it replaced, which is why there are so many software-based machines in the United States. The issue isn’t normal accuracy, but robustness against a different standard, against attack — a standard which airplanes and elevators suck at.

The problems are as much hardware as software

Last year at the DEF CON hacking conference they had an “Election Hacking Village” where they hacked a number of electronic voting machines. Most of those “hacks” were against the hardware, such as soldering on a JTAG device or accessing USB ports. Other problems have included voting machines being sold on eBay whose data wasn’t wiped, allowing voter records to be recovered.
What we want to see is hardware designed more like an iPhone, where the FBI can’t decrypt a phone even when they really really want to. This requires special chips, such as secure enclaves, signed boot loaders, and so on. Only once we get the hardware right can we complain about the software being deficient.
To be fair, software problems were also found at DEF CON, like an exploit over WiFi. Though for a lot of the problems, it’s questionable whether the fault lies in the software design or the hardware design; they’re fixable in either one. The situation is better described as the entire design being flawed, from the “requirements”, to the high-level system “architecture”, and lastly to the actual “software” code.

It’s lack of accountability/fail-safes

We imagine the threat is that votes can be changed in the voting machine, but it’s more profound than that. The problem is that votes can be changed invisibly. The first change experts want to see is adding a paper trail, rather than fixing bugs.
Consider “recounts”. With many of today’s electronic voting machines, this is meaningless, with nothing to recount. The machine produces a number, and we have nothing else to test against whether that number is correct or false. You can press a button and do an instant recount, but it won’t tell you any other answer than the original one.
A paper trail changes this. After the software voting machine records the votes, it prints them to paper for the voter to check. This retains the features of better user-interface design than the horrible punch-hole machines of yore, and retains the feature of quick and cheap vote tabulation, so we know the results of the election quickly. But, if there’s an irregularity, there exists an independent record that we can go back, with some labor, and verify.
It’s like fail-safe systems in industrial settings, where we are less concerned about whether the normal system has an error, and much more concerned about whether the fail-safe works. It’s like how, famously, Otis was not the inventor of the elevator, but the inventor of the elevator brake that will safely stop the car from plummeting to your death if the cable snaps.
What’s lacking in election machines therefore is not good or bad software engineering, but the failure of anybody to create fail-safes in the design, fail-safes that will work regardless of how the software works.

It’s not just voting machines

It’s actually really hard for the Russians to hack voting machines, as they have to be attacked on a one-by-one basis. It’s hard to pull off a mass hack that affects them all.
It’s much easier to target the back-end systems that tabulate the votes, which are more often normal computers connected to the Internet.
In addition, there are other ways that hackers can target elections. For example, the political divide in America between rural (Republican) and urban (Democrat) voters is well known. An attack against traffic lights, causing traffic jams, is enough to swing the vote slightly in the direction of rural voters. That makes a difference in places like last night’s by-election in Ohio where a House candidate won by a mere 1,700 votes.
Voting machines are important, but there’s way too much focus on them as if they are the only target to worry about.

Conclusion

The humor of this comic rests on smug superiority. But it’s wrong. It’s applying a standard (preventing accidents) against a completely different problem (stopping attackers) — software voting machines are actually better against accidents than the paper machines they replace. It’s ignoring the problems, which are often more system and hardware design than software. It ignores the solution, which isn’t to fix software bugs, but to provide an independent, auditable paper trail.

What the Caesars (@DefCon) WiFi situation looks like

Post Syndicated from Robert Graham original https://blog.erratasec.com/2018/08/what-caesars-defcon-wifi-situation.html

So I took a survey of WiFi at Caesar’s Palace and thought I’d write up some results.

When we go to DEF CON in Vegas, hundreds of us bring our WiFi tools to look at the world. Actually, no special hardware is necessary, as modern laptops/phones have WiFi built-in, while the operating system (Windows, macOS, Linux) enables “monitor mode”. Software is widely available and free. We still love our specialized WiFi dongles and directional antennas, but they aren’t really needed anymore.

It’s also legal, as long as you are just grabbing header information and broadcasts. Which is about all that’s useful anymore, as encryption has become the norm — we can pretty much only see what we are allowed to see. The days of grabbing somebody’s session-cookie and hijacking their web email are long gone (though that was a fun period). There are still a few targets around if you want to WiFi hack, but most are gone.

So naturally I wanted to do a survey of what Caesar’s Palace has for WiFi during the DEF CON hacker conference located there.

Here is a list of access-points (on channel 1 only) sorted by popularity, i.e. the number of stations using them. These have mind-blowingly high numbers, in the ~3000 range for “CAESARS”. I think something is wrong with the data.

I click on the first one to drill down, and I find a source of the problem. I’m seeing only “Data Out” packets from these devices, not “Data In”.

These are almost entirely ARP packets from devices, associated with other access-points, not actually associated with this access-point. The hotel has bridged (via Ethernet) all the access-points together. We can see this in the raw ARP packets, such as the one shown below:

WiFi packets have three MAC addresses, the source and destination (as expected) and also the address of the access-point involved. The access point is the actual transmitter, but it’s bridging the packet from some other location on the local Ethernet network.
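To make this concrete, here’s a rough sketch of the 802.11 data-frame header as a C struct, for the case of frames transmitted from an access-point to stations (the struct and field names are mine):

#include <stdint.h>

/* 802.11 data-frame header, for frames sent from an access-point
 * to stations, as with the bridged ARP broadcasts described above */
struct ieee80211_data_hdr {
    uint16_t frame_control;  /* type/subtype plus the ToDS/FromDS flags */
    uint16_t duration;
    uint8_t  addr1[6];       /* receiver: the destination station, or broadcast */
    uint8_t  addr2[6];       /* transmitter: the access-point sending the frame */
    uint8_t  addr3[6];       /* original source, bridged in from the Ethernet side */
    uint16_t seq_ctrl;
};

A bridged packet betrays itself when addr2 (the transmitting access-point) differs from addr3 (the original source somewhere else on the Ethernet).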

Apparently, CAESARS dumps all the guests into the address range 10.10.x.x, all going out through the router 10.10.0.1. We can see this from the ARP traffic, as everyone seems to be ARPing that router.

I’m probably seeing all the devices on the CAESARS WiFi. In other words, if I sit next to another access-point, such as one on a different channel, I’m likely to see the same list. Each broadcast appears to be transmitted by all access-points, carried via the backend bridged Ethernet network.

The reason Caesars does it this way is so that you can roam, so that you can call somebody on FaceTime and walk to the end of the Forum shops and back without dropping the phone call. At least in theory; I haven’t tested it to see if things actually work out this way. It’s the way massive complexes like Caesars ought to work, but which many fail at doing well. Like most “scale” problems, it sounds easy and straightforward until you encounter all the gotchas along the way.

Apple’s market share for these devices is huge, with roughly 2/3rds of all devices being Apple. Samsung has 10% of Apple’s share. Here’s a list of vendor IDs (the first 3 bytes of the MAC address) by popularity, that I’m seeing on that one access-point:

  • 2327 Apple
  • 257 Samsung
  • 166 Intel
  • 132 Murata
  • 55 Huawei
  • 29 LG
  • 27 HTC-phone
  • 23 Motorola
  • 21 Foxconn
  • 20 Microsoft
  • 17 Amazon
  • 16 Lite-On
  • 13 OnePlus
  • 12 Rivet Networks (Killer)
  • 11 (random)
  • 10 Sony Mobile
  • 10 Microsoft
  • 8 AsusTek
  • 7 Xiaomi
  • 7 Nintendo

Apparently, 17 people can’t bear to part with their Amazon Echo/Alexa devices during their trip to Vegas and brought the devices with them. Or maybe those are Kindle devices.

Remember that these are found by tracking the vendor ID from the hardware MAC addresses built into every phone/laptop/device. Historically, we could also track these MAC addresses via “probe” WiFi broadcasts from devices looking for access-points. As I’ve blogged before, modern iPhones and Androids randomize these addresses so we can no longer track the phones when they are just wandering around unconnected. Only once they connect do they use their real MAC addresses.

In the above sample, I’ve found ~1300 probers, ~90% of whose MAC addresses are randomized. As you can see, because of the way Caesars sets up their network, I can track MAC addresses better because of ARP broadcasts than I can with tracking WiFi probe broadcasts.

While mobile devices are the biggest source of MAC addresses, they also identify the fixed infrastructure. For example, some of the suites in Caesars have devices with a “Crestron” MAC address. Somebody is releasing an exploit at BlackHat for Crestron devices. There’s a patch available, but chances are good that hackers will start hacking these devices before Caesars gets around to patching them.

WPA-3 promises to get rid of open WiFi hotspots like CAESARS by doing “opportunistic encryption” by default. This is better, preventing me from even seeing the contents of ARP packets passively. However, as I understand the standard, it’ll still allow me to collect the MAC addresses passively, as in this example.

Conclusion

This post doesn’t contain any big hack. It’s fascinating how big WiFi has become, as everyone seems to be walking around with a WiFi device, and most are connecting to the hotel WiFi. Yet, with ubiquitous SSL and WPA encryption, there’s much less opportunity for mischief, even though there’s a lot more to see.

The biggest takeaway is that even from a single point on the big network on the CAESARS compound, I can get a record of some identifier that can, in ideal circumstances, be traced back to you. In theory, I could sit in an airport, or drive around your neighborhood, and match up those records with these records. However, because of address randomization in probes, I can only do this if you’ve actually connected to the networks.

Finally, for me, the most interesting bit is to appreciate how the huge CAESARS network actually works, dropping everyone onto the same WiFi connection.

Some changes in how libpcap works you should know

Post Syndicated from Robert Graham original https://blog.erratasec.com/2018/07/some-changes-in-how-libpcap-works-you.html

I thought I’d document the solution to this problem I had.

The API libpcap is the standard cross-platform way of sniffing packets off the network. It works on Windows (winpcap), macOS, and all the Unixes. It’s better than simply opening a “raw socket” on Unix platforms because it takes advantage of higher performance capabilities of the system, including specialized sniffing hardware.


Traditionally, you’d open an adapter with pcap_open(), whose function parameters set options like snap length, promiscuous mode, and timeouts.

However, in newer versions of the API, what you should do instead is call pcap_create(), then set the options individually with calls to functions like pcap_set_timeout(), then once you are ready to start capturing, call pcap_activate().

I mention this in relation to “TPACKET” and pcap_set_immediate_mode().

Over the years, Linux has been adding a “ring buffer” mode to packet capture. This is a trick where a packet buffer is memory mapped between user-space and kernel-space. It allows a packet-sniffer to pull packets out of the driver without the overhead of extra copies or system calls that cause a user-kernel space transition. This has gone through several generations.

One of the latest generations causes the pcap_next() function to wait forever for a packet. This happens a lot on virtual machines where there is no background traffic on the network.

This looks like a bug, but maybe it isn’t. It’s unclear what the “timeout” parameter actually means. I’ve been hunting down the documentation, and curiously, it’s not really described anywhere. For an ancient, popular API, libpcap is almost entirely undocumented as to what it precisely does. I’ve tried reading some of the code, but I’m not sure I’ve come to any understanding.

In any case, the way to resolve this is to call the function pcap_set_immediate_mode(). This causes libpcap to back off and use an older version of TPACKET such that it’ll work as expected: even on silent networks, the pcap_next() function will time out and return.
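Here’s a minimal sketch of the newer open sequence with immediate mode set. It’s illustrative rather than production code: the adapter name and option values are placeholders, and pcap_set_immediate_mode() needs libpcap 1.5 or later.

#include <pcap.h>
#include <stdio.h>

int main(void)
{
    char errbuf[PCAP_ERRBUF_SIZE];

    /* "eth0" is a placeholder; use pcap_findalldevs() to list real adapters */
    pcap_t *p = pcap_create("eth0", errbuf);
    if (p == NULL) {
        fprintf(stderr, "pcap_create: %s\n", errbuf);
        return 1;
    }

    /* options that used to be parameters to pcap_open_live() */
    pcap_set_snaplen(p, 65536);
    pcap_set_promisc(p, 1);
    pcap_set_timeout(p, 1000);      /* read timeout, in milliseconds */

    /* back off to the older TPACKET behavior so reads honor the timeout */
    pcap_set_immediate_mode(p, 1);

    if (pcap_activate(p) < 0) {     /* negative = error, positive = warning */
        fprintf(stderr, "pcap_activate: %s\n", pcap_geterr(p));
        pcap_close(p);
        return 1;
    }

    struct pcap_pkthdr hdr;
    const unsigned char *pkt = pcap_next(p, &hdr);
    if (pkt != NULL)
        printf("captured %u bytes\n", hdr.caplen);
    else
        printf("timed out, no packet\n");

    pcap_close(p);
    return 0;
}

Compile with something like cc sniff.c -lpcap.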

I mention this because I fixed this bug in my code. When running inside a VM, my program would never exit. I changed from pcap_open_live() to the pcap_create()/pcap_activate() method instead, adding the setting of “immediate mode”, and now things work. Performance seems roughly the same as far as I can tell.

I’m still not certain what’s going on here, and there are even newer proposed zero-copy/ring-buffer modes being added to the Linux kernel, so this can change in the future. But in any case, I thought I’d document this in a blogpost in order to help out others who might be encountering the same problem.

Your IoT security concerns are stupid

Post Syndicated from Robert Graham original https://blog.erratasec.com/2018/07/your-iot-security-concerns-are-stupid.html

Lots of government people are focused on IoT security, such as this bill or this recent effort. They are usually wrong. It’s a typical cybersecurity policy effort which knows the answer without paying attention to the question. Government efforts focus on vulns and patching, ignoring more important issues.

Patching has little to do with IoT security. For one thing, consumers will not patch vulns, because unlike your phone/laptop computer which is all “in your face”, IoT devices, once installed, are quickly forgotten. For another thing, the average lifespan of a device on your network is at least twice the duration of support from the vendor making patches available.
Naive solutions to the manual patching problem, like forcing autoupdates from vendors, increase rather than decrease the danger. Manual patches that don’t get applied cause a small, but manageable constant hacking problem. Automatic patching causes rarer, but more catastrophic events when hackers hack the vendor and push out a bad patch. People are afraid of Mirai, a comparatively minor event that led to a quick cleansing of vulnerable devices from the Internet. They should be more afraid of notPetya, the most catastrophic event yet on the Internet that was launched by subverting an automated patch of accounting software.
Vulns aren’t even the problem. Mirai didn’t happen because of accidental bugs, but because of conscious design decisions. Security cameras have the unique requirements of being exposed to the Internet and needing a remote factory reset, which led to the worm. While notPetya did exploit a Microsoft vuln, its primary vector of spreading (after the subverted update) was via misconfigured Windows networking, not that vuln. In other words, while Mirai and notPetya are the most important events people cite supporting their vuln/patching policy, neither was really about vuln/patching.
Such technical analysis of events like Mirai and notPetya is ignored. Policymakers only cherrypick the superficial conclusions supporting their goals. They assiduously ignore in-depth analysis of such things because it inevitably fails to support their positions, or directly contradicts them.
IoT security is going to be solved regardless of what government does. All this policy talk is premised on things being static unless government takes action. This is wrong. Government is still waffling on its response to Mirai, but the market quickly adapted. Those off-brand, poorly engineered security cameras you buy for $19 from Amazon.com, shipped directly from Shenzhen, now look very different from the ones used in Mirai, having less Internet exposure. Major Internet sites like Twitter now use multiple DNS providers so that a DDoS attack on one won’t take down their services.
In addition, technology is fundamentally changing. Mirai attacked IPv4 addresses outside the firewall. The 100-billion IoT devices going on the network in the next decade will not work this way, cannot work this way, because there are only 4-billion IPv4 addresses. Instead, they’ll be behind NATs or accessed via IPv6, both of which prevent Mirai-style worms from functioning. Your fridge and toaster won’t connect via your home WiFi anyway, but via a 5G chip unrelated to your home.

Lastly, focusing on the vendor is a tired government cliche. Chronic internet security problems that go unsolved year after year, decade after decade, come from users failing, not vendors. Vendors quickly adapt, users don’t. The most important solutions to today’s IoT insecurities are to firewall and microsegment networks, something wholly within control of users, even home users. Yet government policy makers won’t consider the most important solutions, because their goal is less cybersecurity itself and more how cybersecurity can further their political interests. 

The best government policy for IoT is to do nothing, or at least to focus on more relevant solutions than patching vulns. The ideas proposed above will add costs to devices while making insignificant benefits to security. Yes, we will have IoT security issues in the future, but they will be new and interesting ones, requiring different solutions than the ones proposed.

Lessons from nPetya one year later

Post Syndicated from Robert Graham original https://blog.erratasec.com/2018/06/lessons-from-npetya-one-year-later.html

This is the one year anniversary of NotPetya. It was probably the most expensive single hacker attack in history (so far), with FedEx estimating it cost them $300 million. Shipping giant Maersk and drug giant Merck suffered losses on a similar scale. Many are discussing lessons we should learn from this, but they are the wrong lessons.

An example is this quote in a recent article:

“One year on from NotPetya, it seems lessons still haven’t been learned. A lack of regular patching of outdated systems because of the issues of downtime and disruption to organisations was the path through which both NotPetya and WannaCry spread, and this fundamental problem remains.” 

This is an attractive claim. It describes the problem in terms of people being “weak” and the solution as being “strong”. If only organizations were strong enough, willing to deal with downtime and disruption, then problems like this wouldn’t happen.

But this is wrong, at least in the case of NotPetya.

NotPetya’s spread was initiated through the Ukrainian company MeDoc, which provided tax accounting software. It had an auto-update process for keeping its software up-to-date. This was subverted in order to deliver the initial NotPetya infection. Patching had nothing to do with this. Other common security controls, like firewalls, were also bypassed.

Auto-updates and cloud-management of software and IoT devices is becoming the norm. This creates a danger for such “supply chain” attacks, where the supplier of the product gets compromised, spreading an infection to all their customers. The lesson organizations need to learn about this is how such infections can be contained. One way is to firewall such products away from the core network. Another solution is port-isolation/microsegmentation, that limits the spread after an initial infection.

Once NotPetya got into an organization, it spread laterally. The chief way it did this was through Mimikatz/PsExec, reusing Windows credentials. It stole whatever login information it could get from the infected machine and used it to try to log on to other Windows machines. If it got lucky getting domain administrator credentials, it then spread to the entire Windows domain. This was the primary method of spreading, not the unpatched ETERNALBLUE vulnerability. This is why it was so devastating to companies like Maersk: it wasn’t a matter of a few unpatched systems getting infected, it was a matter of losing entire domains, including the backup systems.

Such spreading through Windows credentials continues to plague organizations. A good example is the recent ransomware infection of the City of Atlanta that spread much the same way. The limits of the worm were the limits of domain trust relationships. For example, it didn’t infect the city airport because that Windows domain is separate from the city’s domains.

This is the most pressing lesson organizations need to learn, the one they are ignoring. They need to do more to prevent desktops from infecting each other, such as through port-isolation/microsegmentation. They need to control the spread of administrative credentials within the organization. A lot of organizations put the same local admin account on every workstation which makes the spread of NotPetya style worms trivial. They need to reevaluate trust relationships between domains, so that the admin of one can’t infect the others.

These solutions are difficult, which is why news articles don’t mention them. You don’t have to know anything about security to proclaim “the problem is lack of patches”. It’s moral authority, chastising the weak, rather than a prescription of what to do. Solving supply chain hacks and Windows credential sharing, though, is hard. I don’t know any universal solution to this — I’d have to thoroughly analyze your network and business in order to make any useful recommendation. Such complexity means it’s not going to appear in news stories — they’ll stick with the simple soundbites instead.

By the way, this doesn’t mean ETERNALBLUE was inconsequential in NotPetya’s spread. Imagine an organization that is otherwise perfectly patched, except for one outdated, unpatched test system — which just so happened to have an admin logged in. The worm hops from the accounting desktop (with the auto-update) to the test system via ETERNALBLUE, then from the test system to the domain controller via the admin credentials, and then to the rest of the domain. What this story demonstrates is not the importance of keeping 100% up-to-date on patches, because that’s impossible: there will always be a system lurking somewhere unpatched. Instead, the lesson is the importance of not leaving admin credentials lying around.

So the lesson you need to learn from NotPetya is not to keep systems patched, but instead to deal with hostile auto-updates arriving deep within your network, and most importantly, to stop the spread of malware through trust relationships and loose admin credentials lying around.

SMB version detection in masscan

Post Syndicated from Robert Graham original https://blog.erratasec.com/2018/06/smb-version-detection-in-masscan.html

My Internet-scale port scanner, masscan, supports “banner checking”, grabbing basic information from a service after it connects to a port. It’s less comprehensive than nmap‘s version and scripting checks, but it’s better than just recording which ports are open.

I recently extended this banner checking to include SMB. It’s a complicated protocol, so it requires a lot more work than just grabbing text banners like you see on FTP. Implementing this, I’ve found that nmap and smbclient often fail to get version information. They seem focused on getting the information from a standard location in SMBv1 packets, which gives a text string indicating the version. There’s another place you can get it, the NTLMSSP pluggable authentication chunks, which give version numbers in the form of major version, minor version, and build number. Sometimes the SMBv1 information is missing, either because newer Windows versions disable SMBv1 by default (supporting only SMBv2) or because they’ve disabled null/anonymous sessions. They still give NTLMSSP version info, though.
For example, running masscan in my local bar, I get the following result:
Banner on port 445/tcp on 10.1.10.200: [smb] SMBv1  time=2018-06-24 22:18:13 TZ=+240  domain=SHIPBARBO version=6.1.7601 ntlm-ver=15 domain=SHIPBARBO name=SHIPBARBO domain-dns=SHIPBARBO name-dns=SHIPBARBO os=Windows Embedded Standard 7601 Service Pack 1 ver=Windows Embedded Standard 6.1
The top version string comes from NTLMSSP, with 6.1.7601, which means Windows 6.1 (Win7) build number 7601. The bottom version string comes from the SMBv1 packets, which consists of strings.
The nmap and smbclient programs will get the SMBv1 part, but not the NTLMSSP part.
This seems to be a problem with Rapid7’s “National Exposure Index” which tracks SMB exposure (amongst other things). It’s missing about 300,000 machines that report NT_STATUS_ACCESS_DENIED from smbclient rather than the numeric version info from NTLMSSP authentication.
The smbclient output does have the information internally. For example, you could run the following command with the debug level set to 10 to grab it:
$ smbclient -U "" -N -L 10.1.10.95 -d10
You’ll get pages of debug output, with the NTLMSSP version details buried inside. It appears to get the Windows 6.1 numbers, though for some reason it’s missing the build number.
To run masscan to grab this, run:
# masscan --banners -p445 10.1.10.95 --hello smbv1
In the above example, I also used the “--hello smbv1” parameter, to grab both the SMBv1 and NTLMSSP version info. Otherwise, it’ll default to SMBv2 if available, and only return:
Discovered open port 445/tcp on 10.1.10.95
Banner on port 445/tcp on 10.1.10.95: [smb] SMBv2  guid=6db701a0-a419-4be9-9084-6052b19a2e56 time=2018-06-24 22:37:42  domain=SHIPSERVER version=6.1.7601 ntlm-ver=15 domain=SHIPSERVER name=SHIPSERVER domain-dns=SHIPSERVER name-dns=SHIPSERVER
Note that if you do a port 445 scan of the entire Internet, you’ll get about 3,000,000 responses. You probably don’t want to script running 3 million instances of masscan, but instead run it once for all those addresses. To do this, run:
# masscan --banners -p445 -iL ips.txt --rate 100000
This will load the IP addresses from the file “ips.txt“. The format is one address, CIDR address, or range per line, with lines starting with # ignored as comments. It’ll take about 10 seconds to read in a file containing 3 million addresses, so don’t be worried if it seems to hang for a bit. It doesn’t matter if you sort the file or not: masscan sorts the file itself internally, then randomizes the order when transmitting packets.
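For example, a small “ips.txt” (with made-up addresses) might look like this:

# targets for the SMB survey
192.0.2.15
198.51.100.0/24
203.0.113.10-203.0.113.50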
By default, masscan transmits at a rate of 100 packets per second, in order to avoid accidentally melting networks. You’ll probably want a faster rate, such as 100,000 packets per second. Masscan will give a status line estimating completion time, but in this case, it’ll be wildly inaccurate. The estimate is based upon getting no response from servers, which is the norm when doing massive scans. But in this case, all the servers will respond, which will cause masscan to send at least an ACK packet followed by at least one data packet. This will usually continue with two more data packets and some FINs and FIN-ACKs. All these extra packets fit within the “rate” specified, which means the effective rate at establishing new connections will be a lot lower than the estimate.
If you want to just scan the entire Internet for SMB on port 445, the command would be:
# masscan --banners -p445 0.0.0.0/0 --rate 100000

I love scanning the /0 subnet.
The version information can be in different locations in the output line, depending on the target. To extract it, you can use grep:
grep -Eo "version=[0-9\.]*" scan.txt
Or, to grab only the numbers portion:
grep -Eo "version=[0-9\.]*" scan.txt | cut -d= -f2
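And if you want a quick census rather than individual lines, you can tally the versions with standard Unix tools:

grep -Eo "version=[0-9\.]*" scan.txt | cut -d= -f2 | sort | uniq -c | sort -rn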

To interpret the version numbers, this seems to be a good resource. I’m repeating the details here in case the link rots:
Operating System    Version Details           Version Number
Windows 10          Windows 10 (1803)         10.0.17134
                    Windows 10 (1709)         10.0.16299
                    Windows 10 (1703)         10.0.15063
                    Windows 10 (1607)         10.0.14393
                    Windows 10 (1511)         10.0.10586
                    Windows 10                10.0.10240
Windows 8           Windows 8.1 (Update 1)    6.3.9600
                    Windows 8.1               6.3.9200
                    Windows 8                 6.2.9200
Windows 7           Windows 7 SP1             6.1.7601
                    Windows 7                 6.1.7600
Windows Vista       Windows Vista SP2         6.0.6002
                    Windows Vista SP1         6.0.6001
                    Windows Vista             6.0.6000
The various Windows server versions overlap these as well.
You can get the latest version of masscan from GitHub. It doesn’t have any dependencies to build it other than a compiler (gcc or clang). It does need libpcap installed to run. It also needs root privileges to run, like any other libpcap application, or you can setuid it. Lastly, since masscan has its own TCP/IP stack, you need to either use --source-ip [ip] to use a different IP address on your local subnet, or use --source-port [port] to use a source port you’ve otherwise firewalled to prevent the local stack from using it. Otherwise, the local stack will generate RST packets, preventing a connection from being established to grab the banner.
$ sudo apt-get install build-essential git
$ git clone https://github.com/robertdavidgraham/masscan
$ cd masscan
$ make
$ sudo iptables -A INPUT -p tcp --dport 60000 -j DROP
$ sudo bin/masscan --source-port 60000 -p445 --banners ….
By default, masscan waits 10 seconds for any responses to come back after a scan is complete. I add the parameter “--wait 40” to extend that to 40 seconds. Connections longer than 30 seconds are killed anyway due to timeout, so it’s not really worth it to wait much longer than 30 seconds.
There’s a lot of junk out there on port 445. Among the interesting stuff is that there are a lot of honeypots out there looking for scanners and worms. When you do a scan on this port, you’ll get a lot of scans coming back at you for a couple days from such honeypots. One of the healthy things about using a spoofed source IP address is that you’ll avoid the noise caused by these scans. Since I always spoof the source address in my scans (--source-ip [ip]), I’ll also set --wait forever as a parameter, to keep masscan running even after it’s transmitted all its packets. This keeps it responding to ARP requests from the local router, so that I can also run tcpdump to capture all the noise that happens after a scan, for a couple days. Otherwise, if a stack with that IP address doesn’t exist, the router will drop the packets instead of forwarding them, so you can’t capture them with tcpdump.
So the full command line might be:
# masscan --banners -p445 0.0.0.0/0 --rate 100000 --source-port 60000 --wait 40 > scan.txt
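The companion capture of the post-scan noise might then be something like the following, where both the interface eth0 and the spoofed address 198.51.100.99 are made-up values for illustration:

# tcpdump -n -i eth0 -w post-scan.pcap host 198.51.100.99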

Conclusion

I’ve added SMB version checking natively to masscan. While simple in theory, this actually gets a bit complex, as described above. SMB is a nasty protocol, so a custom implementation like masscan’s will get different results, for various reasons, than you might get with the Samba tool smbclient or with nmap.

Notes on "The President is Missing"

Post Syndicated from Robert Graham original https://blog.erratasec.com/2018/06/notes-on-president-is-missing.html

Former president Bill Clinton has contributed to a cyberthriller “The President is Missing”, the plot of which is that the president stops a cybervirus from destroying the country. This is scary, because people in Washington D.C. are going to read this book, believe the hacking portrayed has some basis in reality, and base policy on it. This “news analysis” piece in the New York Times is a good example, coming up with policy recommendations based on fictional cliches rather than a reality of what hackers do.

The cybervirus in the book is an all-powerful thing, able to infect everything everywhere without being detected. This is fantasy, no more real than magic and faeries. Sure, magical faeries are a popular basis for fiction, but in this case, it’s lazy fantasy, a cliche. In fiction, viruses are rarely portrayed as anything other than all-powerful.

But in the real world, viruses have important limitations. If you knew anything about computer viruses, rather than being impressed by what they can do, you’d be disappointed by what they can’t.

Go look at your home router. See the blinky lights. The light flashes every time a packet of data goes across the network. Packets can’t be sent without a light blinking. Likewise, viruses cannot spread themselves over a network, or communicate with each other, without somebody noticing — especially a virus that’s supposedly infected a billion devices as in the book.

The same is true of data on the disk. All the data is accounted for. It’s rather easy for professionals to see when data (consisting of the virus) has been added. The difficulty of anti-virus software is not in detecting when something new has been added to a system, but automatically determining whether it’s benign or malicious. When viruses are able to evade anti-virus detection, it’s because they’ve been classified as non-hostile, not because they are invisible.

Such evasion only works when hackers have a focused target. As soon as a virus spreads too far, anti-virus companies will get a sample, classify as malicious, and spread the “signatures” out to the world. That’s what happened with Stuxnet, a focused attack on Iran’s nuclear enrichment program that eventually spread too far and got detected. It’s implausible that anything can spread to a billion systems without anti-virus companies getting a sample and correctly classifying it.

In the book, the president creates a team of the 30 brightest cybersecurity minds the country has, from government, the private sector, and even convicted hackers on parole from jail — each more brilliant than the last. This is yet another lazy cliche about genius hackers.

The cliche comes from the fact that it’s rather easy to impress muggles with magic tricks. As soon as somebody shows an ability to do something you don’t know how to do, they become a cyber genius in your mind. The reality is that cybersecurity/hacking is no different than any other profession, no more dominated by “genius” than bridge engineering or heart surgery. It’s a skill that takes both years of study as well as years of experience.

So whenever the president, ignorant of computers, puts together a team of 30 cyber geniuses, they aren’t going to be people of competence. They are going to be people good at promoting themselves, taking credit for other people’s work, or political engineering. They won’t be technical experts, they’ll be people like Rudi Giuliani or Richard Clarke, who have been tapped by presidents as cyber experts despite knowing less than nothing about computers.

A funny example of this is Marcus Hutchins. He’s a virus researcher of typical skill and experience, but was catapulted to fame by finding the “kill switch” in the famous Wannacry virus. In truth, he just got lucky, being just the first to find the kill switch that would’ve soon been found by another researcher (it was pretty obvious). But the press set him up as one of the top 5 experts in the world. That’s silly, because there is no such thing, like there’s no “top 5 neurosurgeons” or “top 5 bridge engineers”. Hutchins is certainly skilled enough to merit a solid 6 figure salary, but such “top cyber geniuses” don’t exist.

I mention Hutchins because months after the famed Wannacry incident, he was arrested in conjunction with an unrelated Russian banking virus. Assuming everything in his indictment is true, it still makes him only a minor figure with a few youthful indiscretions. It’s likely this confusion between “fame” and “cyber genius” catapulted him into being a major person of interest in their investigations.

The book discusses the recent major cyberattacks in the news, like Mirai, Wannacry, and nPetya, but its accounts are distorted misunderstandings of what happened. For example, it explains DDoS:

A DDoS attack is a distributed denial-of-service attack. A flood attack, essentially, on the network of servers that convert the addresses we type into our browsers into IP numbers that the internet routers use.

This is only partially right, but mainly wrong. DDoS is any sort of flood from multiple sources distributed around the Internet, against any target. It’s only the Mirai attack, the most recent famous DDoS, that attacked the name servers that convert addresses to numbers.

The same sort of misconceptions are rife in Washington. Mirai, Wannacry, and nPetya spawned a slew of policy recommendations that get the technical details wrong. Politicians reading this Clinton thriller will just get more wrong.

In terms of fiction, the lazy cliches and superficial understanding of cybersecurity will be hard for people of intelligence to stomach. However, the danger I want to point out is that people in Washington D.C., politicians who make policy, will read this book. Their understanding of how cyber works will come from such books. And it will be wrong.

The First Lady’s bad cyber advice

Post Syndicated from Robert Graham original https://blog.erratasec.com/2018/05/the-first-ladys-bad-cyber-advice.html

First Lady Melania Trump announced a guide to help children go online safely. It has problems.

Melania’s guide is full of outdated, impractical, inappropriate, and redundant information. But that’s allowed, because it relies upon moral authority: to be moral is to be secure, to be moral is to do what the government tells you. It matters less whether the advice is technically accurate, and more that you are supposed to do what authority tells you.

That’s a problem, not just with her guide, but most cybersecurity advice in general. Our community gives out advice without putting much thought into it, because it doesn’t need thought. You should do what we tell you, because being secure is your moral duty.

This post picks apart Melania’s document. The purpose isn’t to fine-tune her guide and make it better. Instead, the purpose is to demonstrate the idea of resting on moral authority instead of technical authority.

Strong Passwords

“Strong passwords” is the quintessential cybersecurity cliché that insecurity is due to some “weakness” (laziness, ignorance, greed, etc.) and the remedy is to be “strong”.

The first flaw is that this advice is outdated. Ten years ago, important websites would frequently get hacked and have poor password protection (like MD5 hashing). Back then, strength mattered, to stop hackers from brute force guessing the hacked passwords. These days, important websites get hacked less often and protect the passwords better (like salted bcrypt). Moreover, the advice is now often redundant: websites, at least the important ones, enforce a certain level of password complexity, so that even without advice, you’ll be forced to do the right thing most of the time.

This advice is outdated for a second reason: hackers have gotten a lot better at cracking passwords. Ten years ago, they focused on brute force, trying all possible combinations. Partly because passwords are now protected better, dramatically reducing the effectiveness of the brute force approach, hackers have had to focus on other techniques, such as the mutated dictionary and Markov chain attacks. Consequently, even though “Password123!” seems to meet the above criteria of a strong password, it’ll fall quickly to a mutated dictionary attack. The simple recommendation of “strong passwords” is no longer sufficient.
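To make the mutated-dictionary point concrete: cracking tools take a wordlist and apply mutation rules (capitalize, append digits, substitute characters), which is exactly what catches passwords like “Password123!”. A sketch using hashcat, assuming a file of raw MD5 hashes and the stock “best64” rule set that ships with the tool:

$ hashcat -a 0 -m 0 hashes.txt rockyou.txt -r rules/best64.rule

Here “-a 0” selects a straight wordlist attack and “-m 0” selects raw MD5; the rules file generates the mutations of each dictionary word.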

The last part of the above advice is to avoid password reuse. This is good advice. However, this becomes impractical advice, especially when the user is trying to create “strong” complex passwords as described above. There’s no way users/children can remember that many passwords. So they aren’t going to follow that advice.

To make the advice work, you need to help users with this problem. To begin with, you need to tell them to write down all their passwords. This is something many people avoid, because they’ve been told to be “strong” and writing down passwords seems “weak”. Indeed it is, if you write them down in an office environment and stick them on a note on the monitor or underneath the keyboard. But they are safe and strong if it’s on paper stored in your home safe, or even in a home office drawer. I write my passwords on the margins in a book on my bookshelf — even if you know that, it’ll take you a long time to figure out which book when invading my home.

The other option to help avoid password reuse is to use a password manager. I don’t recommend them to my own parents because that’d be just one more thing I’d have to help them with, but they are fairly easy to use. It means you need only one password for the password manager, which then manages random/complex passwords for all your web accounts.

So what we have here is outdated and redundant advice that overshadows good advice that is nonetheless incomplete and impractical. The advice is based on the moral authority of telling users to be “strong” rather than the practical advice that would help them.

No personal info unless website is secure

The guide teaches kids to recognize the difference between a secure/trustworthy and insecure website. This is laughably wrong.

HTTPS means the connection to the website is secure, not that the website is secure. These are different things. It means hackers are unlikely to be able to eavesdrop on the traffic as it’s transmitted to the website. However, the website itself may be insecure (easily hacked), or worse, it may be a fraudulent website created by hackers to appear similar to a legitimate website.

What HTTPS actually secures is a common misconception, one perpetuated by guides like this. This misconception is the source of the criticism aimed at LetsEncrypt, an initiative to give away free website certificates so that everyone can get HTTPS. Hackers now routinely use LetsEncrypt to create their fraudulent websites to host their viruses. Since people have been taught forever that HTTPS means a website is “secure”, people are trusting these hacker websites.

But LetsEncrypt is a good thing, all connections should be secure. What’s bad is not LetsEncrypt itself, but guides like this from the government that have for years been teaching people the wrong thing, that HTTPS means a website is secure.

Backups

Of course, no guide would be complete without telling people to backup their stuff.

This is especially important with the growing ransomware threat. Ransomware is a type of virus/malware that encrypts your files then charges you money to get the key to decrypt the files. Half the time this just destroys the files.

But this again is moral authority, telling people what to do, instead of educating them how to do it. Most will ignore this advice because they don’t know how to effectively backup their stuff.

For most users, it’s easy to go to the store and buy a 256-gigabyte USB drive for $40 (as of May 2018), then use the “Time Machine” feature in macOS, or on Windows the “File History” feature or the “Backup and Restore” feature. These can be configured to do the backup automatically on a regular basis, so that you don’t have to worry about it.
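On macOS this can even be set up from the command line, assuming a hypothetical external drive mounted at /Volumes/BackupDrive:

$ sudo tmutil setdestination /Volumes/BackupDrive
$ sudo tmutil enable

The first command points Time Machine at the drive; the second turns on automatic backups.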

But such “local” backups are still problematic. If the drive is left plugged into the machine, ransomware can attack the backup. If there’s a fire, any backup in your home will be destroyed along with the computer.

I recommend cloud backup instead. There are so many good providers, like DropBox, Backblaze, Microsoft, Apple’s iCloud, and so on. These are especially critical for phones: if your iPhone is destroyed or stolen, you can simply walk into an Apple store and buy a new one, with everything replaced as it was from their iCloud.

But all of this is missing the key problem: your photos. You carry a camera with you all the time now and take a lot of high-resolution photos. This quickly exceeds the capacity of most of the free backup solutions. You can configure these, such as your phone’s iCloud backup, to exclude photos, but that means you are prone to losing your photos/memories. For example, DropBox is great for the free 5-gigabyte service, but if I want to preserve photos on it, I have to pay for their more expensive service.

One of the key messages kids should learn about photos is that they will likely lose almost all of the photos they’ve taken within 5 years. The exceptions will be the few photos they’ve posted to social media, which sorta serves as a cloud backup for them. If they want to preserve the rest of these memories, kids need to get serious about finding backup solutions. I’m not sure of the best solution, but I buy big USB flash drives and send them to my niece, asking her to copy all her photos to them, so that at least I can put that in a safe.

One surprisingly good solution is Microsoft Office 365. For $99 a year, you get a copy of their Office software (which I use) but it also comes with a large 1-terabyte of cloud storage, which is likely big enough for your photos. Apple charges around the same amount for 1-terabyte of iCloud, though it doesn’t come with a free license for Microsoft Office :-).

WiFi encryption

Your home WiFi should be encrypted, of course.

I have to point out the language, though. Turning on WPA2 WiFi encryption does not “secure your network”. Instead, it just secures the radio signals from being eavesdropped. Your network may have other vulnerabilities, where encryption won’t help, such as when your router has remote administration turned on with a default or backdoor password enabled.

I’m being a bit pedantic here, but it’s not my argument. It’s the FTC’s argument when they sued vendors like D-Link for making exactly the same sort of recommendation. The FTC claimed it was deceptive business practice because recommending users do things like this still didn’t mean the device was “secure”. Since the FTC is partly responsible for writing Melania’s document, I find this a bit ironic.

In any event, WPA2 Personal has problems where it can be hacked, such as when WPS is enabled, or when evil-twin access points broadcast stronger (or more directional) signals. It’s thus insufficient security. To be fully secure against possible WiFi eavesdropping you need to enable WPA2 Enterprise, which isn’t something most users can do.

Also, WPA2 is largely redundant. If you wardrive your local neighborhood you’ll find that almost everyone has WPA enabled already anyway. Guides like this probably don’t need to advise what everyone’s already doing, especially when it’s still incomplete.

Change your router password

Yes, leaving the default password on your router is a problem, as shown by recent Mirai-style attacks, such as the very recent ones where Russia has infected 500,000 routers in its cyberwar against Ukraine. But those were only a problem because the routers also had remote administration enabled. It’s remote administration you need to make sure is disabled on your router, regardless of whether you change the default password (as there are other vulnerabilities besides passwords). If remote administration is disabled, then it’s very rare that people will attack your router with the default password.

Thus, they ignore the important thing (remote administration) and instead focus on the less important thing (change default password).

In addition, this advice again makes the impractical recommendation of choosing a complex (strong) password. Users who do this usually forget it by the time they next need it. Practical advice is to recommend users write down the password they choose, and put it either someplace they won’t forget (like with the rest of their passwords), or on a sticky note under the router.

Update router firmware

Like any device on the network, you should keep it up-to-date with the latest patches. But you aren’t going to, because it’s not practical. While your laptop/desktop and phone nag you about updates, your router won’t. Whereas phones/computers update once a month, your router vendor will update the firmware once a year — and after a few years, stop releasing any more updates at all.

Routers are just one of many IoT devices we are going to have to come to terms with, keeping them patched. I don’t know the right answer. I check my parents’ stuff every Thanksgiving, so maybe that’s a good strategy: patch your stuff at the end of every year. Maybe some cultural norms will develop, but simply telling people to be strong about their IoT firmware patches isn’t going to be practical in the near term.

Don’t click on stuff

This is probably the most common cybersecurity advice given by infosec professionals. It is wrong.

Emails/messages are designed for you to click on things. You regularly get emails/messages from legitimate sources that demand you click on things. It’s so common from legitimate sources that there’s no practical way for users to distinguish between them and bad sources. As that Google Docs bug showed, even experts can’t always tell the difference.

I mean, it’s true that phishing attacks coming through emails/messages try to trick you into clicking on things, and you should be suspicious of such things. However, it doesn’t follow from this that not clicking on things is a practical strategy. It’s like diet advice recommending you stop eating food altogether.

Sex predators, oh my!

Of course, it’s kids going online, so of course you are going to have warnings about sexual predators:

But online predators are rare. The predator threat to children is overwhelmingly from relatives and acquaintances, a much smaller threat from strangers, and a vanishingly tiny threat from online predators. Recommendations like this stem from our fears of the unknown technology rather than a rational measurement of the threat.

Sexting, oh my!

So here is one piece of advice that I can agree with: don’t sext:

But the reason this is bad is not because it’s immoral or wrong, but because adults have gone crazy and made it illegal for children to take nude photographs of themselves. As this article points out, your child is more likely to get in trouble and get placed on the sex offender registry (for life) than to get molested by a person on that registry.

Thus, we need to warn kids not from some immoral activity, but from adults who’ve gotten freaked out about it. Yes, sending pictures to your friends/love-interest will also often get you in trouble as those images will frequently get passed around school, but such temporary embarrassments will pass. Getting put on a sex offender registry harms you for life.

Texting while driving

Finally, I want to point out this error:

The evidence is to the contrary: texting while driving is not actually dangerous, it’s just assumed to be dangerous. Texting rarely distracts drivers from what’s happening on the road. It instead replaces some other inattention, such as daydreaming, fiddling with the radio, or checking yourself in the mirror. Risk compensation happens: when people are texting while driving, they are also slowing down and leaving more space between themselves and the car in front of them.

Studies have shown this. For example, one study measured accident rates at 6:59pm vs 7:01pm and found no difference. That’s when “free evening texting” came into effect, so we should’ve seen a bump in the number of accidents. They even tried to narrow the effect down, such as people texting while changing cell towers (proving they were in motion).

Yes, texting is illegal, but that’s because people are fed up with the jerk in front of them not noticing the light is green. It’s not illegal because it’s particularly dangerous, or because it has a measurable impact on accident rates.

Conclusion

The point of this post is not to refine the advice and make it better. Instead, I’ve attempted to demonstrate how such advice rests on moral authority: it’s the government telling you so, and cybersecurity and safety are treated as higher moral duties. Much of it is outdated, impractical, inappropriate, and redundant.
We need to move away from this sort of advice. Instead of moral authority, we need technical authority. We need to focus on the threats that people actually face, and help them be secure, rather than commanding them what to do and shaming them for their insecurity. It’s like Strunk and White’s “Elements of Style”: they don’t take the moral-authority approach of telling people how to write, but instead try to help people write well.

The devil wears Pravda

Post Syndicated from Robert Graham original https://blog.erratasec.com/2018/05/the-devil-wears-pravda.html

Classic Bond villain, Elon Musk, has a new plan to create a website dedicated to measuring the credibility and adherence to “core truth” of journalists. He is, without any sense of irony, going to call this “Pravda”. This is not simply wrong but evil.

Musk has a point. Journalists do suck, and many suck consistently. I see this in my own industry, cybersecurity, and I frequently criticize them for their suckage.

But what he’s doing here is not correcting them when they make mistakes (or what Musk sees as mistakes), but questioning their legitimacy. This legitimacy isn’t measured by whether they follow established journalism ethics, but whether their “core truths” agree with Musk’s “core truths”.

An example of the problem is how the press fixates on Tesla car crashes due to its “autopilot” feature. Pretty much every autopilot crash makes national headlines, while the press ignores the other 40,000 car crashes that happen in the United States each year. Musk spies on Tesla drivers (hello, classic Bond villain everyone) so he can see the dip in autopilot usage every time such a news story breaks. He’s got good reason to be concerned about this.

He argues that autopilot is safer than humans driving, and he’s got the statistics and government studies to back this up. Therefore, the press’s fixation on Tesla crashes is illegitimate “fake news”, titillating the audience with distorted truth.

But here’s the thing: that’s still only Musk’s version of the truth. Yes, on a mile-per-mile basis, autopilot is safer, but there’s nuance here. Autopilot is used primarily on freeways, which already have a low mile-per-mile accident rate. People choose autopilot only when conditions are incredibly safe and drivers are unlikely to have an accident anyway. Musk is therefore being intentionally deceptive comparing apples to oranges. Autopilot may still be safer, it’s just that the numbers Musk uses don’t demonstrate this.

And then there’s the “core truth” of calling it “autopilot” to begin with, because it isn’t one. The public is overrating the capabilities of the feature. It’s little different than the “lane keeping” and “adaptive cruise control” you can now find in other cars. In many ways, the technology is behind: my Tesla doesn’t beep at me when a pedestrian walks behind my car while backing up, but virtually every new car on the market does.

Yes, the press unduly covers Tesla autopilot crashes, but Musk has only himself to blame by unduly exaggerating his car’s capabilities by calling it “autopilot”.

The “core truth” is thus rather difficult to obtain. What the press satisfies itself with instead is smaller truths, the things it can document. The facts in such cases are that the accident happened, and reporters try to get Tesla or Musk to comment on it.

What you can criticize a journalist for is therefore not “core truth” but whether they did journalism correctly. When such stories criticize “autopilot”, but don’t do their diligence in getting Tesla’s side of the story, then that’s a violation of journalistic practice. When I criticize journalists for their poor handling of stories in my industry, I try to focus on which journalistic principles they get wrong. For example, the NYTimes reporters do a lot of stories quoting anonymous government sources in clear violation of journalistic principles.

If “credibility” is the concern, then it’s the classic Bond villain here that’s the problem: Musk himself. His track record on business statements is abysmal. For example, when he announced the Model 3 he claimed production targets that every Wall Street analyst claimed were absurd. He didn’t make those targets, he didn’t come close. Model 3 production is still lagging behind Musk’s twice adjusted targets.

https://www.bloomberg.com/graphics/2018-tesla-tracker/

So who has a credibility gap here, the press, or Musk himself?

Not only is Musk’s credibility problem ironic, so is the name he chose: “Pravda”, the Russian word for truth, which was the name of the Soviet Union Communist Party’s official newspaper. This is so absurd it has to be a joke, yet Musk claims to be serious about all this.

Yes, the press has a lot of problems, and if Musk were some journalism professor concerned about journalists meeting the objective standards of their industry (e.g. abusing anonymous sources), then this would be a fine thing. But it’s not. It’s Musk who is upset the press’s version of “core truth” does not agree with his version — a version that he’s proven time and time again differs from “real truth”.

Just in case Musk is serious, I’ve already registered “www.antipravda.com” to start measuring the credibility of statements by billionaire playboy CEOs. Let’s see who blinks first.


I stole the title, with permission, from this tweet:

C is too low level

Post Syndicated from Robert Graham original https://blog.erratasec.com/2018/05/c-is-too-low-level.html

I’m in danger of contradicting myself, after previously pointing out that x86 machine code is a high-level language, but this article claiming C is not a low-level language is bunk. C certainly has some problems, but it’s still the closest language to assembly. This is obvious from the fact that it’s still the fastest compiled language. What we see is a typical academic out of touch with the real world.

The author makes the (wrong) observation that we’ve been stuck emulating the PDP-11 for the past 40 years. C was written for the PDP-11, and since then CPUs have been designed to make C run faster. The author imagines a different world, such as one where CPU designers instead target something like LISP or Erlang as their preferred language. This misunderstands the state of the market. CPUs do indeed support lots of different abstractions, and C has evolved to accommodate this.


The author criticizes things like “out-of-order” execution, which has led to the Spectre side-channel vulnerabilities. Out-of-order execution is necessary to make C run faster. The author claims instead that those resources should be spent on having more, slower CPUs with more threads. This sacrifices single-threaded performance in exchange for a lot more threads executing in parallel. The author cites Sparc Tx CPUs as his ideal processor.

But here’s the thing, the Sparc Tx was a failure. To be fair, it’s mostly a failure because most of the time, people wanted to run old C code instead of new Erlang code. But it was still a failure at running Erlang.

Time after time, engineers keep finding that “out-of-order”, single-threaded performance is still the winner. A good example is ARM processors for both mobile phones and servers. All the theory points to in-order CPUs as being better, but all the products are out-of-order, because this theory is wrong. The custom ARM cores from Apple and Qualcomm used in most high-end phones are so deeply out-of-order they give Intel CPUs competition. The same is true on the server front with the latest Qualcomm Centriq and Cavium ThunderX2 processors, deeply out of order supporting more than 100 instructions in flight.

The Cavium is especially telling. Its ThunderX CPU had 48 simple cores which was replaced with the ThunderX2 having 32 complex, deeply out-of-order cores. The performance increase was massive, even on multithread-friendly workloads. Every competitor to Intel’s dominance in the server space has learned the lesson from Sparc Tx: many wimpy cores is a failure, you need fewer beefy cores. Yes, they don’t need to be as beefy as Intel’s processors, but they need to be close.

Even Intel’s “Xeon Phi” custom chip learned this lesson. This is their GPU-like chip, running 60 cores with 512-bit wide “vector” (sic) instructions, designed for supercomputer applications. Its first version was purely in-order. Its current version is slightly out-of-order. It supports four threads and focuses on basic number crunching, so in-order cores seems to be the right approach, but Intel found in this case that out-of-order processing still provided a benefit. Practice is different than theory.

As an academic, the author of the above article focuses on abstractions. The criticism of C is that it has the wrong abstractions which are hard to optimize, and that if we instead expressed things in the right abstractions, it would be easier to optimize.

This is an intellectually compelling argument, but so far bunk.

The reason is that while the theoretical base language has issues, everyone programs using extensions to the language, like “intrinsics” (C ‘functions’ that map to assembly instructions). Programmers write libraries using these intrinsics, which then the rest of the normal programmers use. In other words, if your criticism is that C is not itself low level enough, it still provides the best access to low level capabilities.

Given that C can access new functionality in CPUs, CPU designers add new paradigms, from SIMD to transaction processing. In other words, while in the 1980s CPUs were designed to optimize C (stacks, scaled pointers), these days CPUs are designed to optimize tasks regardless of language.

The author of that article criticizes the memory/cache hierarchy, claiming it has problems. Yes, it has problems, but only compared to how well it normally works. The author praises the many simple cores/threads idea as hiding memory latency with little caching, but misses the point that caches also dramatically increase memory bandwidth. Intel processors are optimized to read a whopping 256 bits every clock cycle from L1 cache. Main memory bandwidth is orders of magnitude slower.

The author goes on to criticize cache coherency as a problem. C uses it, but other languages like Erlang don’t need it. But that’s largely due to the problems each language solves. Erlang solves the problem where a large number of threads work on largely independent tasks, needing to send only small messages to each other across threads. The problem C solves is when you need many threads working on a huge, common set of data.

For example, consider the “intrusion prevention system”. Any thread can process any incoming packet that corresponds to any region of memory. There’s no practical way of solving this problem without a huge coherent cache. It doesn’t matter which language or abstractions you use, it’s the fundamental constraint of the problem being solved. RDMA is an important concept that’s moved from supercomputer applications to the data center, such as with memcached. Again, we have the problem of huge quantities (terabytes worth) shared among threads rather than small quantities (kilobytes).

The fundamental issue the author of the paper is ignoring is decreasing marginal returns. Moore’s Law has gifted us more transistors than we can usefully use. We can’t apply those additional transistors to just one thing, because the useful returns we get diminish.

For example, Intel CPUs have two hardware threads per core. That’s because there are good returns by adding a single additional thread. However, the usefulness of adding a third or fourth thread decreases. That’s why many CPUs have only two threads, or sometimes four threads, but no CPU has 16 threads per core.

You can apply the same discussion to any aspect of the CPU, from register count, to SIMD width, to cache size, to out-of-order depth, and so on. Rather than focusing on one of these things and increasing it to the extreme, CPU designers make each a bit larger every process tick that adds more transistors to the chip.

The same applies to cores. It’s why the “more simpler cores” strategy fails, because more cores have their own decreasing marginal returns. Instead of adding cores tied to limited memory bandwidth, it’s better to add more cache. Such cache already increases the size of the cores, so at some point it’s more effective to add a few out-of-order features to each core rather than more cores. And so on.

The question isn’t whether we can change this paradigm and radically redesign CPUs to match some academic’s view of the perfect abstraction. Instead, the goal is to find new uses for those additional transistors. For example, “message passing” is a useful abstraction in languages like Go and Erlang that’s often more useful than sharing memory. It’s implemented with shared memory and atomic instructions, but I can’t help thinking it could be done better with direct hardware support.

Of course, as soon as they do that, it’ll become an intrinsic in C, then added to languages like Go and Erlang.

Summary

Academics live in an ideal world of abstractions; the rest of us live in practical reality. The reality is that the vast majority of programmers work with the C family of languages (JavaScript, Go, etc.), whereas academics love the epiphanies they learned using other languages, especially functional languages. CPUs are only superficially designed to run C and maintain “PDP-11 compatibility”. Instead, they keep adding features to support other abstractions, abstractions that become available to C. They are driven by decreasing marginal returns: they would love to add new abstractions to the hardware, because it’s a cheap way to make use of additional transistors. Academics are wrong in believing that the entire system needs to be redesigned from scratch. Instead, they just need to come up with new abstractions CPU designers can add.

masscan, macOS, and firewall

Post Syndicated from Robert Graham original https://blog.erratasec.com/2018/05/masscan-macos-and-firewall.html

One of the more useful features of masscan is the “--banners” check, which connects to the TCP port, sends some request, and gets a basic response back. However, since masscan has its own TCP stack, it’ll interfere with the operating system’s TCP stack if they are sharing the same IPv4 address. The operating system will reply with a RST packet before the TCP connection can be established.

The way to fix this is to use the built-in packet-filtering firewall to block those packets in the operating-system TCP/IP stack. The masscan program still sees everything before the packet-filter, but the operating system can’t see anything after the packet-filter.

Note that we are talking about the “packet-filter” firewall feature here. Remember that macOS, like most operating systems these days, has two separate firewalls: an application firewall and a packet-filter firewall. The application firewall is the one you see in System Settings labeled “Firewall”, and it controls things based upon the application’s identity rather than by which ports it uses. This is normally “on” by default. The packet-filter is normally “off” by default and is of little use to normal users.

Also note that macOS changed packet-filters around version 10.10.5 (“Yosemite”, October 2014). The older one is known as “ipfw“, which was the default firewall for FreeBSD (much of macOS is based on FreeBSD). The replacement is known as PF, which comes from OpenBSD. Whereas you used to use the old “ipfw” command on the command line, you now use the “pfctl” command, as well as the “/etc/pf.conf” configuration file.

What we need to filter is the source port of the packets that masscan will send, so that when replies are received, they won’t reach the operating-system stack, and will just go to masscan instead. To do this, we need to find a range of ports that won’t conflict with the operating system. Namely, when the operating system creates outgoing connections, it randomly chooses a source port within a certain range. We want masscan to use source ports in a different range.

To figure out the range macOS uses, we run the following command:

sysctl net.inet.ip.portrange.first net.inet.ip.portrange.last

On my laptop, which is probably the default for macOS, I get the following range. Sniffing with Wireshark confirms this is the range used for source ports for outgoing connections.

net.inet.ip.portrange.first: 49152
net.inet.ip.portrange.last: 65535

So this means I shouldn’t use source ports anywhere in the range 49152 to 65535. On my laptop, I’ve decided to use for masscan the ports 40000 to 41023. The range masscan uses must be a power of 2, so here I’m using 1024 (two to the tenth power).

To configure masscan, I can either type the parameter “--source-port 40000-41023” every time I run the program, or I can add the following line to /etc/masscan/masscan.conf. Remember that by default, masscan will look in that configuration file for any configuration parameters, so you don’t have to keep retyping them on the command line.

source-port = 40000-41023
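Any command-line parameter can go in that file as a name = value pair. A minimal /etc/masscan/masscan.conf combining this with the scan settings used in the earlier posts might look like the following (the rate and wait values are just the earlier examples, not requirements):

source-port = 40000-41023
rate = 100000
wait = 40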

Next, I need to add the following firewall rule to the bottom of /etc/pf.conf:

block in proto tcp from any to any port 40000:41023

However, we aren’t done yet. By default, the packet-filter firewall is off on some versions of macOS. Therefore, every time you reboot your computer, you need to enable it. The simple way to do this is to run the following on the command line:

pfctl -e

Or, if that doesn’t work, try:

pfctl -E

If the firewall is already running, then you’ll need to load the file explicitly (or reboot):

pfctl -f /etc/pf.conf

You can check to see if the rule is active:

pfctl -s rules

Some notes on eFail

Post Syndicated from Robert Graham original https://blog.erratasec.com/2018/05/some-notes-on-efail.html

I’ve been busy trying to replicate the “eFail” PGP/SMIME bug. I thought I’d write up some notes.

PGP and S/MIME encrypt emails, so that eavesdroppers can’t read them. The bugs potentially allow eavesdroppers to take the encrypted emails they’ve captured and resend them to you, reformatted in a way that allows them to decrypt the messages.

Disable remote/external content in email

The most important defense is to disable “external” or “remote” content from being automatically loaded. This is when HTML-formatted emails attempt to load images from remote websites. This happens legitimately when they want to display images, but not fill up the email with them. But most of the time this is illegitimate, they hide images on the webpage in order to track you with unique IDs and cookies. For example, this is the code at the end of an email from politician Bernie Sanders to his supporters. Notice the long random number assigned to track me, and the width/height of this image is set to one pixel, so you don’t even see it:

Such trackers are so pernicious they are disabled by default in most email clients. This is an example of the settings in Thunderbird:

The problem is that as you read email messages, you often get frustrated by the error messages and missing content, so you keep adding exceptions:

The correct defense against this eFail bug is to make sure such remote content is disabled and that you have no exceptions, or at least, no HTTP exceptions. HTTPS exceptions (those using SSL) are okay as long as they aren’t to a website the attacker controls. Unencrypted exceptions, though, the hacker can eavesdrop on, so it doesn’t matter if they control the website the requests go to. If the attacker can eavesdrop on your emails, they can probably eavesdrop on your HTTP sessions as well.

Some have recommended disabling PGP and S/MIME completely. That’s probably overkill. As long as the attacker can’t use the “remote content” in emails, you are fine. Likewise, some have recommend disabling HTML completely. That’s not even an option in any email client I’ve used — you can disable sending HTML emails, but not receiving them. It’s sufficient to just disable grabbing remote content, not the rest of HTML email rendering.

I couldn’t replicate the direct exfiltration

There are two related bugs. One allows direct exfiltration, which appends the decrypted PGP email onto the end of an IMG tag (like one of those tracking tags), allowing the entire message to be decrypted.

An example of this is the following email. This is a standard HTML email message consisting of multiple parts. The trick is that the IMG tag in the first part starts the URL (blog.robertgraham.com/…) but doesn’t end it. It has the starting quotes in front of the URL but no ending quotes. The ending quote will be in the next chunk.

The next chunk isn’t HTML, though, it’s PGP. The PGP extension (in my case, Enigmail) will detect this and automatically decrypt it. In this case, it’s some previous email message of mine that the attacker captured by eavesdropping, then pasted into this email message in order to get it decrypted.

What should happen at this point is that Thunderbird will generate a request (if “remote content” is enabled) to the blog.robertgraham.com server with the decrypted contents of the PGP email appended to it. But that’s not what happens. Instead, I get this:

I am indeed getting weird stuff in the URL (the bit after the GET /), but it’s not the PGP decrypted message. Instead, what’s going on is that when Thunderbird puts together a “multipart/mixed” message, it adds its own HTML tags consisting of lines between each part. In the email client it looks like this:

The HTML code it adds looks like:

That’s what you see in the above URL, all this code up to the first quotes. Those quotes terminate the quotes in the URL from the first multipart section, causing the rest of the content to be ignored (as far as being sent as part of the URL).

So at least for the latest version of Thunderbird, you are accidentally safe, even if you have “remote content” enabled. Though, this is only according to my tests, there may be a work around to this that hackers could exploit.

STARTTLS

In the old days, email was sent plaintext over the wire so that it could be passively eavesdropped on. Nowadays, most providers send it via “STARTTLS”, which sorta encrypts it. Attackers can still intercept such email, but they have to do so actively, using man-in-the-middle. Such active techniques can be detected if you are careful and look for them.
Some organizations don’t care. Apparently, some nation states are just blocking all STARTTLS and forcing email to be sent unencrypted. Others do care. The NSA will passively sniff all the email they can in nations like Iraq, but they won’t actively intercept STARTTLS messages, for fear of getting caught.
The consequence is that it’s much less likely that somebody has been eavesdropping on you, passively grabbing all your PGP/SMIME emails. If you fear they have been, you should look (e.g. send emails from GMail and see if they are intercepted by sniffing the wire).
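One way to look: you can manually check whether your mail server offers (and successfully negotiates) STARTTLS using the stock openssl client, here against a hypothetical server:

$ openssl s_client -connect mail.example.com:25 -starttls smtp

If the negotiation fails, or the certificate changes unexpectedly, that’s a hint somebody may be actively interfering.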

You’ll know if you are getting hacked

If somebody attacks you using eFail, you’ll know. You’ll get an email message formatted this way, with multipart/mixed components, some with corrupt HTML, some encrypted via PGP. This means that for the most part, your risk is that you’ll be attacked only once — the hacker will only be able to get one message through and decrypt it before you notice that something is amiss. Though to be fair, they can probably include all the emails they want decrypted as attachments to the single email they sent you, so the risk isn’t necessarily that you’ll only get one decrypted.
As mentioned above, a lot of attackers (e.g. the NSA) won’t attack you if its so easy to get caught. Other attackers, though, like anonymous hackers, don’t care.
Somebody ought to write a plugin to Thunderbird to detect this.

Summary

It only works if attackers have already captured your emails (though, that’s why you use PGP/SMIME in the first place, to guard against that).
It only works if you’ve enabled your email client to automatically grab external/remote content.
It seems to not be easily reproducible in all cases.
Instead of disabling PGP/SMIME, you should make sure your email client has remote/external content disabled — that’s a huge privacy violation even without this bug.

Notes: The default email client on the Mac enables remote content by default, which is bad.

No, Ray Ozzie hasn’t solved crypto backdoors

Post Syndicated from Robert Graham original https://blog.erratasec.com/2018/04/no-ray-ozzie-hasnt-solved-crypto.html

According to this Wired article, Ray Ozzie may have a solution to the crypto backdoor problem. No, he hasn’t. He’s only solving the part we already know how to solve. He’s deliberately ignoring the stuff we don’t know how to solve. We know how to make backdoors, we just don’t know how to secure them.

The vault doesn’t scale

Yes, Apple has a vault where they’ve successfully protected important keys. No, it doesn’t mean this vault scales. The more people and the more often you have to touch the vault, the less secure it becomes. We are talking thousands of requests per day from 100,000 different law enforcement agencies around the world. We are unlikely to protect this against incompetence and mistakes. We are definitely unable to secure this against deliberate attack.

A good analogy to Ozzie’s solution is LetsEncrypt for getting SSL certificates for your website, which is fairly scalable, using a private key locked in a vault for signing hundreds of thousands of certificates. That this scales seems to validate Ozzie’s proposal.

But at the same time, LetsEncrypt is easily subverted. LetsEncrypt uses DNS to verify your identity. But spoofing DNS is easy, as shown by the recent BGP attack against a cryptocurrency wallet service. Attackers can create fraudulent SSL certificates with enough effort. We’ve got other protections against this, such as discovering and revoking the bad SSL certificate, so while damaging, it’s not catastrophic.

But with Ozzie’s scheme, equivalent attacks would be catastrophic, as it would lead to unlocking the phone and stealing all of somebody’s secrets.

In particular, consider what would happen if LetsEncrypt’s certificate was stolen (as Matthew Green points out). The consequence is that this would be detected and mass revocations would occur. If Ozzie’s master key were stolen, nothing would happen. Nobody would know, and evildoers would be able to freely decrypt phones. Ozzie claims his scheme can work because SSL works — but then his scheme includes none of the many protections necessary to make SSL work.

What I’m trying to show here is that in a lab, it all looks nice and pretty, but when attacked at scale, things break down — quickly. We have so much experience with failure at scale that we can judge Ozzie’s scheme as woefully incomplete. It’s not even up to the standard of SSL, and we have a long list of SSL problems.

Cryptography is about people more than math

We have a mathematically pure encryption algorithm called the “One Time Pad”. It can’t ever be broken, provably so with mathematics.

It’s also perfectly useless, as it’s not something humans can use. That’s why we use AES, which is vastly less secure (anything you encrypt today can probably be decrypted in 100 years). AES can be used by humans whereas One Time Pads cannot be. (I learned the fallacy of One Time Pads on my grandfather’s knee — he was a WW II codebreaker who broke German messages from operators who futzed with One Time Pads.)
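
To see both halves of that claim, here is a minimal One Time Pad sketch in Python (the function names and example message are mine). The math is trivial; the human problem is that the pad must be truly random, as long as the message, delivered in advance over a secure channel, and never reused, and people inevitably cut corners on one of those:

import secrets
def otp_encrypt(plaintext: bytes) -> tuple[bytes, bytes]:
    # The pad must be truly random and exactly as long as the message.
    pad = secrets.token_bytes(len(plaintext))
    ciphertext = bytes(p ^ k for p, k in zip(plaintext, pad))
    return ciphertext, pad
def otp_decrypt(ciphertext: bytes, pad: bytes) -> bytes:
    # XOR with the same pad recovers the plaintext; reusing a pad for a
    # second message destroys the security guarantee entirely.
    return bytes(c ^ k for c, k in zip(ciphertext, pad))
ciphertext, pad = otp_encrypt(b"attack at dawn")
assert otp_decrypt(ciphertext, pad) == b"attack at dawn"

Note that you still have to get the pad to the recipient somehow, securely, which is exactly the key-distribution problem that makes the scheme useless in practice.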

The same is true with Ozzie’s scheme. It focuses on the mathematical model but ignores the human element. We already know how to solve the mathematical problem in a hundred different ways. The part we don’t know how to secure is the human element.

How do we know the law enforcement person is who they say they are? How do we know the “trusted Apple employee” can’t be bribed? How can the law enforcement agent communicate securely with the Apple employee?

You think these things are theoretical, but they aren’t. Consider financial transactions. It used to be common that you could just email your bank/broker to wire funds into an account for such things as buying a house. Hackers have subverted that, intercepting messages, changing account numbers, and stealing millions. Most banks/brokers require additional verification before doing such transfers.

Let me repeat: Ozzie has only solved the part we already know how to solve. He hasn’t addressed these issues that confound us.

We still can’t secure security, much less secure backdoors

We already know how to decrypt iPhones: just wait a year or two for somebody to discover a vulnerability. The FBI claims it’s “going dark”, but that’s only true for timely decryption of phones. If they are willing to wait a year or two, a vulnerability will eventually be found that allows decryption.

That’s what happened with the “GrayKey” device that’s been all over the news lately. Apple is fixing things so that it won’t work on new phones, but it still works on old ones.

Ozzie’s solution is based on the assumption that iPhones are already secure against things like GrayKey. Just as he assumes “if Apple already has a vault for private keys, then we can have such vaults for backdoor keys”, he assumes “if Apple already has secure hardware/software protecting the phone, then we can use the same stuff to secure the backdoors”. But we don’t really have secure vaults, and we don’t really have secure hardware/software protecting the phone.

Again, to stress this point, Ozzie is solving the part we already know how to solve, but ignoring the stuff we don’t know how to solve. His solution is insecure for the same reason phones are already insecure.

Locked phones aren’t the problem

Phones are general purpose computers. That means anybody can install an encryption app on the phone regardless of whatever other security the phone might provide. The police are powerless to stop this. Even if they make such encryption a crime, criminals will still use it.

That leads to a strange situation where the only data the FBI will be able to decrypt is that of people who believe they are innocent. Those who know they are guilty will install encryption apps like Signal that have no backdoors.

In the past this was rare, as people found learning new apps a barrier. These days, apps like Signal are so easy even drug dealers can figure out how to use them.

We know how to get Apple to give us a backdoor: just pass a law forcing them to. It may look like Ozzie’s scheme, or it may be something more secure designed by Apple’s engineers. Sure, it will weaken security on the phone for everyone, but those who truly care will just install Signal. Again, we are back to the problem that Ozzie is solving the problem we know how to solve while ignoring the much larger one: preventing people from installing their own encryption.

The FBI isn’t necessarily the problem

Ozzie phrases his solution in terms of U.S. law enforcement. Well, what about Europe? What about Russia? What about China? What about North Korea?

Technology is borderless. A solution in the United States that allows “legitimate” law enforcement requests will inevitably be used by repressive states for what we believe would be “illegitimate” law enforcement requests.

Ozzie sees himself as the hero helping law enforcement protect 300 million American citizens. He doesn’t see what he really is: the villain helping oppress 1.4 billion Chinese, 144 million Russians, and another couple billion people living under oppressive governments around the world.

Conclusion

Ozzie pretends the problem is political, that he’s created a solution that appeases both sides. He hasn’t. He’s solved the problem we already know how to solve. He’s ignored all the problems we struggle with, the problems we claim make secure backdoors essentially impossible. I’ve listed some in this post, but there are many more. Any famous person can create a solution that convinces fawning editors at Wired Magazine, but if Ozzie wants to move forward he’s going to have to work harder to appease doubting cryptographers.

OMG The Stupid It Burns

Post Syndicated from Robert Graham original https://blog.erratasec.com/2018/04/omg-stupid-it-burns.html

This article, pointed out by @TheGrugq, is stupid enough that it’s worth rebutting.

The article starts with the question “Why did the lessons of Stuxnet, Wannacry, Heartbleed and Shamoon go unheeded?”. It then proceeds to ignore the lessons of those things.
Some of the actual lessons should be things like how Stuxnet crossed air gaps, how Wannacry spread through flat Windows networking, how Heartbleed comes from technical debt, and how Shamoon furthers state aims by causing damage.
But this article doesn’t cover the technical lessons. Instead, it thinks the lesson should be a moral one: that we should take these things more seriously. But that’s stupid. It’s the sort of lesson taught by people who know nothing about the topic. When you have nothing of value to contribute, you can always take the moral high road and criticize everyone for being morally weak for not taking the problem seriously. Obviously, since doctors haven’t cured cancer yet, it’s because they don’t take the problem seriously.
The article continues to ignore the lessons of these cyberattacks and instead regales us with a list of military lessons from WW I and WW II. This makes the same mistake that many in the military make: trying to understand cyber through analogies with the real world. It’s not that such lessons could have no value; it’s that this article contains a poor list of them. It seems to consist of a random list of events that appeal to the author rather than events that have any bearing on cybersecurity.
Then, in case we don’t get the point, the article bullies us with hyperbole, cliches, buzzwords, bombastic language, famous quotes, and citations. It’s hard to see how most of them actually apply to the text. Rather, they seem to be included simply because the author really, really likes them.
The article invests much effort in discussing the buzzword “OODA loop”. Most attacks in cyberspace don’t have one. Instead, attackers flail around, trying lots of random things, overcoming defenses with brute force rather than an understanding of what’s going on. That’s obviously the case with Wannacry: it was an accident, with the perpetrator experimenting with what would happen if they added the ETERNALBLUE exploit to their existing ransomware code. The consequence was beyond anybody’s ability to predict.
You might claim that this is just the first stage, that they’ll loop around, observe Wannacry’s effects, orient themselves, decide, then act upon what they learned. Nope. Wannacry burned the exploit. It’s essentially removed any vulnerable systems from the public Internet, thereby making it impossible to use what they learned. It’s still active a year later, with infected systems behind firewalls busily scanning the Internet so that if you put a new system online that’s vulnerable, it’ll be taken offline within a few hours, before any other evildoer can take advantage of it.
See what I’m doing here? Learning the actual lessons of things like Wannacry? The thing the above article fails to do??
The article has a humorous paragraph on “defense in depth”, misunderstanding the term. To be fair, that’s the cybersecurity industry’s fault: it adopted and then redefined the term. That’s why there are two separate articles on Wikipedia: one for the old military term (as used in this article) and one for the new cybersecurity term.
As used in the cybersecurity industry, “defense in depth” means having multiple layers of security. Many organizations put all their defensive efforts on the perimeter, and none inside a network. The idea of “defense in depth” is to put more defenses inside the network. For example, instead of just one firewall at the edge of the network, put firewalls inside the network to segment different subnetworks from each other, so that a ransomware infection in the customer support computers doesn’t spread to sales and marketing computers.
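As a concrete sketch, the internal segmentation can be as simple as a couple of forwarding rules on an internal Linux firewall (the subnet addresses here are hypothetical):

# Hypothetical subnets: 10.0.10.0/24 = customer support, 10.0.20.0/24 = sales/marketing.
# Let replies to existing connections through, but stop support machines from
# initiating connections into the sales subnet, limiting lateral movement.
iptables -A FORWARD -m state --state ESTABLISHED,RELATED -j ACCEPT
iptables -A FORWARD -s 10.0.10.0/24 -d 10.0.20.0/24 -j DROP

The point isn’t these particular rules; it’s that a compromise of one segment shouldn’t automatically become a compromise of the next.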
The article talks about exploiting WiFi chips to bypass defense in depth measures like browser sandboxes. This conflates different types of attacks. A WiFi attack is usually considered a local attack, from somebody next to you in a bar, rather than a remote attack from a server in Russia. Moreover, far from disproving “defense in depth”, such WiFi attacks highlight the need for it. Namely, phones need to be designed so that successful exploitation of other microprocessors (namely the WiFi, Bluetooth, and cellular baseband chips) can’t directly compromise the host system. In other words, once exploited with “Broadpwn”, a hacker would need to extend the exploit chain with another vulnerability in the host’s Broadcom WiFi driver rather than immediately mounting a DMA attack across PCIe. This suggests that if PCIe is used to interface with peripherals in the phone, an IOMMU should be used, for “defense in depth”.
Cybersecurity is a young field. There are lots of useful things that outsider non-techies can teach us. Lessons from military history would be well-received.
But that’s not this story. Instead, this story is by an outsider telling us we don’t know what we are doing, claiming that they do, and then proceeding to prove they don’t know what they are doing either. The argument is based on moral suasion and on bullying us with what appears on the surface to be intellectual rigor, but which is in fact devoid of anything smart.
My fear here is that I’m going to be in a meeting where somebody has read this pretentious garbage, explaining to me why “defense in depth” is wrong and how we need to OODA faster. I’d rather nip this in the bud by pointing out that if you found anything interesting in that article, you are wrong.

Notes on setting up Raspberry Pi 3 as WiFi hotspot

Post Syndicated from Robert Graham original https://blog.erratasec.com/2018/04/notes-on-setting-up-raspberry-pi-3-as.html

I want to sniff the packets for IoT devices. There are a number of ways of doing this, but one straightforward mechanism is configuring a “Raspberry Pi 3 B” as a WiFi hotspot, then running tcpdump on it to record all the packets that pass through it. Google gives lots of results on how to do this, but they all demand that you have the exact same hardware, WiFi adapter, and software versions as the authors, which is a pain.

I got it working using the instructions here. There are a few additional notes, which is why I’m writing this blogpost: so I remember them.
https://www.raspberrypi.org/documentation/configuration/wireless/access-point.md

I’m using the RPi-3-B and not the RPi-3-B+, and the latest version of Raspbian at the time of this writing, “Raspbian Stretch Lite 2018-3-13”.

Some things didn’t work as described. The first was that it couldn’t find the package “hostapd”. The solution was to run “apt-get update” a second time.

The second problem was an error message about NAT not working when trying to set the masquerade rule. That’s because the upgrade updates the kernel, making the running system out-of-date with the files on disk. The solution is to make sure you reboot after upgrading.

Thus, what you do at the start is:

apt-get update
apt-get upgrade
apt-get update      # run a second time so the "hostapd" package can be found
shutdown -r now     # reboot so the running kernel matches the upgraded files

Then it’s just a matter of installing tcpdump and capturing on wlan0, as shown below. This will get the non-monitor-mode Ethernet frames, which is what I want.
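
Concretely (the capture filename is just an example):

apt-get install tcpdump
tcpdump -i wlan0 -w iot-capture.pcap

The -i flag selects the hotspot interface and -w writes the raw packets to a file for later analysis in something like Wireshark.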