Post Syndicated from Robert Graham original https://blog.erratasec.com/2019/08/hacker-jeopardy-wrong-answers-only.html
Among the evening entertainments at DEF CON is “Hacker Jeopardy”, like the TV show Jeopardy, but with hacking tech/culture questions. In today’s blog post, we are going to play the “Wrong Answers Only” version, in which I die upon the hill defending the wrong answer.
YOU’LL LIKELY SHAKE YOUR HEAD WHEN YOU SEE TELNET AVAILABLE, NORMALLY SEEN ON THIS PORT
Post Syndicated from Robert Graham original https://blog.erratasec.com/2019/08/securing-devices-for-defcon.html
There's been much debate over whether you should bring burner devices (phones or laptops) to hacking conventions like DEF CON. A more useful discussion is the list of things you should do to secure the devices you already have before going, just in case.
- back up before you go
- update before you go
- lock your devices correctly, with full-disk encryption
- configure WiFi correctly
- be careful with Bluetooth devices
- watch out for Stingrays (fake cell towers) targeting your mobile phone
Note that a quick Google search for "disable USB" leads to the wrong advice. Those results focus on controlling thumbdrives, which isn't really the threat. Instead, the threat is devices like USB network adapters that redirect network traffic to/from your machine, enabling attacks you think you're immune to because you aren't connected to a network.
Post Syndicated from Robert Graham original https://blog.erratasec.com/2019/07/why-we-fight-for-crypto.html
This last week, Attorney General William Barr called for crypto backdoors. His speech is a fair summary of law enforcement's side of the argument. In this post, I'm going to address many of his arguments.
The tl;dr version of this blog post is this:
- Their claims of mounting crime are unsubstantiated, based on emotional anecdotes rather than statistics. We live in a Golden Age of Surveillance where, if any balancing is to be done in the privacy vs. security tradeoff, it should be in favor of more privacy.
- But we aren't talking about a tradeoff with privacy alone; other rights are at stake. In particular, it's every bit as important to protect the right of political dissidents to keep some communications private (encryption) as it is to allow them to make other communications public (free speech). In addition, there is no solution to the "going dark" problem that doesn't restrict users' freedom to run arbitrary software of their choice on their own computers/phones.
- Thirdly, there is the problem of technical feasibility. We don't know how to make backdoors available for law enforcement that don't enormously reduce security for everyone else.
The crux of his argument is balancing civil rights vs. safety, also described as privacy vs. security. This balance is expressed in the Constitution by the Fourth Amendment, which doesn't grant an absolute right to privacy, but allows police to invade your privacy if they can show an independent judge that they have "probable cause". By making communications "warrant proof", the argument goes, encryption creates a "law free zone" enabling crime to be conducted without the ability of the police to investigate.
It’s a reasonable argument. If your child gets kidnapped by sex traffickers, you’ll be demanding the police do something, anything to get your child back safe. If a phone is found at the scene, you’ll definitely want them to have the ability to decrypt the phone, as long as a judge gives them a search warrant to balance civil liberty concerns.
However, this argument is wrong, as I’ll discuss below.
Law free zones
Barr claims encryption creates a new “law free zone … giving criminals the means to operate free of lawful scrutiny”. He pretends that such zones never existed before.
Of course they’ve existed before. Attorney-client privilege is one example, which is definitely abused to further crime. Barr’s own boss has committed obstruction of justice, hiding behind the law-free zone of Article II of the constitution. We are surrounded by legal loopholes that criminals exploit in order to commit crimes, where the cost of closing the loophole is greater than the benefit.
The biggest “law free zone” that exists is just the fact that we don’t live in a universal surveillance state. I think impure thoughts without the police being able to read my mind. I can whisper quietly in your ear at a bar without the government overhearing. I can invite you over to my house to plot nefarious deeds in my living room.
Technology didn’t create these zones. However, technological advances are allowing police to defeat them.
Businesses have security cameras everywhere. Neighborhood associations are installing license plate readers. We are putting Echo/OK Google/Cortana/Siri devices in our homes, listening to us. Our phones and computers have microphones and cameras. Our TVs increasingly have cameras and mics, too, in case we want to use them for video conferencing or give them voice commands.
Every argument Barr makes about crypto backdoors applies equally to backdoor access to microphones; every argument applies to forcing TVs to have a backdoor allowing police, armed with a warrant, to turn on the camera in your living room. These are all law-free zones that could be "fixed" with backdoors. As long as the police get a warrant issued upon probable cause, every such invasion of privacy is justified by their logic.
I mention your TV specifically because this is what George Orwell portrays in his book 1984. The book opens with Winston Smith exploiting the "law free zone" of a small alcove in his living room, just outside the telescreen's camera pickup, to write seditious thoughts in his diary. This was supposed to be fanciful fiction of something that would never happen, but it's exactly what's happening now.
Law free zones already exist because we don't live in a surveillance state. Yes, we want police to stop crime, but not so much that we want to wear a collar around our neck recording everything we say, tracking our every movement with GPS. Barr's description of the problem pretends that technology created such zones, when the reality is that technology created a way to invade them. He's not asking to restore a balance; he's asking for unbalanced universal surveillance. Every one of his arguments for crypto backdoors applies to these other backdoors as well.
The phone company
Barr makes the point that we regularly mandate that companies change their products in the public interest, and claims that's all he's asking for here. But that's not actually what he's asking for.
Historically, telecommunications (the plain old telephone system) was managed by the government as a utility in the public interest. The government would frequently regulate the balance of competing interests. From this point of view, the above legal argument makes a lot of sense — all that law enforcement is asking for is this sort of balance.
However, the Internet is not that sort of public utility. What makes the Internet different than the older phone system is the “end-to-end principle”, first expressed in the 1970s. In the old days, the phone company was responsible for the apps you ran on your devices. With the Internet, the phone company no longer does apps, but only transmits bits. End-to-end encryption is integrated with the apps, not with the phone service.
[Image: scene from 2001: A Space Odyssey]
Consider pre-Internet sci-fi. It frequently showed people making video phone calls, with the phone company charging what seemed at the time an absurdly low price of only $1.70.
But that's not how things turned out. The phone company offers no video phones. AT&T does not charge you for making a video phone call on its network. Moreover, $1.70 is in fact an absurdly high price: I frequently make 1080p hi-def video calls to Japan and it costs nothing.
Barr’s speech talks about a Mexican drug cartel using WhatsApp’s end-to-end encryption to defeat wiretaps when planning the murders of politicians. That’s an app by Facebook, one of the top 5 corporations in the world, and something easy for governments to regulate. However, WhatsApp’s end-to-end technology is based on Signal, which is free software controlled by nobody. If Barr succeeds in backdooring WhatsApp then all that means is drug cartels will switch to Signal.
At this point, no amount of regulating corporations will fix the problem. Signal is what's known as "open-source": anybody can download it for free, either that specific version or a version they build themselves with any unwanted feature removed.
To regulate this, government will have to instead regulate individuals not corporations or public utilities. They would have to ban unlicensed software that people create themselves. App stores, like that from Apple, would include government review of what’s legal or not. Jailbreaking or installing software outside an app store would be illegal.
In other words, we aren't talking about a slight rebalancing by regulating Facebook; we are talking about an enormously unbalanced cyber dystopia, taking away the fundamental right of people to run software they write themselves on their own computers. Signal is no harder to use than WhatsApp. It's absurd to think Mexican drug cartels wouldn't just switch to Signal if WhatsApp were backdoored.
Barr pretends the balance is expressed in the Fourth Amendment, but from this perspective, it's the Third Amendment that's important, the one forbidding the quartering of troops in our homes. Barr describes CALEA requiring telephone switches to allow wiretaps. But that's regulating a public utility, which, in colonial times, would be akin to the streets, sewers, or water supply. What backdoors demand doesn't affect the utilities, but the phones in our hands, owned by us and not by the utility. Barr demands that we, the consumers, no longer be able to choose what software we run on our devices. We must instead "quarter" government software on our personal devices.
I’m glad Barr brings up Mexican drug cartels using WhatsApp to evade wiretaps to murder and pillage. It sounds like a convincing argument for his side, because it means only small regulation of Facebook to achieve the goal. But since the cartels would obviously switch to Signal in response, we are confronted with what crypto backdoors really mean: a massive overhaul of human rights.
The world is end-to-end. That’s the design of the Internet protocol from the 1970s that makes it different from the phone company. It’s the design of crypto today. There is no way for Barr to achieve “balance” without destruction of this basic principle.
Two tier crypto
Barr claims that consumers don’t need strong crypto. After all, consumers are just protecting messages to friends, not nuclear launch codes.
This is a fallacy well known to cryptographers: the belief in two tiers of encryption, a "consumer level" and a "military grade", the one weaker than the other. It's a cliche people learn from watching too much TV. Such tiers don't exist.
Twenty years ago, our government tried to weaken crypto by limiting keys to 40 bits for export to the rest of the world, while allowing 128 bits for U.S. citizens. That was its way, at the time, of retaining the ability to spy on Mexican drug cartels while protecting citizens. It's an excellent analogy for explaining why there's no such thing as two tiers of crypto.
People's intuition is to treat breaking encryption as linear, as if it's just a matter of trying a little bit harder. You see this in TV and movies, where the hacker just types twice as hard on the keyboard and bypasses the encryption.
But breaking crypto is in fact exponential. Twice as much effort is insignificant.
Take those export-controlled 40-bit keys mentioned above. People imagine that 80-bit keys are twice as secure. That's not true; they are a trillion times more secure. A key that's twice as secure is 41 bits, because each additional bit doubles the number of possible combinations an adversary would have to try in order to crack it. 10 extra bits is a thousand times, 20 bits a million times, 40 bits a million million (a trillion) times.
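This doubling can be checked with a few lines of arithmetic. The sketch below (Python, just illustrating the math in this paragraph) computes how many times larger the brute-force search space gets for a given number of extra key bits:

```python
# Each additional key bit doubles the number of keys a brute-force
# attacker must try, so N extra bits multiply the work by 2**N.
def brute_force_multiplier(extra_bits: int) -> int:
    """How many times harder cracking gets with `extra_bits` more key bits."""
    return 2 ** extra_bits

print(brute_force_multiplier(1))   # "twice as secure" is just one more bit
print(brute_force_multiplier(10))  # ~a thousand times harder
print(brute_force_multiplier(20))  # ~a million times harder
print(brute_force_multiplier(40))  # ~a trillion times harder
```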
Let’s do some math. A popular hobbyist computer right now is the $35 Raspberry Pi. Let’s compare that to the power of a full $1000 desktop computer, and to the NSA buying a million desktop computers with a billion dollars. What size keys can each crack? You’d think that a billion dollars somehow grants near infinite powers vs. the RPi, but it doesn’t. A factor of 10 million means adding 23 bits to the length of the key that can be cracked.
This is shown in the graph below. The y-axis is the number of nanoseconds it takes to crack a key; the x-axis is key length. As you can see, this isn't a linear graph where difficulty slowly rises as keys get longer. It's an exponential graph, where, as keys get longer, the time to crack them goes from nearly zero to nearly infinite. In other words, because of exponential growth, keys are either easily cracked or impossible to crack, with only a fine line between the two extremes.
An RPi can crack any encryption key smaller than 45 bits almost instantly. The NSA, with a billion dollars worth of computers, still can't crack 70-bit keys. Even a key in the middle, such as 64 bits, wouldn't work, because a hacker could buy a day's worth of cloud computing, temporarily creating an NSA-level computer, to crack that one key.
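To make the cliff concrete, here's a small sketch comparing brute-force times at different budgets. The key-trial rates are my own rough assumptions chosen only to illustrate the exponential effect, not measured benchmarks:

```python
# Hypothetical brute-force throughputs (keys tried per second).
# These figures are illustrative assumptions, not benchmarks.
ATTACKERS = {
    "Raspberry Pi":         1e8,   # assumed: a $35 hobbyist board
    "$1000 desktop":        1e10,  # assumed: ~100x the Pi
    "NSA, $1B of desktops": 1e16,  # assumed: a million desktops
}

SECONDS_PER_YEAR = 365 * 24 * 3600

def years_to_crack(key_bits: int, keys_per_second: float) -> float:
    """Expected years to find a key after trying half the keyspace."""
    return (2 ** key_bits / 2) / keys_per_second / SECONDS_PER_YEAR

for name, rate in ATTACKERS.items():
    for bits in (40, 64, 128):
        print(f"{name:>20} vs {bits:3}-bit key: "
              f"{years_to_crack(bits, rate):.2e} years")
```

Under these assumptions, a 40-bit key falls in hours even to the Pi, while a 128-bit key would occupy the billion-dollar cluster for something on the order of 10^14 years. There is no useful middle tier between the two extremes.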
The government's "export grade" crypto is thus nonsense: 40-bit encryption means essentially no encryption, while 128-bit encryption means essentially perfect encryption. The TV cliche of the hacker working harder to bypass the encryption is not reality. If the encryption works, it works against all adversaries; if it doesn't, it works against none. Crypto is either broken by your neighbor's teenager with a computer bought with babysitting money, or a perfect defense against the NSA's billions.
In fact, "military grade" means worse encryption. Military equipment takes years of purchasing-contract negotiation and then must last in the field for decades, so it's woefully out-of-date. In contrast, your iPhone contains the latest developments in crypto. The picture of your pet that you just texted a friend uses crypto vastly better than what's protecting our launch codes. The idea that the military needs better crypto is a fallacy.
[Image: a nuclear missile silo, where they still use floppy disks.]
[Image: Fritzi, sent by my sister via Apple's iMessage, which uses the latest advances in end-to-end encryption.]
Barr repeats this fallacy in another way, talking about "customized encryption used by large business enterprises to protect their operations". Again, that's not a thing. Customized encryption is always worse encryption. The best encryption is the standard, non-customized encryption that consumers use. When you customize it, you start making mistakes.
The government isn’t calling for 40-bit export crypto anymore, but is calling for other weaknesses. Therefore, this discussion of math is only an analogy.
But the underlying concept still applies. Cryptographers don't know how to make crypto that's only slightly weak (99% secure instead of 100%), because any small weakness inevitably gets hacked open into an enormous gaping hole.
Barr derides our concerns as mere "theory", but it's theory backed up by a lot of experience. It's like asking your doctor to prove that losing weight and exercising will improve your health. Our experience in cryptography is that there is no such thing as "a little bit weak". We know of no way to implement the government's backdoor that won't have grave impacts. I might not be able to immediately point out the holes in whatever scheme you've concocted, but that doesn't mean I believe your backdoor scheme has no weaknesses. Decades of experience tell me it's only a matter of time before those weaknesses explode into gaping holes that hackers exploit.
Barr doesn't care whether backdoors are technically feasible. His argument is ultimately premised on the idea that citizens don't have a fundamental right to protect themselves, but should instead rely upon the government to protect them; they should not take the law into their own hands. That backdoors weaken citizens' ability to secure themselves is therefore, to him, not a problem. This is bad: we should have the right to protect ourselves, and crypto backdoors hugely impact that right.
Barr claims the costs aren't abstract, but are measured in real, mounting victims. This is an excellent argument to pay attention to, because the real numbers don't support him.
The fact is that the number of crimes perpetrated is not going up, and the rate of solving crimes and prosecuting perpetrators is not going down. If there were a wave of unsolved crime, then even I admit we'd have to seriously start talking about this issue. I still might not support backdoors, because (as described above) they aren't technically feasible without violating human rights. But I'd be much more motivated to look for alternatives.
But there is no wave of unsolved crime. All that’s mounting is the number of locked phones in their evidence rooms. They are solving crimes because they have all the same old crime fighting abilities available to them that don’t require unlocking phones.
Crime rates have been falling as strong crypto has increased.
The "clearance rate" (the rate at which crimes are solved) is not changing either, due to strong crypto or any other reason.
By Barr’s own arguments, then, crypto backdoors aren’t justified.
So if the issue isn't "crime", what is it? My guess is that the answer is "power". They have evidence rooms full of phones without the power to decrypt them, and that makes them unhappy.
Before we accept the government’s call for more dystopic police power, we should demand that they prove that encryption is actually leading to more crime. This should be based on statistics, not anecdotes like Mexican drug cartels or kidnapped girls, arguments designed to appeal to emotion not logic.
China, Russia, and Jefferson
The argument of balance is often described as the "right to privacy" balanced against the "right to safety/security". I don't think this is correct. I don't think people care that much about privacy; after all, they readily give it up to Facebook, Google, and other companies. It seems reasonable that they should be just as ready to give up privacy in exchange for the additional security provided by law enforcement protecting us against criminals.
Instead, the balance people care about is the abuse of power by the government. The balance is between security provided by the government vs. security threats coming from the government. It’s a balance of security vs. security.
Another way of looking at it is that privacy isn't monolithic; there are different kinds of privacy. I don't care (much) if government employees spy on me while I'm naked in the shower, as there's not much they can do to abuse that information. I care a lot about them being able to secretly turn on the microphone in my living room and eavesdrop on my private conversations, because that's exactly the sort of power governments are known to abuse.
A quote often attributed to Edward Snowden is:
“Saying you don’t care about privacy because you have nothing to hide is like saying you don’t care about free speech because you have nothing to say.”
Is this comparison really valid? Or is it a false equivalence?
China and Russia show us the answer to this question. Both have cracked down on encrypted communications. China mandates devices have a backdoor whereby the government can access anything on a phone, encrypted or not. Russia has cracked down on Telegram, an encrypted messaging app popular in Russia. Both cases have been motivated by their desire to crack down on dissidents.
We therefore see that the two are equivalent in Snowden's quote. For dissidents in China and Russia, it's every bit as important to keep some communications private (i.e. encrypted) as it is to be allowed to make other communications public (i.e. free speech).
Thus, the debate isn’t whether the U.S. government should have this power, but whether governments in general should have this power. If it were only the U.S., we might trust them with backdoors, because the U.S. is a free country and not a totalitarian state. But that’s the same as saying that we trust our current government to regulate speech because they’d never restrict political speech the way they do in China and Russia.
We have a free society because our government operates under these restrictions. If you remove the restrictions because you trust the government, that'll only lead to a government that abuses its power.
It's obvious the U.S. government abuses any power we give it. Take "civil asset forfeiture" as an example. Sure, it seems reasonable that if you're convicted of a crime, you should have to forfeit its proceeds. But it's gotten out of control. If the police pull you over and see that you have $5000, they can just seize it, without convicting you of a crime, without even charging you with one. The Supreme Court allows it under weird legal fictions, pretending they aren't depriving you of property without due process of law: it's not you who is charged with a crime, but the object they are confiscating, which is why you see odd court cases with names like "United States v. $124,700 in U.S. Currency".
Or take the “border search exception”. Obviously, if you are going to control the borders, it’s reasonable to search people’s belongings for smuggled goods, searches that would be unreasonable when not done on the border. However, this power is now abused for things like searching your phone. This is unreasonable, because there’s nothing you’d smuggle on the phone when crossing the border that you couldn’t more easily transfer over the Internet. Yet, the Supreme Court allows it under various legal fictions.
Or take the Snowden revelations about the government grabbing all phone call records of the past 7 years without a warrant. Or the fact they grab all your financial records the same way, including every credit card purchase. Or how they used to grab all your cell phone GPS records until finally the courts added a warrant requirement in the recent Carpenter decision (though only in limited cases).
Or take the drug war in general. Barr mentions drug traffickers numerous times to justify himself. But the war on drugs has been an enormous abuse by our government. Because of the drug war, our incarceration rate has exploded. At 655 per 100,000, it is the highest in the world; in European countries, that number is around 100, and in Japan it's 41. I'm glad Barr focuses on drug traffickers, because the war on drugs has resulted in obvious government abuse of power. Drug crimes aren't a reason to give government more power, but a reason to give it less. That's why our country is legalizing marijuana right now: it's less harmful than alcohol or tobacco, so keeping it illegal is stupid, while at the same time fostering abusive government power.
Barr cites several Supreme Court cases to justify his legal position. But "legal" doesn't mean "right". Our entire country is based upon legal actions by the English government that the colonists nonetheless decided were illegitimate. Just because the Supreme Court allows something as "constitutional" doesn't mean it's not one of those abuses and usurpations designed to reduce citizens under absolute despotism. The fact that the Supreme Court seems unable or unwilling to curb current abuses of our government, especially with regard to technological change, isn't an argument that crypto backdoors are justified (as Barr argues), but an argument for why we need to vigorously oppose them.
The Ninth Amendment says that "the enumeration of certain rights shall not be construed to deny or disparage others retained by the people". But that's exactly what Barr did in his speech, talking about the "Supreme Court taking steps to ensure that advances in technology do not unduly tip the scales against public safety". The Supreme Court can do no such thing. The right to encrypted communications, or the right to run whatever software you want on your computer, is not enumerated in the Constitution, and thus the Supreme Court can never consider it. It can never balance these rights against public safety. But they are important rights nonetheless.
Yes, it's reasonable to balance privacy and security, but we aren't talking about privacy in general. Changes in technology have demonstrated that encrypted communication is its own thing, separate from our other privacy concerns.
So now let’s go back and revisit what sounds like a reasonable argument that the Fourth Amendment balances privacy and security.
There is no evidence of an imbalance. Crime rates aren't increasing, and clearance rates (of solving crimes) aren't decreasing. Far from "going dark", we live in a Golden Age of Surveillance, where police are able to grab our GPS records, credit card receipts, phone metadata, and other records, often without a warrant. It's impractical to travel anonymously in the United States, as the government gets a copy of plane and train records, and is increasingly blanketing the country with license plate readers to track our cars. If a rebalancing of the "privacy vs. security" equation is needed, it's in favor of privacy.
But we aren't talking about that balance. We are instead balancing "security vs. security". It has become obvious that the privacy of encrypted communications is a wholly separate concern from other privacy issues. Even though we rely upon government to provide public safety, we are also in danger from governments that abuse their power to repress citizens. It is every bit as important for political dissidents that we protect private communications (with encryption) as that we protect public communications (free speech).
Thirdly, we have purely technical problems. Cryptographers tell us, convincingly, that there’s no such thing in cryptography as a difference between consumer-grade security and military-grade security. Any backdoor in security for law enforcement compromises the ability of citizens to protect themselves. Similarly, it’s not about regulating the products/services big corporations like AT&T or Facebook put in our hands. Instead, it’s about regulating the software we ourselves choose to install on our devices. There is no solution to Barr’s scenarios that doesn’t involve outlawing such software, removing the right of citizens to install their own software.
Post Syndicated from Robert Graham original https://blog.erratasec.com/2019/06/censorship-vs-memes.html
- you can’t yell fire in a crowded movie theater
- but this speech is harmful
- Karl Popper’s Paradox of Tolerance
- censorship/free-speech don’t apply to private organizations
- Twitter blocks and free speech
You can't yell fire in a crowded movie theater

In other words, how can this phrase be used to justify censoring the thing you are trying to censor, and yet be an invalid justification for censoring those things (like draft protests) you don't want censored?
What this phrase actually means is that because it’s okay to suppress one type of speech, it justifies censoring any speech you want. Which means all censorship is valid. If that’s what you believe, just come out and say “all censorship is valid”.
But this speech is harmful or invalid
That's what everyone says. In the history of censorship, nobody has ever wanted to censor good speech, only speech they claimed was objectively bad, invalid, unreasonable, malicious, or otherwise harmful.
It's just that everybody has a different definition of what actually is bad, harmful, or invalid. It's like the movie theater quote. For example, China's constitution proclaims freedom of speech, yet the government blocks all mention of the Tiananmen Square massacre because it's "harmful". Its "Great Firewall of China" is famous for blocking most of the content of the Internet that the government claims harms its citizens.
I put some photos of the Tiananmen anniversary mass vigil in #Hongkong last night onto Wechat and my account has been suspended for “spreading malicious rumours”. The #China of today… pic.twitter.com/F6e2exsgGE
— Stephen McDonell (@StephenMcDonell) June 5, 2019
At least in case of movie theaters, the harm of shouting “fire” is immediate and direct. In all these other cases, the harm is many steps removed. Many want to censor anti-vaxxers, because their speech kills children. But the speech doesn’t, the virus does. By extension, those not getting vaccinations may harm people by getting infected and passing the disease on. But the speech itself is many steps removed from this, and there’s plenty of opportunity to counter this bad speech with good speech.
Thus, this argument becomes that all speech can be censored, because I can also argue that some harm will come from it.
Karl Popper’s Paradox of Tolerance
This is just a logical fallacy, using different definitions of “tolerance”. The word means “putting up with those who disagree with you”. The “paradox” comes from allowing people free-speech who want to restrict free-speech.
But people are shifting the definition of "tolerance" to refer to white-supremacists, homophobes, and misogynists. That's also intolerance, of people different from you, but it's not the intolerance Popper is talking about. It's not a paradox to allow the free-speech of homophobes, because they aren't trying to restrict anybody else's free-speech.
Today's white-supremacists in the United States don't oppose free-speech; quite the opposite. They champion free-speech and complain the most about restrictions on their speech. Popper's Paradox doesn't apply to them. Sure, the old Nazis in Germany also restricted free-speech, but that's distinct from their racism, and not what modern neo-Nazis are championing.
Ironically, the intolerant people Popper refers to in his Paradox are precisely the ones quoting it with the goal of restricting speech. Sure, you may be tolerant in every other respect (foreigners, other races, other religions, gays, etc.), but if you want to censor free-speech, you are intolerant of people who disagree with you. Popper wasn’t an advocate of censorship, his paradox wasn’t an excuse to censor people. He believed that “diversity of opinions must never be interfered with”.
Censorship doesn’t apply to private organizations
Free speech rights, as enumerated by the First Amendment, only apply to government. Therefore, it’s wrong to claim the First Amendment protects your Twitter or Facebook post, because those are private organizations. The First Amendment doesn’t apply to private organizations. Indeed, the First Amendment means that government can’t force Twitter or Facebook to stop censoring you.
But “free speech” doesn’t always mean “First Amendment rights”. Censorship by private organizations is still objectionable on “free speech” grounds. Private censorship by social media isn’t suddenly acceptable simply because government isn’t involved.
Our rights derive from underlying values of tolerance and pluralism. We value the fact that even those who disagree with us can speak freely. The word “censorship” applies both to government and private organizations, because both can impact those values, both can restrict our ability to speak.
Private organizations can moderate content without it being “censorship”. On the same page where Wikipedia states that it won’t censor even “exceedingly objectionable/offensive” content, it also says:
Wikipedia is free and open, but restricts both freedom and openness where they interfere with creating an encyclopedia.
In other words, it will delete content that doesn’t fit its goals of creating an encyclopedia, but won’t delete good encyclopedic content just because it’s objectionable. The first isn’t censorship, the second is. It’s not “censorship” when the private organization is trying to meet its goals, whatever they are. It’s “censorship” when outsiders pressure/coerce the organization into removing content they object to that otherwise meets the organization’s goals.
Another way of describing the difference is the recent demonetization of Steven Crowder’s channel by YouTube. People claim YouTube should’ve acted earlier, but didn’t because they are greedy. This argument demonstrates their intolerance. They aren’t arguing that YouTube should remove content in order to achieve its goals of making money. They are arguing that YouTube should remove content they object to, despite hurting the goal of making money. The first wouldn’t be censorship, the second most definitely is.
So let’s say you are a podcaster. Certainly, don’t invite somebody like Crowder on your show, for whatever reason you want. That’s not censorship. Let’s say you do invite him on your show, and then people complain. That’s also not censorship, because people should speak out against things they don’t like. But now let’s say that people who aren’t listeners to your show anyway pressure/coerce you into removing Crowder, just because they don’t want anybody to hear what Crowder has to say. That’s censorship.
That’s what happened recently with Will Hurd, a congressman from Texas who has sponsored cybersecurity legislation, and who was invited to speak at Black Hat, a cybersecurity conference. Many people who disliked his non-cybersecurity politics objected and pressured Black Hat into dis-inviting him. That’s censorship: one side refusing to tolerate a politician from the opposing side.
All these arguments about public vs. private censorship are repeats of those made for decades. You can see them in this TV show (WKRP in Cincinnati) about Christian groups trying to censor obscene song lyrics, which was a big thing in the 1980s.
This section has so far been about social media, but the same applies to private individuals. When terrorists (private individuals) killed half the staff at Charlie Hebdo for making cartoons featuring Muhammad, everyone agreed this was a freedom of speech issue. When South Park was censored due to threats from Islamic terrorists, people likewise claimed it was a free speech issue.
In Russia, the police rarely arrest journalists. Instead, youth groups and thugs beat them up. Russia has one of the worst track records on freedom of speech, but it’s mostly private individuals who are responsible, not the government.
These days in America, people justify Antifa’s militancy, which tries to restrict the free speech of those they label as “fascists”, because it’s not government restriction. It’s just private individuals attacking other private individuals. It’s no more justified than any of these other violent attacks on speech.
Twitter blocks and free speech
The previous parts are old memes. There’s a new meme: that somehow Twitter “blocks” are related to free speech.
That’s nonsense. If I block you on Twitter, then the only speech I’m preventing you from seeing is my own. It also prevents me from seeing some (but not all) stuff you post, but again, the only one affected by this block is me. It doesn’t stop others from seeing your content. Censorship is about stopping others from hearing speech that I object to. If no others are involved, it’s not censorship. In particular, while you are free to speak anything you want, I’m likewise free to ignore you.
Sure, there are separate concerns when the President simultaneously uses his Twitter account for official business and also blocks people. That’s a can of worms that I don’t want to get into. But it doesn’t apply to us individuals.
The pro-censorship arguments people are making today are the same arguments people have been making for thousands of years, such as when ancient Rome had the office of “censor” who (among other duties) was tasked with restricting harmful speech. Those arguing for censorship of speech they don’t like believe that somehow their arguments are different. They aren’t. It’s the same bankrupt memes made over and over.
Post Syndicated from Robert Graham original https://blog.erratasec.com/2019/05/some-raspberry-pi-compatible-computers.html
I noticed this spreadsheet over at r/raspberry_pi reddit. I thought I’d write up some additional notes.
Consider the Upboard, an x86 computer in the Raspberry Pi form factor for $99. When you include storage, power supplies, heatsinks, cases, and so on, it’s actually pretty competitive. It’s not ARM, so many things built for the Raspberry Pi won’t necessarily work. But on the other hand, most of the software built for the Raspberry Pi was originally developed for x86 anyway, so sometimes it’ll work better.
Consider the quasi-RPi boards that support the same GPIO headers, but in a form factor that’s not the same as the RPi. A good example is the ODroid-N2. These aren’t listed in the above spreadsheet, but there are a ton of them. Only two Nano Pis are listed in the spreadsheet as having the same form factor as the RPi, but there are around 20 different actual boards with all sorts of different form factors and capabilities.
Consider the heatsink, which can make a big difference in the performance and stability of the board. You can put a small heatsink on any board, but you really need larger heatsinks and possibly fans. Some boards, like the ODroid-C2, come with a nice large heatsink. Other boards have a custom-designed large heatsink you can purchase along with the board for around $10. The Raspberry Pi, of course, has numerous third-party heatsinks available. Whether or not there’s a nice large heatsink available is an important buying criterion. That spreadsheet should have a column for “Large Heatsink”, indicating whether one is “Integrated” or “Available”.
Consider power consumption and heat dissipation as buying criteria. Uniquely among the competing devices, the Raspberry Pi uses a CPU fabbed on a 40nm process, whereas most of the competitors use 28nm or even 14nm. That means it consumes more power and produces more heat than any of its competitors, by a large margin. The Intel Atom CPU mentioned above is actually one of the most power efficient, being fabbed on a 14nm process. Ideally, that spreadsheet would have two additional columns for power consumption (and hence heat production) at “Idle” and “Load”.
You shouldn’t really care about CPU speed. But if you do, there are basically two classes of CPU: in-order and out-of-order. For the same GHz, out-of-order CPUs are roughly twice as fast as in-order CPUs. The Cortex A5, A7, and A53 are in-order. The Cortex A17, A72, and A73 (and the Intel Atom) are out-of-order. The spreadsheet also lists some NXP i.MX series processors, but those are actually ARM Cortex designs. I don’t know which, though.
The spreadsheet lists memory type, like LPDDR3 or DDR4, but is unclear as to speed. Two things determine speed: the clock rate (MHz/GHz) and the width, typically either 32 bits or 64 bits. By “64 bits” we can mean a single channel that’s 64 bits wide, as in the case of the Intel Atom processors, or two channels that are each 32 bits wide, as in the case of some ARM processors. The Raspberry Pi has incredibly anemic 32-bit 400-MHz memory, whereas some competitors have 64-bit 1600-MHz memory, roughly 8 times the speed. For CPU-bound tasks this isn’t so important, but a lot of tasks are in fact bound by memory speed.
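The difference is easy to quantify with back-of-the-envelope arithmetic. A quick sketch (the figures are peak theoretical rates, not measured throughput, and the helper name is made up for illustration):

```python
# Peak memory bandwidth = bus width x clock rate (a rough sketch; real
# throughput also depends on DDR transfer rates, timings, and the controller).

def bandwidth_gbps(width_bits, mhz):
    """Peak transfer rate in gigabits per second."""
    return width_bits * mhz * 1e6 / 1e9

rpi = bandwidth_gbps(32, 400)       # Raspberry Pi: 32-bit, 400 MHz
other = bandwidth_gbps(64, 1600)    # a competitor: 64-bit, 1600 MHz

print(rpi, other, other / rpi)      # 12.8 102.4 8.0
```

Doubling the width and quadrupling the clock is where the "roughly 8 times" figure comes from.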
As for GPUs, most are not OpenCL programmable, but some are. The VideoCore and Mali 4xx (Utgard) GPUs are not programmable. The Mali Txxx (Midgard) are programmable. The “MP2” suffix means two GPU processors, whereas “MP4” means four GPU processors. For a lot of tasks, such as “SDR” (software defined radio), offloading onto GPU simultaneously reduces power consumption (by a lot) while increasing speed (usually 2 to 4 times).
Micro-USB is horrible for a power supply, which is why most of the competing devices either offer a “barrel” connector as an option or require one. In other words, don’t assume micro-USB is adequate just because it’s the only option on the Raspberry Pi; the fact that barrel connectors are more common among the competitors should convince you that micro-USB isn’t adequate. You can actually buy USB cables with barrel connectors cheaply from Amazon.com, so in practice it doesn’t make much of a difference. I mention this because I hook mine up to “multiport chargers” and don’t want the separate wall-wart power supply you’d normally need when using a barrel connector; I want just the USB cable instead.
Likewise, most of the competing devices offer eMMC built-in or as an option. This should convince you that booting from micro-SD cards is not adequate. There is no way to turn off the Raspberry Pi without risking corrupting the SD card. eMMC is also a lot faster, sometimes 10x faster. However, I use a $10 USB-to-SATA adapter and a $20 SATA drive on my RPi to boot the operating system through the USB port, so this deficiency can be worked around.
The spreadsheet lists which operating systems the device is compatible with. In this, the Raspberry Pi shines, compatible with almost any of them. However, lurking underneath this list is which kernel version the operating systems might use. A good example is the ODroid-C2, which has a newer distribution of Ubuntu 18 userland utilities, but is stuck on the ancient and crusty 3.19 version of the kernel from February 2015, over four years old.
The spreadsheet lists which devices include support for infrared. This is presumably to indicate how well it can be integrated into a home entertainment setup. However, it should also list which ones support the CEC channel on HDMI. This allows the various devices to control and be controlled by each other. If you want to change the channel with your RPi processing voice commands via microphone, then you want CEC support rather than infrared support. The RPi has good CEC support, but I don’t know about the other devices.
Because there is so much support for the Raspberry Pi, it’s hard not to choose that platform. Things just tend to work. If you are doing maker projects, then get an RPi Model B+.
But it’s inferior to its competitors in almost every way, especially in the amount of power it consumes and the heat it produces.
For home server needs, I’m using a ROCK64 at the moment. It consumes much less power, has real gigabit Ethernet, costs only $25, and has USB 3.0 for much faster access to SSDs. It doesn’t have WiFi, but if I wanted WiFi in a server, I’d probably go with the Pine H64-B.
Post Syndicated from Robert Graham original https://blog.erratasec.com/2019/05/your-threat-model-is-wrong.html
Several subjects have come up within the past week that all come down to the same thing: your threat model is wrong. Instead of addressing the threat that exists, you’ve morphed the threat into something else that you’d rather deal with, or which is easier to understand.
An example is this question that misunderstands the threat of “phishing”:
Should failing multiple phishing tests be grounds for firing? I ran into a guy at a recent conference, said his employer fired people for repeatedly falling for (simulated) phishing attacks. I talked to experts, who weren’t wild about this disincentive. https://t.co/eRYPZ9qkzB pic.twitter.com/Q1aqCmkrWL
— briankrebs (@briankrebs) May 29, 2019
Recently, my university sent me an email for mandatory Title IX training, not digitally signed, with an external link to the training, that requested my university login creds for access, that was sent from an external address but from the Title IX coordinator.
— Tyler Pieron (@tyler_pieron) May 29, 2019
- Windows vulns
- something else exposed to the public Internet
- automatic updates of a popular product
- Low-grade infection of individual desktops, probably from phishing, which the IT department regularly cleans up without too much trouble.
- Crippling infections of the entire network that spreads via Windows networking credentials (often using ‘psexec’).
$600M+ @ Merck. $600M+ @ FedEx to name but two victims out of 200,000+ who’ve been hit with attacks using two of the stolen NSA tools.
— Nicole Perlroth (@nicoleperlroth) May 29, 2019
Post Syndicated from Robert Graham original https://blog.erratasec.com/2019/05/almost-one-million-vulnerable-to.html
Microsoft announced a vulnerability in its “Remote Desktop” product that can lead to robust, wormable exploits. I scanned the Internet to assess the danger. I found nearly 1 million devices on the public Internet that are vulnerable to the bug. That means when the worm hits, it’ll likely compromise those million devices. This will likely lead to an event as damaging as WannaCry and notPetya from 2017 — potentially worse, as hackers have since honed their skills exploiting these things for ransomware and other nastiness.
Post Syndicated from Robert Graham original https://blog.erratasec.com/2019/05/a-lesson-in-journalism-vs-cybersecurity.html
A recent NYTimes article blaming the NSA for a ransomware attack on Baltimore is typical bad journalism. It’s an op-ed masquerading as a news article. It cites many sources to support the conclusion that the NSA is to blame, but only a single quote, from the NSA director, from the opposing side. Yet many experts oppose this conclusion, such as @dave_maynor, @beauwoods, @daveaitel, @riskybusiness, @shpantzer, @todb, @hrbrmstr, … It’s not that these people are hard to find, it’s that the story’s authors didn’t look.
The main reason experts disagree is that the NSA’s EternalBlue isn’t actually responsible for most ransomware infections. It’s almost never used to start the initial infection — that’s almost always phishing or website vulns. Once inside, it’s almost never used to spread laterally — that’s almost always done with Windows networking and stolen credentials. Yes, ransomware increasingly includes EternalBlue as part of its arsenal of attacks, but this doesn’t mean EternalBlue is responsible for ransomware.
The NYTimes story takes extraordinary effort to jump around this fact, deliberately misleading the reader to conflate one with the other. A good example is this paragraph:
That link is a warning from last July about the “Emotet” ransomware and makes no mention of EternalBlue. Instead, the story is citing anonymous researchers claiming that EternalBlue has been added to Emotet since that DHS warning.
Who are these anonymous researchers? The NYTimes article doesn’t say. This is bad journalism. The principles of journalism are that you are supposed to attribute where you got such information, so that the reader can verify for themselves whether the information is true or false, or at least, credible.
And in this case, it’s probably false. The likely source for that claim is this article from Malwarebytes about Emotet. They have since retracted this claim, as the latest version of their article points out.
In any event, the NYTimes article claims that Emotet is now “relying” on the NSA’s EternalBlue to spread. That’s not the same thing as “using” it, not even close. Yes, lots of ransomware has been updated to also use EternalBlue to spread. However, what ransomware is relying upon is still the Windows-networking/credential-stealing/psexec method. Because the actual source of this quote is anonymous, we the reader have no way of challenging what appears to be a gross exaggeration. The reader is led to believe the NSA’s EternalBlue is primarily to blame for ransomware spread, rather than the truth that it’s only occasionally responsible.
Likewise, anonymous experts claim that without EternalBlue, “the damage would not have been so vast”:
Again, I want to know who those experts are, and whether this is a fair quote of what they said. What makes ransomware damage “vast” is almost entirely whether it can spread via Windows networking with admin privileges. For the most part, ransomware attacks are binary. Either they are harmless, infecting a few desktop computers via a phishing attack, which IT cleans up without trouble. Or the ransomware gains Domain Admin privileges, then spreads through the entire network via Windows-networking/psexec, which destroys the entire network, as we saw in attacks like those in Baltimore and Atlanta.
Yes, it’s true, EternalBlue does make devastating attacks more likely. It’s not for nothing that hackers are including it in their malware. It’s certainly possible that EternalBlue was the thing responsible here, that without it, the “RobinHood” infection might not have spread to the Domain Controllers — and then to the rest of the network via psexec. But the article does not claim this. It’s not citing specific evidence of this fact that we can challenge, but is handwaving over the entire problem, talking in vague generalities that we can’t challenge.
Instead of blaming the NSA, the blame resides with the hackers themselves, or with the city of Baltimore for irresponsible management. Yes, there’s good reason to heap some of the blame on the NSA for the WannaCry and notPetya attacks from two years ago, but it’s absurd to blame them now. Windows is a system that needs regular patches. Going two years without a patch is gross malfeasance that’s hard to lay at the NSA’s feet. Even if the claim experts find implausible were true, that Baltimore was indeed devastated by the NSA’s EternalBlue, then Baltimore has only itself to blame for not patching for two years.
Had the NSA done the opposite thing and notified Microsoft of the vuln instead of exploiting it, then Microsoft would’ve released a patch for it. In such cases, hackers get around to writing exploits anyway. They likely would not have done so in the quick time frame of WannaCry and notPetya, which came only a couple months after EternalBlue was first disclosed. But they certainly would have within two years. We’ve seen that with many other bugs where only patches were released. The “Conficker” bug in Windows is still being exploited 10 years after the patch was released, and hackers independently figured out how to exploit it.
In other words, if EternalBlue is responsible for the Baltimore ransomware attack, it would’ve been regardless of whether the NSA had weaponized an exploit or done the “responsible” thing and worked with Microsoft to patch it. After two years, exploits would exist either way.
Indeed, the exploit the hackers are including in their malware is often an independent creation and not the NSA’s EternalBlue at all. This work shows how much hackers can independently develop these things without help from the NSA. Again, the story seems to credit the NSA for the genius of making the vuln useful instead of “EternalBlueScreen”, but for malware/ransomware, it’s largely the community that has done this work.
All this expert discussion is, of course, fairly technical. The point isn’t that a NYTimes reporter should know all this to begin with, only that they should get both sides of a story and actually interview experts who might have opposing opinions. They should not allow those supporting their claims to hide behind anonymity where technical details cannot be challenged. Otherwise, it’s an op-ed pushing an agenda and not a news article reporting the news.
- Ransomware devastation spreads primarily through Windows networking/psexec, not exploits like EternalBlue. It’s things like psexec that are to blame, not the NSA.
- Two years after Microsoft released a patch, exploits would exist regardless of whether the NSA had weaponized the 0day or followed responsible disclosure, so they aren’t to blame for an exploit being used now.
- There are experts all over the place with opposing views. That the article ignores them, and protects its own sources behind anonymity, means it’s not a journalistic “article” but an “op-ed” pushing an agenda.
By the way, many other experts have great comments I would love to repeat here, comments that would make such a story better. A good example is this one:
I’ve answered this question privately it’s time to address it publicly: why do I have such a problem with @nicoleperlroth story? I was alerted to it by a private group of CIOs with take away was “we are doing the right thing and if hit by military grade cyber…we are good”
— Cyber Baba Yaga (@Dave_Maynor) May 27, 2019
Dave Aitel also has some good comments https://cybersecpolitics.blogspot.com/2019/05/baltimore-is-not-eternalblue.html.
Post Syndicated from Robert Graham original https://blog.erratasec.com/2019/04/programming-languages-infosec.html
Code is an essential skill of the infosec professional, but there are so many languages to choose from. What language should you learn? As a heavy coder, I thought I’d answer that question, or at least give some perspective.
Also, tl;dr: whatever language you decide to learn, also learn how to use an IDE with visual debugging, rather than just a text editor. That probably means Visual Studio Code from Microsoft. Also, whatever language you learn, stash your code on GitHub.
Let’s talk in general terms. Here are some types of languages.
- Development languages. Those scripting languages have grown up into real programming languages, but for the most part, “software development” means languages designed for that task like C, C++, Java, C#, Rust, Go, or Swift.
- Domain-specific languages. The language Lua is built into nmap, snort, Wireshark, and many games. Ruby is the language of Metasploit. Further afield, you may end up learning languages like R or Matlab. PHP is incredibly important for web development. Mobile apps may need Java, C#, Kotlin, Swift, or Objective-C.
As an experienced developer, here are my comments on the various languages, sorted in alphabetic order.
bash (and other Unix shells)
You have to learn some bash for dealing with the command-line. But it’s also a fairly complete programming language. Peruse the scripts in an average Linux distribution, especially some of the older ones, and you’ll find that bash makes up a substantial amount of what we think of as the Linux operating system. Actually, it’s called bash/Linux.
In the Unix world, there are lots of other related shells that aren’t bash, which have slightly different syntax. A good example is BusyBox which has “ash”. I mention this because my bash skills are rather poor partly because I originally learned “csh” and get my syntax variants confused.
C
This is the development language I use the most, simply because I’m an old-time “systems” developer. What “systems programming” means is simply that you have manual control over memory, which gives you about 4x the performance and better “scalability” (performance doesn’t degrade as much as problems get bigger). It’s the language of the operating system kernel, as well as many libraries within an operating system.
But if you don’t want manual control over memory, then you don’t want to use it. Its lack of memory protection, leading to security problems, makes it almost obsolete.
C++
C++ has none of the benefits of modern languages like Rust, Java, and C#, but all of the problems of C. It’s an obsolete, legacy language to be avoided.
C#
This is Microsoft’s personal variant of Java, designed to be better than Java. It’s an excellent development language for command-line utilities, back-end services, applications on the desktop (even Linux), and mobile apps. If you are working in a Windows environment at all, it’s an excellent choice. If you can at all use C# instead of C++, do so. Also, in the Microsoft world, there is still a lot of VisualBasic. OMG avoid that like the plague that it is, burn it in a fire burn burn burn, and use C# instead.
Go
Once a corporation reaches a certain size, it develops its own programming language. For Google, the most important such language is Go.
Go is a fine language in general, but its main purpose is scalable network programs using goroutines. These do asynchronous user-mode programming in a way that’s most convenient for the programmer. Since Google is all about scalable network services, Go is a perfect fit for them.
I do a lot of scalable network stuff in C, because I’m an oldtimer. If that’s something you’re interested in, you should probably choose Go over C.
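To illustrate the style of programming goroutines enable (sketched in Python’s asyncio rather than Go, to keep examples in one language, and with made-up function names), here’s what spawning many lightweight concurrent tasks looks like:

```python
import asyncio

# Rough analogue of spawning goroutines: many lightweight tasks that read
# like sequential code but run concurrently on one thread.
async def probe(host):
    await asyncio.sleep(0)          # stand-in for real network I/O
    return f"{host}: ok"

async def main():
    # like "go probe(h)" for each host, then waiting on all the results
    return await asyncio.gather(*(probe(f"host{i}") for i in range(3)))

print(asyncio.run(main()))          # ['host0: ok', 'host1: ok', 'host2: ok']
```

Go’s version needs no event loop or `async` keywords; that convenience is the point of goroutines.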
Java
This gets a bad reputation because it was once designed for browsers, but has so many security flaws that it can’t be used in browsers anymore. You still find in-browser apps that use Java, even in infosec products (like consoles), but it’s horrible for that. If you do this, you are bad and should feel bad.
But browsers aside, it’s a great development language for command-line utilities, back-end services, apps on desktops, and apps on phones. If you want to write an app that runs on macOS, Windows, and on a Raspberry Pi running Linux, then this is an excellent choice.
BTW, “JSON” is also a language, or at least a data format, in its own right. So you have to learn that, too.
Lua
Lua is a small language designed to be embedded in other programs, so you find it in security tools like nmap, snort, and Wireshark. You also see it as the scripting language in popular games. Like Go, it has extremely efficient coroutines, so you see it in the nginx web server, “OpenResty”, for backend scripting of applications.
Perl
In addition to being the classic Unix text-processing language, it was the primary web scripting language for building apps on servers in the 1990s, before PHP came along.
Thus, it remains a popular legacy language, but not a lot of new stuff is written in it.
PHP
PHP may be incredibly important for existing web development, but for writing new web apps, it’s obsolete. There are so many unavoidable security problems that you should avoid using it to create new apps. Also, scalability is still difficult. Use NodeJS, OpenResty/Lua, or Ruby instead.
PowerShell
The same comments that apply to bash above also apply to PowerShell, except that PowerShell is for Windows.
Windows has two command-lines, the older CMD/BAT command-line, and the newer PowerShell. Anything complex uses PowerShell these days. For pentesting, there are lots of fairly complete tools for doing interesting things from the command-line written in the PowerShell programming language.
Thus, if Windows is in your field, and it almost certainly is, then PowerShell needs to be part of your toolkit.
Python
This has become one of the most popular languages, driven by universities, which use it heavily as the teaching language for programming concepts. Anything academic, like machine learning, will have great libraries for Python.
A lot of hacker command-line tools are written in Python. Since such tools are often buggy and poorly documented, you’ll end up having to read the code a lot to figure out what is going wrong. Learning to program in Python means being able to contribute to those tools.
I personally hate the language because of the schism between v2/v3, and having to constantly struggle with that. Every language has a problem with evolution and backwards compatibility, but this v2 vs v3 issue with Python seems particularly troublesome.
Also, Python is slow. That shouldn’t matter in this age of JITs everywhere and things like Webassembly, but somehow whenever you have an annoyingly slow tool, it’s Python that’s at fault.
Note that whenever I read reviews of programming languages, I see praise for Python’s syntax. This is nonsense. After a short while, the syntax of all programming languages becomes quirky and weird. Most languages these days are multi-paradigm: a combination of imperative, object-oriented, and functional. Almost all are JITted. “Syntax” is the least important reason to choose a language. Instead, it’s the choice of support/libraries (which are great for Python), or specific features like tight “systems” memory control (like Rust) or scalable coroutines (like Go). Seriously, stop praising the “elegant” and “simple” syntax of languages.
Regexes
Like SQL for database queries, regular expressions aren’t a programming language as such, but they’re still a language you need to learn. They are patterns that match data. For example, if you want to find all Social Security numbers in a text file, you look for that pattern of digits and dashes. Such pattern matching is so common that it’s built into most tools, and is a feature of most scripting languages.
One thing to remember from an infosec point of view is that regexes are highly insecure. Hackers craft content to incorrectly match patterns, evade patterns, or cause “algorithmic complexity” attacks that make simple regexes explode with excessive computation.
You have to learn enough regex to be familiar with the basics, but the syntax can get unreasonably complex, so few master the full regex syntax.
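The Social Security number example above is a one-liner in most scripting languages. A minimal Python sketch (the pattern is deliberately simplistic and, as noted, trivially evaded):

```python
import re

# Simplistic SSN-like pattern: three digits, dash, two digits, dash, four.
ssn = re.compile(r"\b\d{3}-\d{2}-\d{4}\b")

text = "Employee 123-45-6789 and contractor 987-65-4321, order #12-34."
print(ssn.findall(text))            # ['123-45-6789', '987-65-4321']

# Evasion is trivial: swap spaces for the dashes and nothing matches.
print(ssn.findall("123 45 6789"))   # []
```

The second call is the infosec lesson: an attacker who controls the input controls whether the pattern fires.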
Ruby
Ruby is a great language for writing web apps; it makes security easier than with PHP, though like all web app languages it still has some issues.
In infosec, the major reason to learn Ruby is Metasploit.
Rust
Rust is Mozilla’s replacement language for C and especially C++. It supports tight control over memory structures for “systems” programming, but is memory safe, so it doesn’t have all those vulnerabilities. One of these days I’ll stop programming in C and use Rust instead.
SQL
SQL, “structured query language”, isn’t a programming language as such, but it’s still a language of some sort. It’s something that you unavoidably have to learn.
One of the reasons to learn a programming language is to process data. You can do that within a programming language, but an alternative is to shove the data into a database then write queries off that database. I have a server at home just for that purpose, with large disks and multicore processors. Instead of storing things as files, and writing scripts to process those files, I stick it in tables, and write SQL queries off those tables.
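A minimal sketch of that workflow, using Python’s built-in sqlite3 module (the table and column names here are made up for illustration):

```python
import sqlite3

# Shove the data into a table, then answer questions with SQL queries
# instead of writing one-off scripts over flat files.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE scans (ip TEXT, port INTEGER)")
db.executemany("INSERT INTO scans VALUES (?, ?)",
               [("10.0.0.1", 3389), ("10.0.0.2", 22), ("10.0.0.3", 3389)])

# How many scanned hosts expose Remote Desktop (port 3389)?
count, = db.execute("SELECT COUNT(*) FROM scans WHERE port = 3389").fetchone()
print(count)                        # 2
```

The payoff is that each new question is one more query, not one more script.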
Swift (and Objective-C)
Back in the day, when computers were new, before C++ became the “object oriented” standard, there was a competing object-oriented version of C known as “Objective-C”. Because, as everyone knew, object-oriented was the future, NeXT adopted it as their application programming language. Apple bought NeXT, and thus it became Apple’s programming language.
But Objective-C lost the object-oriented war to C++ and became an orphaned language. It was also really stupid: essentially two separate language syntaxes fighting for control of your code.
Therefore, a few years ago, Apple created a replacement called Swift, which is largely based on a variant of Rust. Like Rust, it’s an excellent “systems” programming language that has more manual control over memory allocation, but without all the buffer-overflows and memory leaks you see in C.
It’s an excellent language, and great when programming in an Apple environment. However, when choosing a “language” that’s not particularly Apple focused, just choose Rust instead.
However, there’s no One Language to Rule Them All. There are good reasons to learn most languages on this list. For some tasks, the support for a certain language is so good that it’s best to learn that language to solve that task. With the academic focus on Python, you’ll find well-written libraries that solve important tasks for you. If you want to work with a language that other people know, one you can ask questions about, then Python is a great choice.
The exceptions to this are C++ and PHP. They are so obsolete that you should avoid learning them, unless you plan on dealing with legacy.
Post Syndicated from Robert Graham original https://blog.erratasec.com/2019/04/was-it-chinese-spy-or-confused-tourist.html
Politico has an article from a former spy analyzing whether the “spy” they caught at Mar-a-Lago (Trump’s Florida vacation spot) was actually a “spy”. I thought I’d add to it from a technical perspective regarding her malware, USB drives, phones, cash, and so on.
The part that has gotten the most press is that she had a USB drive with evil malware. We’ve belittled the Secret Service agents who infected themselves, and we’ve used this as the most important reason to suspect she was a spy.
But it’s nonsense.
It could be something significant, but we can’t know that based on the details that have been reported. What the Secret Service reported was that it “started installing software”. That’s a symptom of a USB device installing drivers, not malware. Common USB devices, such as WiFi adapters, Bluetooth adapters, microSD readers, and 2FA keys look identical to flash drives, and when inserted into a computer, cause Windows to install drivers.
Visibly “installing files” is not a symptom of malware. When malware does its job right, there are no symptoms: it installs invisibly in the background. That’s the entire point of malware, that you don’t know it’s there. This is not to say there would be no visible evidence. A popular way of hacking desktops with USB drives is by emulating a keyboard/mouse that quickly types commands, which will cause some visual artifacts on the screen. It’s just that “installing files” does not make malware the most likely explanation.
That it was “malware” instead of something normal is just the standard trope that anything unexplained is proof of hackers/viruses. We have no evidence it was actually malware, and the evidence we do have suggests something other than malware.
Lots of travelers carry wads of cash. I carry ten $100 bills with me, hidden in my luggage, for emergencies. I’ve been caught before when the credit card company fraud detection triggers in a foreign country leaving me with nothing. It’s very distressing, hence cash.
The Politico story mentioned the “spy” also has a U.S. bank account, and thus cash wasn’t needed. Well, I carry that cash for domestic travel, too; it’s not just for international travel. In any case, the U.S. may have been just one stop on a multi-country itinerary. I’ve taken several “round the world” trips where I’ve just flown one direction, such as east, before getting back home. $8k is in the range of cash that such travelers carry.
The same is true of phones and SIMs. Different countries have different frequencies and technologies. In the past, I’ve traveled with as many as three phones (US, Japan, Europe). It’s gotten better with modern 4G phones, where my iPhone Xs should work everywhere. (Though it’s likely going to diverge again with 5G, as the U.S. goes on a different path from the rest of the world.)
The same is true with SIMs. In the past, you pretty much needed a different SIM for each country. Arrival in the airport meant going to the kiosk to get a SIM for $10. At the end of a long itinerary, I’d arrive home with several SIMs. These days, however, with so many “MVNOs”, such as Google Fi, this is radically less necessary. However, the fact that the latest high-end phones all support dual-SIMs proves it’s still an issue.
Thus, the evidence so far is that of a normal traveler. If these SIMs/phones are indeed because of spying, we would need additional evidence. A quick analysis of the accounts associated with the SIMs and of the contents of the phones should tell us if she’s a traveler or a spy.
Again we are missing salient details. In the old days, such detectors were analog devices, because secret spy cameras were analog. These days, new equipment is almost always WiFi based. You’d detect more by running software on your laptop that looks for the MAC addresses of camera makers than you would with those older analog detectors. Or, there are tricks that look for light glinting off lenses.
Thus, the “hidden camera detector” sounds to me more like a paranoid traveler than a spy.
One of the frequently discussed things is her English language skills. As the Politico story above notes, her “constant lies” can be explained by difficulties speaking English. In other stories, the agents claim that she both understood and spoke English well.
Both can be true. The ability to speak foreign languages isn’t binary, on or off. I speak French and German at this middle skill level. In some cases, I can hold a conversation with apparent fluency, while in other cases I’m at a complete loss.
One issue is that comprehension varies wildly between speakers. I can understand French news broadcasts with little difficulty, with nearly 100% comprehension. On the other hand, watching non-news French TV, like sitcoms, my comprehension goes to near 0%. The same is true of individuals: I may understand nearly everything one person says while understanding nearly nothing another person says.
99% comprehension is still far from 100%. I frequently understand large sections except for one essential key word. Listening to French news, I may understand everything in a story about some event in another country, but miss the country’s name at the start. Yes, I know there were storms, mudslides, floods, 100,000 without power, 300 deaths; I just haven’t a clue where in the world that happened.
Diplomats around the world recognize this. They often speak English well, use English daily, and yet in formal functions they still use translators, because there’s always a little bit they won’t understand.
Thus, we know any claim by the Secret Service that her language skills were adequate is false.
So in conclusion, we don’t see evidence pointing to a spy. Instead, we see a careful curation of evidence by the Secret Service and reporters to push the spying story. We haven’t seen any reporter ask what, other than malware, could cause a USB device to start installing software. She may be a spy, of course, but so far, there’s no evidence of anything other than a confused/crazy tourist.
Post Syndicated from Robert Graham original https://blog.erratasec.com/2019/04/assange-indicted-for-breaking-password.html
In today’s news, after 9 years holed up in the Ecuadorian embassy, Julian Assange has finally been arrested. The US DoJ accuses Assange of trying to break a password. I thought I’d write up a technical explainer of what this means.
According to the US DoJ’s press release:
Julian P. Assange, 47, the founder of WikiLeaks, was arrested today in the United Kingdom pursuant to the U.S./UK Extradition Treaty, in connection with a federal charge of conspiracy to commit computer intrusion for agreeing to break a password to a classified U.S. government computer.
It seems the indictment is based on already public information that came out during Manning’s trial, namely this log of chats between Assange and Manning, specifically this section where Assange appears to agree to break a password:
What this says is that Manning hacked a DoD computer and found the hash “80c11049faebf441d524fb3c4cd5351c” and asked Assange to crack it. Assange appears to agree.
So what is a “hash”, what can Assange do with it, and how did Manning grab it?
Computers store passwords in an encrypted (sic) form called a “one way hash”. Since it’s “one way”, it can never be decrypted. However, each time you log into a computer, it again performs the one way hash on what you typed in, and compares it with the stored version to see if they match. Thus, a computer can verify you’ve entered the right password, without knowing the password itself, or storing it in a form hackers can easily grab. Hackers can only steal the encrypted form, the hash.
When they get the hash, while it can’t be decrypted, hackers can keep guessing passwords, performing the one way algorithm on them, and seeing if they match. With an average desktop computer, they can test a billion guesses per second. This may seem like a lot, but if you’ve chosen a sufficiently long and complex password (more than 12 characters with letters, numbers, and punctuation), then hackers can’t guess it.
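To make this concrete, here’s a toy sketch in Python of both halves of the process: the one-way hash, and brute-force guessing against a stolen hash. It uses SHA-256 rather than the actual Windows hash formats discussed below, and a tiny lowercase-only search space, but the principle is the same.

```python
import hashlib
import string
from itertools import product

def hash_password(pw):
    # One-way: easy to compute, infeasible to reverse directly.
    # (Windows uses LM/NTLM hashes, not SHA-256; this is just a sketch.)
    return hashlib.sha256(pw.encode()).hexdigest()

# The system stores only the hash, never the password itself.
stolen_hash = hash_password("cab")

def brute_force(target, max_len=3):
    # Guess every lowercase string up to max_len, hash each guess,
    # and check whether it matches the stolen hash.
    for length in range(1, max_len + 1):
        for combo in product(string.ascii_lowercase, repeat=length):
            guess = "".join(combo)
            if hash_password(guess) == target:
                return guess
    return None

print(brute_force(stolen_hash))  # recovers "cab" after ~2,000 guesses
```

A real cracker does exactly this loop, just with a far bigger character set and a GPU doing the hashing.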
It’s unclear what format this password is in, whether the old “LM” format or the newer “NTLM”. Using my notebook computer, I could attempt to crack the LM format using the hashcat password cracker with the following command:
hashcat -m 3000 -a 3 80c11049faebf441d524fb3c4cd5351c ?a?a?a?a?a?a?a
As this image shows, it’ll take about 22 hours on my laptop to crack this. However, this doesn’t succeed, so it seems that this isn’t in the LM format. Unlike other password formats, LM splits passwords into halves of at most 7 characters, so we can completely crack each half.
Instead of brute-force trying all possible combinations of characters each time we have a new password, we could do the huge calculation just once and save all the “password -> hash” combinations to a disk drive. Then, each time we get a new hash from hacking a computer, we can just do a simple lookup. However, this won’t work in practice, because the number of combinations is just too large — even if we used all the disk drives in the world to store the results, it still wouldn’t be enough.
But there’s a neat trick called “Rainbow Tables” that does a little bit of both, using both storage and computation. If cracking a password requires 64 bits of work, you can instead use 32 bits of storage (storing 4 billion data points) and do 32 bits of computation (computing 4 billion password hashes). In other words, while 64 bits of work is prohibitively difficult, 32 bits each of storage and computation means it’ll take up a few gigabytes of space and require only a few seconds of computation — an easy problem to solve.
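To illustrate the storage end of that tradeoff, here’s a toy Python sketch. A real rainbow table stores chains of hashes rather than every password/hash pair, which is what buys the computation/storage split described above; this full lookup table is just the all-storage extreme, over a deliberately tiny 2-letter space.

```python
import hashlib
import string
from itertools import product

def h(pw):
    return hashlib.sha256(pw.encode()).hexdigest()

# Precompute once: every 2-letter lowercase password and its hash.
# A real rainbow table would store only chain endpoints, then redo
# part of the computation at lookup time to save storage.
table = {}
for combo in product(string.ascii_lowercase, repeat=2):
    pw = "".join(combo)
    table[h(pw)] = pw

# Afterward, "cracking" any stolen hash in this space is one lookup.
stolen = h("qz")
print(table[stolen])  # → qz
```

The precomputation cost is paid once; every hash stolen afterward is nearly free to crack.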
That’s what Assange promises, that they have the Rainbow Tables and expertise needed to crack the password.
However, even then, the Rainbow Tables aren’t complete. While the LM algorithm limits password halves to 7 characters, NTLM has no real limit. Building the tables in the first place takes a lot of work. As far as I know, we don’t have NTLM Rainbow Tables for passwords larger than 9 complex characters (upper, lower, digits, punctuation, etc.).
I don’t know the password requirements that were in effect back in 2010, but there’s a good chance it was on the order of 12 characters including digits and punctuation. Therefore, Rainbow Cracking wouldn’t have been possible.
If we can’t brute-force all combinations of a 12 character password, or use Rainbow Tables, how can we crack it? The answer would be “dictionary attacks”. Over the years, we’ve acquired real-world examples of over a billion passwords people have used in real accounts. We can simply try all those, regardless of length. We can also “mutate” this dictionary, such as adding numbers on the end. This requires testing trillions of combinations, but with hardware that can try a billion combinations per second, it’s not too onerous.
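A minimal sketch of that idea, with a four-word stand-in for the billion-entry real-world lists (the wordlist and mutation rules here are made up for illustration):

```python
import hashlib

def h(pw):
    return hashlib.sha256(pw.encode()).hexdigest()

# Stand-in for a leaked-password list with a billion entries.
wordlist = ["password", "letmein", "dragon", "monkey"]

def mutations(word):
    # Common "rules": try the word itself, capitalized, and with
    # typical suffixes people append (digits, years, punctuation).
    yield word
    yield word.capitalize()
    for suffix in ("1", "123", "2010", "!"):
        yield word + suffix

def dictionary_attack(target):
    # Hash every mutated dictionary word and compare to the target.
    for word in wordlist:
        for guess in mutations(word):
            if h(guess) == target:
                return guess
    return None

print(dictionary_attack(h("letmein2010")))  # → letmein2010
print(dictionary_attack(h("xkR$9!qwv")))    # → None (random, not in any list)
```

Tools like hashcat apply thousands of such mutation rules to huge wordlists, which is why a long password built from a dictionary word plus a year is much weaker than its length suggests.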
But there’s still a limit to how effective we can be at password cracking. As I explain in other posts, the problem is exponential. Each additional character increases the difficulty by around 100 times. In other words, if you can brute-force all combinations of a password of a certain length in a week, then adding one character to the length means it’ll now take 100 weeks, or two years. That’s why even nation state spies, like the NSA, with billions of dollars of hardware, may not be able to crack this password.
[Image: LinkedIn passwords, how long it takes a laptop or nation state to crack]
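You can do the arithmetic yourself. Assuming roughly 100 printable choices per character and a rig that tests a billion guesses per second (both round figures, not measurements):

```python
# Roughly 100 printable choices per character; each extra character
# multiplies the total work by ~100.
CHARSET = 100
GUESSES_PER_SEC = 1_000_000_000  # ballpark for a desktop GPU
SECONDS_PER_YEAR = 365 * 24 * 3600

for length in range(8, 13):
    seconds = CHARSET ** length / GUESSES_PER_SEC
    print(f"{length} chars: {seconds / SECONDS_PER_YEAR:,.1f} years")
```

Eight characters fall in months; twelve characters come out to tens of millions of years, which is the exponential wall described above.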
Now let’s tackle the question of how Manning got the hash in the first place. It appears the issue is that Manning wanted to logon as a different user, hiding her tracks. She therefore wanted to grab the other person’s password hash, crack the password, then use it to logon, with all her nefarious activities now associated with the wrong user.
She can’t simply access the other user account. That’s what operating systems do, prevent you from accessing other parts of the disk that don’t belong to you.
To get around this, she booted the computer with a different operating system from a CD drive, with some sort of Linux distro. From that operating system, she had full access to the drive. As the chatlog reveals, she did the standard thing that all hackers do, copy over the SAM file, then dump the hashes from it. Here is an explanation from 2010 that roughly describes exactly what she did.
The term “Linux” was trending today on Twitter, with people upset by the way the indictment seemed to disparage it as some sort of evil cybercrime tool, but I don’t read it that way. The evil cybercrime act the indictment refers to is booting another operating system from a CD. It no more disparages Linux than it disparages CDs. It’s booting an alternate operating system and stealing the SAM file that demonstrates criminality, not CDs or Linux.
Note that stealing another account’s password apparently wasn’t about being able to steal more documents. This can become an important factor later on when appealing the case.
The documents weren’t on the computer, but on the network. Thus, while booting Linux from a CD would allow full access to all the documents on the local desktop computer, it still wouldn’t allow access to the server.
Apparently, it was just another analyst’s account Manning was trying to hijack, who had no more permissions on the network than she did. Thus, she wouldn’t have been accessing any files she wasn’t already authorized to access.
Therefore, as CFAA/4thA expert Orin Kerr tweets, there may not have been a CFAA violation:
Second, it’s based on a relatively aggressive (and somewhat controversial) view of the Computer Fraud and Abuse Act — that accessing files in violation of an order on classified materials is an unauthorized access.
— Orin Kerr (@OrinKerr) April 11, 2019
Post Syndicated from Robert Graham original https://blog.erratasec.com/2019/03/some-notes-on-raspberry-pi.html
I keep seeing this article in my timeline today about the Raspberry Pi. I thought I’d write up some notes about it.
The Raspberry Pi costs $35 for the board, but to achieve a fully functional system, you’ll need to add a power supply, storage, and heatsink, which ends up costing around $70 for the full system. At that price, there are lots of alternatives. For example, you can get a fully functional $99 Windows x86 PC that’s just as small and consumes less electrical power.
There are a ton of Raspberry Pi competitors, often cheaper with better hardware, such as the Odroid-C2, Rock64, Nano Pi, Orange Pi, and so on. There are also a bunch of “Android TV boxes” running roughly the same hardware for cheaper prices, that you can wipe and reinstall Linux on. You can also acquire Android phones for $40.
However, while “better” technically, the alternatives all suffer from the fact that the Raspberry Pi is better supported — vastly better supported. The ecosystem of ARM products focuses on getting Android to work, and does poorly at getting generic Linux working. The Raspberry Pi has the worst, most out-of-date hardware, of any of its competitors, but I’m not sure I can wholly recommend any competitor, as they simply don’t have the level of support the Raspberry Pi does.
The defining feature of the Raspberry Pi isn’t that it’s a small/cheap computer, but that it’s a computer with a bunch of GPIO pins. When you look at the board, it doesn’t just have the recognizable HDMI, Ethernet, and USB connectors, but also has 40 raw pins strung out across the top of the board. There’s also a couple extra connectors for cameras.
The concept wasn’t simply that of a generic computer, but a maker device, for robot servos, temperature and weather measurements, cameras for a telescope, controlling Christmas light displays, and so on.
I think this is underemphasized in the above story. The reason it finds use in the factories is because they have the same sorts of needs for controlling things that maker kids do. A lot of industrial needs can be satisfied by a teenager buying $50 of hardware off Adafruit and writing a few Python scripts.
On the other hand, support for industrial uses is nearly non-existent. The reason commercial products cost $1000 is because somebody will answer your phone call, unlike the teenager who’s currently out at the movies with their friends. However, with more and more people having experience with the Raspberry Pi, presumably you’ll be able to hire generic consultants soon that can maintain these juryrigged solutions.
One thing that’s interesting is how much that 40 pin GPIO interface has become a standard. There are a ton of competing devices that support that same standard, even with Intel x86 Windows computers. The Raspberry Pi foundation has three boards that support this standard, the RPi Zero, the Model A, and the Model B. Competitors have both smaller, more efficient boards to choose from, as well as larger, more powerful boards. But as I said, nothing is as well supported as Raspberry Pi boards themselves.
Raspberry Pi class machines are overpowered for a lot of maker projects. There are competing systems, like the Arduino, ESP32, and Micro:Bit. As a hacker, I love the ESP32 class devices. They come with a full WiFi stack and can be placed anywhere.
If you are buying a Raspberry Pi, I recommend Adafruit. Not only do they have the devices cheap ($35), they’ll have a lot of support for maker hardware that you may want to add to the device.
After buying the board, you have to choose the accessories to get it working.
Your first choice will be a power supply. You’ll be tempted to use the USB chargers and cables you have lying around the house, and it’ll appear to work at first, but will cause CPU throttling problems and file corruption. You need to get either the $8 “official” power supply, or one of those fast charging devices, like those from Anker. Remember that it’s not just a matter of the power supply providing enough current/amps, but also cables with 20 AWG wires that can handle the current.
Your next choice will be the flash drive for booting the computer. One choice is micro SD cards. You should choose cards with the “A1” rating, which are faster at random file access. Most other microSD cards are optimized for large sequential transfers, and are painfully slow at random accesses. If you write a lot of data to the device, you may need to get a card rated for “endurance” instead — micro SD cards wear out quickly.
Or, you may consider a real SSD connected to the USB port. You can get a $20 120-gig SSD and an $8 USB-to-SATA adapter. This will perform much faster, and not have the data corruption issues that micro SD cards have. You need an independent power supply for the drive, as it can’t be powered wholly from the USB port.
Your next decision will be a heatsink. The Raspberry Pi generates a lot of heat at full load. People assume ARM is efficient, but it’s not, and the Broadcom ARM CPU used by the RPi is very bad. Unless you have a heatsink, instead of running at 1.4-GHz, it’ll spend most of its time throttled back to 600-MHz. Because of their size, your choice of heatsink and fan depends upon your choice of case. There are some nice aluminum cases that act as a heatsink. You can also get combo kits on Amazon.com for $15 that include the case, heatsink, and fan together.
If looking at a competing device (e.g. Odroid-C2, Rock64), get one that supports eMMC. It’s much faster and more reliable than micro SD cards. For home server applications, it’s worth getting a lesser supported platform in order to get eMMC. It makes a huge difference. I stopped using Raspberry Pi’s for home server applications and went with Odroid-C2 machines instead, mostly because of the eMMC, but also because they have more RAM and faster Ethernet. I may switch to the Rock64 device in the future because of its support for USB 3.0. I have one on-order, but it’s taking (so far) more than a month to arrive.
As for the ARM ecosystem, there seems to be a lot of misunderstanding about “power efficiency”. People keep claiming they are more efficient. They aren’t. They consume less power by being slower. Scaled to the same performance, ARM CPUs use the same amount of power as Intel CPUs. Now that ARM has more powerful CPUs close to Intel in speed, and Intel now has their low speed “Atom” processors, we see that indeed they have roughly the same efficiency. The Raspberry Pi’s Broadcom CPU is extremely inefficient. It uses the decade old 40nm manufacturing process, which means it consumes a lot of power. Intel’s latest Atom processors built on 22nm or 14nm technology consume a lot less power. There are things that impact efficiency, but the least important of which is whether it’s ARM or Intel x86, or RISC vs. CISC.
For hackers, there’s a lot you can do with a Raspberry Pi (or competitor). We are surrounded by things that we can hack. For example, you can use it to hack the CEC feature of HDMI to control your TV. You can attach a cheap RTL-SDR device and monitor radio frequencies. You can connect it to the CAN bus of your car. You can connect it to your ZigBee devices in your home and control your lights. If there’s a wire or radio wave around you, it’s something you can start hacking with the RPi.
I feel the above article does the subject a disservice. It’s less “industrial IoT” and more “crossover between maker culture and industry”.
Every geek should get a Raspberry Pi and play with it, even if it’s only as simple as an Owncloud/Nextcloud backup server sitting in a closet. Don’t skimp on the power supply; people who do get frustrated. You need a charger rated for at least 2.4 amps and a charging cable with thicker 20 AWG wires. If going the micro SD route, choose “A1” or “endurance” rated cards. Or consider going the USB SSD route instead.
Post Syndicated from Robert Graham original https://blog.erratasec.com/2019/03/a-quick-lesson-in-confirmation-bias.html
In my experience, hacking investigations are driven by ignorance and confirmation bias. We regularly see things we cannot explain. We respond by coming up with a story where our pet theory explains it. Since there is no alternative explanation, this then becomes evidence of our theory, where this otherwise inexplicable thing becomes proof.
For example, take that “Trump-AlfaBank” theory. One of the oddities noted by researchers was lookups for “trump-email.com.moscow.alfaintra.net”. One of the conspiracy theorists explained this as proof of human error: somebody “fat fingered” the wrong name when typing it in, thus proving humans were involved in trying to communicate between the two entities, as opposed to simple automated systems.
But that’s because this “expert” doesn’t know how DNS works. Your computer is configured to automatically append local suffixes to names, so that you only have to look up “2ndfloorprinter” instead of a full name like “2ndfloorprinter.engineering.example.com”.
When looking up a DNS name, your computer may try the name both with and without the suffix. Thus, sometimes your computer looks up “www.google.com.engineering.example.com” when it wants simply “www.google.com”.
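You can see the mechanics with a few lines of Python. This is a simplified model of the resolver’s search-list behavior (real resolvers have extra rules, like the “ndots” threshold in /etc/resolv.conf, for deciding which candidate to try first):

```python
def candidate_names(name, search_suffixes):
    # A resolver tries the name as typed, and also with each
    # configured search suffix appended to it.
    candidates = [name]
    for suffix in search_suffixes:
        candidates.append(f"{name}.{suffix}")
    return candidates

# Alfabank's Moscow machines apparently had this search suffix:
suffixes = ["moscow.alfaintra.net"]
print(candidate_names("trump-email.com", suffixes))
# → ['trump-email.com', 'trump-email.com.moscow.alfaintra.net']
```

So any machine configured with that suffix that looks up “trump-email.com” will, some of the time, also emit the suffixed lookup — no human typo required.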
Apparently, Alfabank configures its Moscow computers to have a suffix “moscow.alfaintra.net”. That means any DNS name that gets resolved will sometimes get this appended, so we’ll sometimes see “www.google.com.moscow.alfaintra.net”.
Since we already know there were lookups from that organization for “trump-email.com”, the fact that we also see “trump-email.com.moscow.alfaintra.net” tells us nothing new.
In other words, the conspiracy theorists didn’t understand it, so came up with their own explanation, and this confirmed their biases. In fact, there is a simpler explanation that neither confirms nor refutes anything.
The reason for the DNS lookups for “trump-email.com” is still unexplained. Maybe they are because of something nefarious. The Trump organizations had all sorts of questionable relationships with Russian banks, so such a relationship wouldn’t be surprising. But here’s the thing: just because we can’t explain them doesn’t make them proof of a Trump-Alfabank conspiracy. Until we know why those lookups were generated, they are an “unknown” and not “evidence”.
The reason I write this post is because of this story about a student expelled due to “grade hacking”. It sounds like this sort of situation, where the IT department saw anomalies it couldn’t explain, so the anomalies became proof of the theory they’d created to explain them.
Unexplained phenomena are unexplained. They are not evidence confirming your theory that explains them.
Post Syndicated from Robert Graham original https://blog.erratasec.com/2019/02/a-basic-question-about-tcp.html
So on Twitter, somebody asked this question:
I have a very basic computer networking question: when sending a TCP packet, is the packet ACK’ed at every node in the route between the sender and the recipient, or just by the final recipient?
Remember that the telephone network was already a cyberspace before the Internet came around. It allowed anybody to create a connection to anybody else. Most circuits/connections were 56 kilobits per second. Using the “T” system, these could be aggregated into faster circuits/connections. The “T1” line of 1.544 mbps was an important standard back in the day.
In the phone system, when a connection is established, resources must be allocated in every switch along the path between the source and destination. When the phone system is overloaded, such as when you call loved ones when there’s been an earthquake/tornado in their area, you’ll sometimes get a message “No circuits are available”. Due to congestion, it can’t reserve the necessary resources in one of the switches along the route, so the call can’t be established.
“Congestion” is important. Keep that in mind. We’ll get to it a bit further down.
The idea that each router needs to ACK a TCP packet means that the router needs to know about the TCP connection, and to reserve resources for it.
This was actually the original design of the OSI Network Layer.
Let’s rewind a bit and discuss “OSI”. Back in the 1970s, the major computer companies of the time all had their own proprietary network stacks. IBM computers couldn’t talk to DEC computers, and neither could talk to Xerox computers. They all worked differently. The need for a standard protocol stack was obvious.
To do this, the “Open Systems Interconnect” or “OSI” group was established under the auspices of the ISO, the international standards organization.
The first thing the OSI did was create a model for how protocol stacks would work. That’s because different parts of the stack need to be independent from each other.
For example, consider the local/physical link between two nodes, such as between your computer and the local router, or your router to the next router. You use Ethernet or WiFi to talk to your router. You may use 802.11n WiFi in the 2.4GHz band, or 802.11ac in the 5GHz band. However you do this, it doesn’t matter as far as the TCP/IP packets are concerned. This is just between you and your router, and all the information is stripped out of the packets before they are forwarded to across the Internet.
Likewise, your ISP may use cable modems (DOCSIS) to connect your router to their routers, or they may use xDSL. This information is likewise is stripped off before packets go further into the Internet. When your packets reach the other end, like at Google’s servers, they contain no traces of this.
There are 7 layers to the OSI model. The one we are most interested in is layer 3, the “Network Layer”. This is the layer at which IPv4 and IPv6 operate. TCP will be layer 4, the “Transport Layer”.
The original idea for the network layer was that it would be connection oriented, modeled after the phone system. The phone system was already offering such a service, called X.25, which the OSI model was built around. X.25 was important in the pre-Internet era for creating long-distance computer connections, allowing cheaper connections than renting a full T1 circuit from the phone company. Normal telephone circuits are designed for a continuous flow of data, whereas computer communication is bursty. X.25 was especially popular for terminals, because it only needed to send packets from the terminal when users were typing.
Layer 3 also included the possibility of a connectionless network protocol, like IPv4 and IPv6, but it was assumed that connection oriented protocols would be more popular, because that’s how the phone system worked, which meant that was just how things were done.
The designers of the early Internet, like Bob Kahn (pbuh) and Vint Cerf (pbuh), debated this. They looked at Cyclades, a French network, which had a philosophical point of view called the end-to-end principle, by which I mean the End-To-End Principle. This principle distinguishes the Internet from the older phone system. The Internet is an independent network from the phone system, rather than an extension of the phone system like X.25.
The phone system was defined as a smart network with dumb terminals. Your home phone was a simple circuit with a few resistors, a speaker, and a microphone. It had no intelligence. All the intelligence was within the network. Unix was developed at Bell Labs in the 1970s and ran on the computers controlling phone switches, because it was the switches inside the network that were intelligent, not the terminals on the end. That you are now using Unix in your iPhone is the opposite of what they intended.
Even mainframe computing was designed this way. Terminals were dumb devices with just enough power to display text. All the smart processing of databases happened in huge rooms containing the mainframe.
The end-to-end principle changes this. It instead puts all the intelligence on the ends of the network, with smart terminals and smart phones. It dumbs down the switches/routers to their minimum functionality, which is to route packets individually with no knowledge of what connection they might be a part of. A router receives a packet on a link, looks at its destination IP address, and forwards it out the appropriate link in the necessary direction. Whether the packet eventually reaches its destination is of no concern to the router.
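That per-packet decision can be sketched in a few lines of Python using longest-prefix matching (the routes and interface names here are made up for illustration):

```python
import ipaddress

# A toy forwarding table: prefix -> outgoing link.
routes = {
    ipaddress.ip_network("0.0.0.0/0"): "uplink",    # default route
    ipaddress.ip_network("10.0.0.0/8"): "eth1",
    ipaddress.ip_network("10.1.2.0/24"): "eth2",
}

def forward(dst):
    # Pick the matching entry with the longest prefix; the router
    # neither knows nor cares what connection the packet belongs to.
    addr = ipaddress.ip_address(dst)
    best = max((net for net in routes if addr in net),
               key=lambda net: net.prefixlen)
    return routes[best]

print(forward("10.1.2.99"))  # → eth2 (most specific match wins)
print(forward("10.9.9.9"))   # → eth1
print(forward("8.8.8.8"))    # → uplink
```

Note there is no per-connection state anywhere in the table: every packet is forwarded on its own, which is exactly the dumb-network half of the end-to-end principle.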
In the view of the telephone network, new applications meant upgrading the telephone switches and providing the user a dumb terminal. Movies of the time, like 2001: A Space Odyssey and Blade Runner, would show video phone calls offered by AT&T, with the Bell logo. That’s because such applications were always something that the phone company would provide in the future.
With the end-to-end principle, the phone company simply routes the packets, and the apps are something the user chooses separately. You make video phone calls today, but you use FaceTime, Skype, WhatsApp, Signal, and so on. My wireless carrier is AT&T, but it’s absurd to think I would ever make a video phone call using an app provided to me by AT&T, as I was shown in the sci-fi movies of my youth.
So now let’s talk about congestion or other errors that cause packets to be lost.
It seems obvious that the best way to deal with lost packets is at the point where it happens, to retransmit packets locally instead of all the way from the remote ends of the network.
This turns out not to be the case. Consider streaming video from Netflix. When congestion happens, Netflix wants to change the encoding of the video to a lower bit rate. You see this when watching Netflix during prime time (6pm to 11pm), when videos are of poorer quality than during other times of the day: it’s streaming them at a lower bit rate because the system is overloaded.
If routers try to handle dropped packets locally, then they give limited feedback about the congestion. It would require some sort of complex signaling back to the ends of the network informing them about congestion in the middle.
With the end-to-end principle, when congestion happens, when a router can’t forward a packet, it silently drops it, performing no other processing or signaling about the event. It’s up to the ends to notice this. The sender doesn’t receive an ACK, and after a certain period of time, resends the data. This in turn allows the app to discover congestion is happening, and to change its behavior accordingly, such as lowering the bitrate at which its sending video.
Consider what happens with a large file download, such as your latest iOS update, which can be a gigabyte in size. How fast can the download happen?
Well, with TCP, it uses what’s known as the slow start algorithm. It starts downloading the file slowly, but keeps increasing the speed of transmission until a packet is dropped, at which point it backs off.
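A toy simulation shows the resulting sawtooth shape. Real TCP switches from doubling to linear growth past a threshold (“congestion avoidance”), but the probe-until-loss, back-off-on-loss cycle looks like this:

```python
def simulate(capacity_pkts, rounds):
    # capacity_pkts: how many packets in flight the path can carry
    # before a router starts dropping them.
    cwnd, history = 1, []
    for _ in range(rounds):
        history.append(cwnd)
        if cwnd > capacity_pkts:
            cwnd = max(1, cwnd // 2)  # loss detected: back off
        else:
            cwnd *= 2                 # no loss: keep probing faster
    return history

print(simulate(capacity_pkts=32, rounds=10))
# → [1, 2, 4, 8, 16, 32, 64, 32, 64, 32]
```

The sender never asks the routers how much capacity exists; it discovers the limit purely from which packets get dropped, which is the end-to-end principle at work.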
You can see this behavior when visiting a website like speedtest.net. You see it slowly increase the speed until it reaches its maximum level. This isn’t a property of the SpeedTest app, but a property of how TCP works.
TCP also tracks the round trip time (RTT), the time it takes for a packet to be acknowledged. If the two ends are close, RTT should be small, and the amount of time waiting to resend a lost packet should be shorter, which means it can respond to congestion faster, and more carefully tune the proper transmit rate.
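The estimator real stacks use looks roughly like this (constants per RFC 6298, where RTO = SRTT + 4·RTTVAR; a sketch, not a stack implementation):

```python
def rto_estimator(samples, alpha=0.125, beta=0.25):
    """Smoothed RTT in the style of RFC 6298: an exponentially weighted
    average (srtt) plus a variance term (rttvar); the retransmission
    timeout is srtt + 4*rttvar."""
    srtt = rttvar = None
    for r in samples:
        if srtt is None:
            srtt, rttvar = r, r / 2          # first measurement
        else:
            rttvar = (1 - beta) * rttvar + beta * abs(srtt - r)
            srtt = (1 - alpha) * srtt + alpha * r
    return srtt, srtt + 4 * rttvar

srtt, rto = rto_estimator([100, 120, 110, 105])  # milliseconds
print(round(srtt, 1), round(rto, 1))
```

With a small, stable RTT, the timeout stays tight and lost packets are resent quickly.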
This is why buffer bloat is a problem. When a router gets overloaded, instead of dropping a packet immediately, it can instead decide to buffer the packet for a while. If the congestion is transitory, then it’ll be able to send the packet a tiny bit later. Only if the congestion endures, and the buffer fills up, will it start dropping packets.
This sounds like a good idea, to improve reliability, but it messes up TCP’s end-to-end behavior. It can no longer reliably measure RTT, and it can no longer detect congestion quickly and back off on how fast it’s transmitting, causing congestion problems to be worse. It means that buffering in the router doesn’t work, because when congestion happens, instead of backing off quickly, TCP stacks on the ends will continue to transmit at the wrong speed, filling the buffer. In many situations, buffering increases dropped packets instead of decreasing them.
Thus, the idea of trying to fix congestion in routers by adding buffers is a bad idea.
Routers will still do a little bit of buffering. Even on lightly loaded networks, two packets will sometimes arrive at precisely the same time, so one needs to be sent before the other. It’s insane to drop the other at that point when there’s plenty of bandwidth available, so routers will buffer a few packets. The solution is to reduce buffering to the minimum, but not below the minimum.
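A toy queue model shows the tradeoff: on an overloaded link, a deep buffer converts drops into ever-growing delay (all numbers invented):

```python
from collections import deque

def queue_delay(arrive, serve, buffer_size, ticks):
    """Toy bufferbloat model: `arrive` packets enter and `serve` leave
    per tick. A deep buffer absorbs the overload instead of dropping,
    so queued packets wait longer and longer before being sent."""
    q, dropped, delays = deque(), 0, [0]
    for t in range(ticks):
        for _ in range(arrive):
            if len(q) < buffer_size:
                q.append(t)          # remember arrival time
            else:
                dropped += 1         # buffer full: finally a drop
        for _ in range(serve):
            if q:
                delays.append(t - q.popleft())
    return max(delays), dropped

# Link overloaded 3-in/2-out: compare a deep buffer to a shallow one.
print(queue_delay(3, 2, buffer_size=50, ticks=60))  # huge worst-case delay
print(queue_delay(3, 2, buffer_size=5, ticks=60))   # more drops, low delay
```

The deep buffer postpones the drop signal TCP relies on, which is exactly the bufferbloat complaint.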
Consider Google’s HTTP/3 protocol, which moves from TCP to UDP. There are various reasons for doing this, which I won’t go into here. Notice that if routers insisted on being involved in the transport layer, retransmitting TCP packets locally, the HTTP/3 upgrade on the ends within the browser wouldn’t work. HTTP/3 takes into consideration information that is encrypted within the protocol, something routers don’t have access to.
This end-to-end decision was made back in the early 1970s, and the Internet has wildly evolved since. Our experience 45 years later is that this decision was a good one.
Now let’s discuss IPv6 and NAT.
As you know, IPv4 uses 32-bit network addresses, which have only 4-billion combinations, allowing only 4-billion devices on the Internet. However, there are more than 10-billion devices on the network currently, more than 20-billion by some estimates.
The way this is handled is network address translation or NAT. Your home router has one public IPv4 address, like 188.8.131.52. Then, internal to your home or business, you get a local private IPv4 address, likely in the range 10.x.x.x or 192.168.x.x. When you transmit packets, your local router changes the source address from the private one to the public one, and on incoming packets, changes the public address back to your private address.
It does this by tracking the TCP connection, including the source and destination TCP port numbers. It’s really a TCP/IP translator rather than just an IP translator.
This violates the end-to-end principle, but only a little bit. While the NAT is translating addresses, it’s still not doing things like acknowledging TCP packets. That’s still the job of the ends.
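A minimal sketch of that translation table (documentation-range addresses; it ignores TCP state tracking, entry timeouts, and port collisions):

```python
import itertools

class Nat:
    """Toy NAT: rewrite (private ip, port) to (public ip, port) on the
    way out, and reverse the mapping for replies coming back in."""
    def __init__(self, public_ip):
        self.public_ip = public_ip
        self.next_port = itertools.count(40000)  # next free public port
        self.out, self.back = {}, {}

    def outbound(self, src_ip, src_port):
        key = (src_ip, src_port)
        if key not in self.out:                  # new connection: allocate
            pub = (self.public_ip, next(self.next_port))
            self.out[key], self.back[pub] = pub, key
        return self.out[key]

    def inbound(self, dst_ip, dst_port):
        return self.back[(dst_ip, dst_port)]     # translate the reply back

nat = Nat("203.0.113.7")
pub = nat.outbound("192.168.1.20", 51515)
print(pub)                # ('203.0.113.7', 40000)
print(nat.inbound(*pub))  # ('192.168.1.20', 51515)
```

Note the router only rewrites headers; it never acknowledges or retransmits anything, which is why this is only a small violation of the principle.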
As we all know, IPv6 was created in order to expand the size of addresses, from 32-bits to 128-bits, making a gazillion addresses available. It’s often described in terms of the Internet running out of addresses and needing more, but that’s not the case. With NAT, the IPv4 Internet will never run out of addresses.
Instead, what IPv6 does is preserve the end-to-end principle, by keeping routers dumb.
I mention this because I find discussions of IPv6 a bit tedious. The standard litany is that we need IPv6 so that we can have more than 4-billion devices on the Internet, and people keep repeating this despite there being more than 10-billion devices on the IPv4 Internet.
As I stated above, this isn’t just a basic question, but the basic question. It’s at the center of a whole web of interlocking decisions that define the nature of cyberspace itself.
From the time the phone system was created in the 1800s up until the 2007 release of the iPhone, phone companies wanted to control the applications that users ran on their network. The OSI Model that you learn as the basis of networking isn’t what you think it is: it was designed with the AT&T phone network and IBM mainframes being in control over your applications.
The creation of TCP/IP and the Internet changed this, putting all the power in the hands of the ends of the network. The version of the OSI Model you end up learning is a retconned model, with all the original important stuff stripped out, and only the bits that apply to TCP/IP left remaining.
Of course, now we live in a world monopolized by the Google, the Amazon, and the Facebook, so we live in some sort of dystopic future. But it’s not a future dominated by AT&T.
|Heywood Floyd phones home in 2001: A Space Odyssey|
Post Syndicated from Robert Graham original https://blog.erratasec.com/2019/02/how-bezos-dick-pics-mightve-been-exposed.html
In the news, the National Enquirer has extorted Amazon CEO Jeff Bezos by threatening to publish the sext-messages/dick-pics he sent to his mistress. How did the National Enquirer get them? There are rumors that maybe Trump’s government agents or the “deep state” were involved in this sordid mess. The more likely explanation is that it was a simple hack. Teenage hackers regularly do such hacks — they aren’t hard.
To start with, from which end were they stolen? As a billionaire, I’m guessing Bezos himself has pretty good security, so I’m going to assume it was the recipient, his girlfriend, who was hacked.
The hack starts by finding the email address she uses. People use the same email address for both public and private purposes. There are lots of “people finder” services on the Internet that you can use to track this information down. These services are partly scams, using “dark patterns” to get you to spend tons of money on them without realizing it, so be careful.
Using one of these sites, I quickly found a couple of email accounts she’s used, one at HotMail, another at GMail. I’ve blocked out her address. I want to describe how easy the process is; I’m not trying to doxx her.
Next, I enter those email addresses into the website http://haveibeenpwned.com to see if hackers have ever stolen her account password. When hackers break into websites, they steal the account passwords, and then exchange them on the dark web with other hackers. The above website tracks this, helping you discover if one of your accounts has been so compromised. You should take this opportunity to enter your email address in this site to see if it’s been so “pwned”.
I find that her email addresses have been included in that recent dump of 770 million accounts called “Collection#1”.
The http://haveibeenpwned.com site won’t disclose the passwords, only the fact that they’ve been pwned. However, I have a copy of that huge Collection#1 dump, so I can search it myself to get her password. As this output shows, I get a few hits, all with the same password.
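The search itself is trivial. Here’s a sketch, with an entirely made-up sample list standing in for the dump (real dumps are just enormous email:password text files):

```python
def search_dump(lines, email):
    """Grep a breach dump (one 'email:password' per line) for every
    password associated with an address."""
    hits = []
    for line in lines:
        addr, _, password = line.partition(":")
        if addr.lower() == email.lower():
            hits.append(password)
    return hits

# Fabricated sample lines, not from any real dump.
dump = [
    "alice@example.com:hunter2",
    "bob@example.com:letmein",
    "alice@example.com:hunter2",
]
print(search_dump(dump, "ALICE@example.com"))  # ['hunter2', 'hunter2']
```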
At this point, I have a password, but not necessarily the password to access any useful accounts. For all I know, this was the password she chose for http://UnderwaterBasketWeaving.com, which wouldn’t be terribly useful to me.
But most people choose the same password across all their websites. Therefore, chances are good this password is the one she uses for email, for her Apple iPhone, for Facebook, and for Twitter.
I can’t know this, because even testing this password on those sites (though without accessing the information in her accounts) may be a violation of the law. I say “may be” because nobody knows, and I’m not willing to be the first test case to go to trial and find out.
But the National Enquirer is (evidently) a bunch of sleazeballs, so I’m assuming they grabbed a copy of Collection#1 and researched all the accounts of people that interest them to find out precisely this sort of information, and extort them with it. It’s real easy, as this post demonstrates. Or, if they didn’t do it themselves, they are widely known in the United States as one of the few media outlets who would pay for such information if an independent hacker were to obtain it.
So which accounts did the sexting images come from? Were they SMS/iMessage messages? Were they sent via Twitter/Facebook private messages, like with the Anthony Weiner scandal? Were they sent via email? Or was some encrypted app like Signal used?
If it’s Twitter or Facebook, then knowing the email address and passwords are enough. A hacker knowing this information can simply log in and view the old messages without the owner of the account knowing.
They do offer something called “two-factor authentication”, such as sending a numeric code to your phone that must be entered along with the password, but most people haven’t enabled this. Furthermore, using the phone as a second factor has its own hacks that skilled hackers can bypass. Phone numbers that belong to her are also on that “people finder” report I paid for:
If the sexy images were sent via email, then likewise simply knowing her email password would grant somebody access to them. GMail makes it really easy to access old emails that you don’t care about anymore. You can likewise enable “two-factor authentication” to protect your email account, with a better factor than just text messages to your phone.
If she has an iPhone, and the pics were sent as normal text messages, then hacking her Apple account might reveal them. By default, iPhones back these up to the cloud (“iCloud”). But not so fast. Apple has strong-armed their customers into enabling “two-factor authentication”, so the hacker would need to intercept the message.
But Apple text messages don’t always go across the phone system. When two iPhones (or other Apple devices) are involved, such messages go across Apple’s end-to-end encrypted iMessage service, which even state actors like the NSA and FBI have trouble penetrating. Apple does a better job than anybody protecting their phones, such that even if I knew the password to your account, I’m not sure I could steal your sexting images.
Lastly, maybe an encrypted messaging service like Signal was used. This is generally pretty secure, though they have a number of holes. For example, when receiving a sexting message, the user can simply take a screenshot. At that point, we are back to the “cloud backup” situation we were in before.
Maybe it wasn’t her phone/accounts that were hacked. Maybe she shared them with her siblings, friends, or agent. Diligent hackers go after those accounts as well. Famous celebrity hackers often get nude pics via this route, rather than hacking the celebrity directly. That “people finder” report includes a list of her close relatives, and enough information I can track down her other associates.
So here’s how you can avoid getting into the same situation:
- Setup different email accounts, ones you use for personal reasons that can easily be discovered, and ones you use in other situations that cannot be tied to your name.
- Don’t reuse passwords, as was done in this case, where all the accounts I found have the same password. At least one site where you’ve used that password will get hacked and have that password shared in the underground. Use a unique password for each major site. Knowing your GMail password should not give me access to your iPhone account, because that’s a different password. Write these passwords down on paper and store them in a safe place. For unimportant accounts you don’t care about, sure, go ahead and use the same password, or a common password pattern, for all of them. They’ll get hacked, but you don’t care.
- Check https://haveibeenpwned.com to see how many of your accounts have been pwned in hacker attacks against websites. Obviously, the passwords you used for those websites should never be used again.
- If you send sexy messages and you are a celebrity, there are large parts of the hacker underground who specialize in trying to steal them.
- No, I didn’t hack her accounts. However, her email addresses and some passwords are public on the Internet for hackers who look for them.
- Some passwords are public. That doesn’t mean the important passwords that would gain access to real accounts are public. I didn’t try them to find out.
- Even though I didn’t fully test this, people get their sensitive information (like nude pics) stolen this way all the time.
- Getting celebrity nude pics is fairly simple, such as through password reuse and phishing, so there is no reason to consider conspiracy theories at this time.
Post Syndicated from Robert Graham original https://blog.erratasec.com/2019/01/passwords-in-file.html
My dad is on some sort of committee for his local home owners association. He asked about saving all the passwords in a file stored on Microsoft’s cloud OneDrive, along with policy/procedures for the association. I assumed he called because I’m an internationally recognized cyberexpert. Or maybe he just wanted to chat with me*. Anyway, I thought I’d write up a response.
The most important rule of cybersecurity is that it depends upon the risks/costs. That means if what you want to do is write down the procedures for operating a garden pump, including the passwords, then that’s fine. This is because there’s not much danger of hackers exploiting this. On the other hand, if the question is passwords for the association’s bank account, then DON’T DO THIS. Such passwords should never be online. Instead, write them down and store the pieces of paper in a secure place.
OneDrive is secure, as much as anything is. The problem is that people aren’t secure. There’s probably one member of the home owner’s association who is constantly infecting themselves with viruses or falling victim to scams. This is the person who you are giving OneDrive access to. This is fine for the meaningless passwords, but very much not fine for bank accounts.
OneDrive also has some useful backup features. Thus, when one of your members infects themselves with ransomware, which will encrypt all the OneDrive’s contents, you can retrieve the old versions of the documents. I highly recommend groups like the home owner’s association use OneDrive. I use it as part of my Office 365 subscription for $99/year.
Just don’t do this for banking passwords. In fact, not only should you not store such a password online, you should strongly consider getting “two factor authentication” setup for the account. This is a system where you need an additional hardware device/token in addition to a password (in some cases, your phone can be used as the additional device). This may not work if multiple people need to access a common account, but then, you should have multiple passwords, for each individual, in such cases. Your bank should have descriptions of how to set this up. If your bank doesn’t offer two factor authentication for its websites, then you really need to switch banks.
For individuals, write your passwords down on paper. For elderly parents, write down a copy and give it to your kids. It should go without saying: store that paper in a safe place, ideally a safe, not a post-it note glued to your monitor. Again, this is for your important passwords, like for bank accounts and e-mail. For your Spotify or Pandora accounts (music services), then security really doesn’t matter.
Lastly, the way hackers most often break into things like bank accounts is because people use the same password everywhere. When one site gets hacked, those passwords are then used to hack accounts on other websites. Thus, for important accounts, don’t reuse passwords, make them unique for just that account. Since you can’t remember unique passwords for every account, write them down.
You can check if your password has been hacked this way by checking http://haveibeenpwned.com and entering your email address. Entering my dad’s email address, I find that his accounts at Adobe, LinkedIn, and Disqus have been discovered by hackers (due to hacks of those websites) and published. I sure hope that whatever those passwords were, they are not the same as or similar to his passwords for GMail or his bank account.
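For passwords specifically, the site’s sibling Pwned Passwords service lets you check without ever revealing the password, using k-anonymity: you send only the first 5 hex digits of the SHA-1 hash and scan the returned suffixes locally. Here’s the local half of that exchange (it computes the query but doesn’t make the network request):

```python
import hashlib

def pwned_range_query(password):
    """Compute the k-anonymity query for the Pwned Passwords range API:
    only the 5-character hash prefix goes over the wire; you then check
    whether your hash suffix appears in the response."""
    digest = hashlib.sha1(password.encode("utf-8")).hexdigest().upper()
    prefix, suffix = digest[:5], digest[5:]
    url = "https://api.pwnedpasswords.com/range/" + prefix
    return url, suffix

url, suffix = pwned_range_query("password")
print(url)  # ends in /range/5BAA6 -- the password itself never leaves the machine
```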
* the lame joke at the top was my dad’s, so don’t blame me 🙂
Post Syndicated from Robert Graham original https://blog.erratasec.com/2018/12/notes-on-build-hardening.html
I thought I’d comment on a paper about “build safety” in consumer products, describing how software is built to harden it against hackers trying to exploit bugs.
What is build safety?
However, C/C++ is “unsafe”, and is the most popular language for building stuff that interacts with the network. In other cases, while the language itself may be safe, it’ll use underlying infrastructure (“libraries”) written in C/C++. When we are talking about hardening builds, making them safe or secure, we are talking about C/C++.
How software is built
The way stack guards work is to stick a carefully constructed value in between each stack frame, known as a canary. Right before the function exits, it’ll check this canary in order to validate it hasn’t been corrupted. If corruption is detected, the program exits, or crashes, to prevent worse things from happening.
Since this feature was added, many vulnerabilities have been found that evade the default settings. Recently, -fstack-protector-strong has been added to gcc that significantly increases the number of protected functions. The setting -fstack-protector-all is still avoided due to performance cost, as even trivial functions which can’t possibly overflow are still instrumented.
The other major dynamic memory structure is known as the heap (or the malloc region). When a function returns, everything in its scratchpad memory on the stack will be lost. If something needs to stay around longer than this, then it must be allocated from the heap rather than the stack.
Whereas stack guards change the code generated by the compiler, heap guards don’t. Instead, the heap exists in library functions.
The most common library added by a linker is known as glibc, the standard GNU C library. However, this library is about 1.8-megabytes in size. Many of the home devices in the paper above may only have 4-megabytes total flash drive space, so this is too large. Instead, most of these home devices use an alternate library, something like uClibc or musl, which is only 0.6-megabytes in size. In addition, regardless of the standard library used for other features, a program may still replace the heap implementation with a custom one, such as jemalloc.
Even if using a library that does heap guards, it may not be enabled in the software. If using glibc, a program can still turn off checking internally (using mallopt), or it can be disabled externally, before running a program, by setting the environment variable MALLOC_CHECK_.
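To illustrate the idea, here’s a toy heap guard in Python. (glibc’s real checks validate chunk metadata rather than a literal canary; this sketch uses a canary purely to show the detection principle.)

```python
CANARY = b"\xde\xad\xbe\xef"

class GuardedHeap:
    """Toy heap guard: plant canary bytes just past each allocation and
    verify them at free() time to catch buffer overflows."""
    def __init__(self):
        self.chunks = {}

    def malloc(self, size):
        buf = bytearray(size + len(CANARY))
        buf[size:] = CANARY                # guard bytes after user data
        self.chunks[id(buf)] = (buf, size)
        return id(buf), buf

    def free(self, handle):
        buf, size = self.chunks.pop(handle)
        if bytes(buf[size:]) != CANARY:    # an overflow smashed the guard
            raise RuntimeError("heap corruption detected")

heap = GuardedHeap()
h, buf = heap.malloc(8)
buf[0:8] = b"12345678"         # in-bounds write: free succeeds
heap.free(h)

h, buf = heap.malloc(8)
buf[0:12] = b"123456789abc"    # 12 bytes into an 8-byte chunk
try:
    heap.free(h)
except RuntimeError as e:
    print(e)                   # heap corruption detected
```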
The above paper didn’t evaluate heap guards. I assume this is because it can be so hard to check.
Obviously a useful mitigation step would be to randomize the layout of memory, so nothing is in a predictable location. This is known as address space layout randomization or ASLR.
The word layout comes from the fact that when a program runs, it’ll consist of several segments of memory. The basic list of segments is:
- the executable code
- static values (like strings)
- global variables
- heap (growable)
- stack (growable)
- mmap()/VirtualAlloc() (random location)
Historically, the first few segments are laid out sequentially, starting from address zero. Remember that user-mode programs have virtual memory, so what’s located starting at 0 for one program is different from another.
As mentioned above, the heap and the stack need to be able to grow as functions are called and data is allocated from the heap. The way this is done is to place the heap after all the fixed-sized segments, so that it can grow upwards. Then, the stack is placed at the top of memory, and grows downward (as functions are called, stack frames are added at the bottom).
Sometimes a program may request memory chunks outside the heap/stack directly from the operating system, such as using the mmap() system call on Linux, or the VirtualAlloc() system call on Windows. This will usually be placed somewhere in the middle between the heap and stack.
With ASLR, all these locations are randomized, and can appear anywhere in memory. Instead of growing contiguously, the heap has to sometimes jump around things already allocated in its way, which is a fairly easy problem to solve, since the heap isn’t really contiguous anyway (as chunks are allocated and freed from the middle). However, the stack has a problem. It must grow contiguously, and if there is something in its way, the program has little choice but to exit (i.e. crash). Usually, that’s not a problem, because the stack rarely grows very large. If it does grow too big, it’s usually because of a bug that requires the program to crash anyway.
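A sketch of the difference (all addresses invented for illustration):

```python
import random

SEGMENTS = ["code", "static", "globals", "heap", "mmap", "stack"]

def layout(aslr, seed=None):
    """Toy memory layout: without ASLR, segments sit at fixed, predictable
    addresses; with it, each segment base is randomized."""
    if not aslr:
        base, out = 0x10000, {}
        for seg in SEGMENTS:          # laid out sequentially, always the same
            out[seg] = base
            base += 0x100000
        return out
    rng = random.Random(seed)
    return {seg: rng.randrange(0x10000, 0x7FFF0000, 0x1000)
            for seg in SEGMENTS}

print({s: hex(a) for s, a in layout(aslr=False).items()})
print(layout(aslr=True, seed=1) != layout(aslr=True, seed=2))
```

Without ASLR an exploit can hardcode the address of anything; with it, every run is a fresh guessing game.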
ASLR for code
The problem for executable code is that for ASLR to work, it must be made position independent. Historically, when code would call a function, it would jump to the fixed location in memory where that function was known to be located; thus it was dependent on its position in memory.
To fix this, code can be changed to jump to relative positions instead, where the code jumps at an offset from wherever it was jumping from.
To enable this on the compiler, the flag -fPIE (position independent executable) is used. Or, if building just a library and not a full executable program, the flag -fPIC (position independent code) is used.
Then, when linking a program composed of compiled files and libraries, the flag -pie is used. In other words, use -pie -fPIE when compiling executables, and -fPIC when compiling for libraries.
When compiled this way, exploits will no longer be able to jump directly into known locations for code.
ASLR for libraries
The above paper glossed over details about ASLR, probably just looking at whether an executable program was compiled to be position independent. However, code links to shared libraries that may or may not likewise be position independent, regardless of the settings for the main executable.
I’m not sure it matters for the current paper, as most programs had position independence disabled, but in the future, a comprehensive study will need to look at libraries as a separate case.
ASLR for other segments
The above paper equated ASLR with randomized locations for code, but ASLR also applies to the heap and stack. The randomization status of these segments is independent of whatever was configured for the main executable.
As far as I can tell, modern Linux systems will randomize these locations, regardless of build settings. Thus, for build settings, it’s just code randomization that needs to be worried about. But when running the software, care must be taken that the operating system will behave correctly. A lot of devices, especially old ones, use older versions of Linux that may not have this randomization enabled, or use custom kernels where it has been turned off.
The classic stack-overflow exploit works in three steps:
- figure out where the stack is located (mitigated by ASLR)
- overwrite the stack frame control structure (mitigated by stack guards)
- execute code in the buffer (mitigated by a non-executable stack)
This option can be set with -Wl,-z,noexecstack when compiling both the executable and the libraries. This is the default, so you shouldn’t need to do anything special. However, as the paper points out, there are things that get in the way if you aren’t careful. The setting is more what you’d call “guidelines” than actual “rules”. Despite setting this flag, building software may result in an executable stack.
So, you may want to verify it after building software, such as by running “readelf -l [programname]”. This will tell you what the stack has been configured to be.
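You could script that check. The sample lines below follow the shape readelf prints, but they are assumed/abbreviated, not captured from a real binary:

```python
def stack_is_executable(readelf_output):
    """Scan `readelf -l` output for the GNU_STACK program header: flags
    'RW' mean a non-executable stack, 'RWE' an executable one."""
    for line in readelf_output.splitlines():
        if "GNU_STACK" in line:
            return "RWE" in line
    return None   # header missing: the kernel falls back to its default

# Assumed sample header lines, abbreviated.
good = "  GNU_STACK  0x000000 0x00000000 0x00000000 0x00000 0x00000 RW  0x10"
bad  = "  GNU_STACK  0x000000 0x00000000 0x00000000 0x00000 0x00000 RWE 0x10"
print(stack_is_executable(good))  # False
print(stack_is_executable(bad))   # True
```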
Format string bugs
-Wformat -Wformat-security -Werror=format-security
Warnings and static analysis
What about sanitizers?
-Wall -Wformat -Wformat-security -Werror=format-security -fstack-protector -pie -fPIE -D_FORTIFY_SOURCE=2 -O2 -Wl,-z,relro -Wl,-z,now -Wl,-z,noexecstack
If you are more paranoid, these options would be:
-Wall -Wformat -Wformat-security -Wstack-protector -Werror -pedantic -fstack-protector-all --param ssp-buffer-size=1 -pie -fPIE -D_FORTIFY_SOURCE=2 -O1 -Wl,-z,relro -Wl,-z,now -Wl,-z,noexecstack
Post Syndicated from Robert Graham original https://blog.erratasec.com/2018/12/notes-about-hacking-with-drop-tools.html
In this report, Kaspersky found Eastern European banks hacked with Raspberry Pis and “Bash Bunnies” (DarkVishnya). I thought I’d write up some more detailed notes on this.
A common hacking/pen-testing technique is to drop a box physically on the local network. On this blog, there are articles going back 10 years discussing this. In the old days, this was done with $200 “netbook” (cheap notebook computers). These days, it can be done with $50 “Raspberry Pi” computers, or even $25 consumer devices reflashed with Linux.
A “Raspberry Pi” is a $35 single board computer, for which you’ll need to add about another $15 worth of stuff to get it running (power supply, flash drive, and cables). These are extremely popular hobbyist computers that are used everywhere from home servers, robotics, and hacking. They have spawned a large number of clones, like the ODROID, Orange Pi, NanoPi, and so on. With a quad-core, 1.4 GHz, single-issue processor, 2 gigs of RAM, and typically at least 8 gigs of flash, these are pretty powerful computers.
Typically what you’d do is install Kali Linux. This is a Linux “distro” that contains all the tools hackers want to use.
You then drop this box physically on the victim’s network. We often called these “dropboxes” in the past, but now that there’s a cloud service called “Dropbox”, this becomes confusing, so I guess we can call them “drop tools”. The advantage of using something like a Raspberry Pi is that it’s cheap: once dropped on a victim’s network, you probably won’t ever get it back again.
Gaining physical access to even secure banks isn’t that hard. Sure, getting to the money is tightly controlled, but other parts of the bank aren’t nearly as secure. One good trick is to pretend to be a banking inspector. At least in the United States, they’ll quickly bend over and spread them if they think you are a regulator. Or, you can pretend to be a maintenance worker there to fix the plumbing. All it takes is a uniform with a logo and what appears to be a valid work order. If questioned, whip out the clipboard and ask them to sign off on the work. Or, if all else fails, just walk in brazenly as if you belong.
Once inside the physical network, you need to find a place to plug something in. Ethernet and power plugs are often underneath/behind furniture, so that’s not hard. You might find access to a wiring closet somewhere, as Aaron Swartz famously did. You’ll usually have to connect via Ethernet, as it requires no authentication/authorization. If you could connect via WiFi, you could probably do it outside the building using directional antennas without going through all this.
Now that you’ve got your evil box installed, there is the question of how you remotely access it. It’s almost certainly firewalled, preventing any inbound connection.
One choice is to configure it for outbound connections. When doing pentests, I configure reverse SSH command-prompts to a command-and-control server. Another alternative is to create a SSH Tor hidden service. There are a myriad of other ways you might do this. They all suffer the problem that anybody looking at the organization’s outbound traffic can notice these connections.
Another alternative is to use the WiFi. This allows you to physically sit outside in the parking lot and connect to the box. This can sometimes be detected using WiFi intrusion prevention systems, though it’s not hard to get around that. The downside is that it puts you in some physical jeopardy, because you have to be physically near the building. However, you can mitigate this in some cases, such as by sticking a second Raspberry Pi in a nearby bar that is close enough to connect, and then using the bar’s Internet connection to hop-scotch on in.
The third alternative, which appears to be the one used in the article above, is to use a 3G/4G modem. You can get such modems for another $15 to $30. You can get “data only” plans, especially through MVNOs, for around $1 to $5 a month, especially prepaid plans that require no identification. These are “low bandwidth” plans designed for IoT command-and-control where only a few megabytes are transferred per month, which is perfect for command-line access to these drop tools.
With all this, you are looking at around $75 for the hardware, software, and 3G/4G plan for a year to remotely connect to a box on the target network.
As an alternative, you might instead use a cheap consumer router reflashed with the OpenWRT Linux distro. A good example would be a GL.iNet device for $19. This is a cheap Chinese manufacturer that makes consumer routers designed specifically for us hackers who want to do creative things with them.
The benefit of such devices is that they look like the sorts of consumer devices that one might find on a local network. Raspberry Pi devices stand out as something suspicious, should they ever be discovered, but a reflashed consumer device looks trustworthy.
The problem with these devices is that they are significantly less powerful than a Raspberry Pi. The typical processor is usually single core around 500 MHz, and the typical memory is only around 32 to 128 megabytes. Moreover, while many hacker tools come precompiled for OpenWRT, you’ll end up having to build most of the tools yourself, which can be difficult and frustrating.
Once you’ve got your drop tool plugged into the network, then what do you do?
One question is how noisy you want to be, and how good you think the defenders are. The classic thing to do is run a port scanner like nmap or masscan to map out the network. This is extremely noisy and even clueless companies will investigate.
This can be partly mitigated by spoofing your MAC and IP addresses. However, a properly run network will still be able to track back the addresses to the proper switch port. Therefore, you might want to play with a bunch of layer 2 things. For example, passively watch for devices that get turned off at night, then spoof their MAC addresses during your night-time scans, so that when they come back in the morning, the defenders will trace the problem back to the wrong device.
An easier thing is to passively watch what’s going on. In purely passive mode, they really can’t detect that you exist at all on the network, other than the fact that the switch port reports something connected. By passively looking at ARP packets, you can get a list of all the devices on your local segment. By passively looking at Windows broadcasts, you can map out large parts of what’s going on with Windows. You can also find MacBooks, NAT routers, SIP phones, and so on.
This allows you to then target individual machines rather than causing a lot of noise on the network, and therefore go undetected.
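The passive-inventory idea can be sketched like this (the observations are fabricated; in practice a sniffer such as tcpdump or scapy would supply them):

```python
def passive_inventory(arp_observations):
    """Build a host inventory purely from overheard ARP traffic: every
    request/reply leaks the sender's IP and MAC, and we never transmit."""
    hosts = {}
    for sender_ip, sender_mac in arp_observations:
        hosts[sender_mac.lower()] = sender_ip   # last IP seen for this MAC
    return hosts

# Fabricated sniffer output: (sender IP, sender MAC) pairs.
observed = [
    ("192.168.1.1",  "AA:BB:CC:00:00:01"),
    ("192.168.1.20", "AA:BB:CC:00:00:14"),
    ("192.168.1.1",  "aa:bb:cc:00:00:01"),   # the router, seen again
]
print(passive_inventory(observed))  # two unique hosts
```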
If you’ve got a target machine, the typical procedure is to port scan it with nmap, find the versions of software running that may have known vulnerabilities, then use metasploit to exploit those vulnerabilities. If it’s a web server, then you might use something like burpsuite in order to find things like SQL injection. If it’s a Windows desktop/server, then you’ll start by looking for unauthenticated file shares, man-in-the-middle connections, or exploit it with something like EternalBlue.
The sorts of things you can do is endless, just read any guide on how to use Kali Linux, and follow those examples.
Note that your command-line connection may be a low-bandwidth 3G/4G connection, but when it’s time to exfiltrate data, you’ll probably use the corporate Internet connection to transfer gigabytes of data.
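Why the corporate link matters: exfiltration is typically compressed and broken into chunks so it blends into normal web traffic rather than tripping alerts on one huge transfer. A sketch of that preparation step (function name my own; the upload transport is a separate concern):

```python
import gzip

def chunk_for_exfil(data, chunk_size=1 << 20):
    """Compress a blob and split it into fixed-size chunks.

    Defenders alert on a single multi-gigabyte transfer far more
    readily than on a stream of ordinary-looking requests, so data
    is compressed and dribbled out in pieces (here, 1 MB each).
    """
    blob = gzip.compress(data)
    return [blob[i:i + chunk_size] for i in range(0, len(blob), chunk_size)]
```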
USB hacking tools
The article above described not only drop tools attached to the network, but also tools attached via USB. This is a wholly separate form of hacking.
According to the description, the hackers used a BashBunny, a $100 USB device. It’s a computer that can emulate things like a keyboard.
One set of attacks comes through the virtual keyboard and mouse. The device can generate invisible mouse/keyboard activity in the background to defeat the automatic screen lock, then, presumably at night, run commands that download and execute evil scripts. A good example is the “fileless PowerShell” scripts mentioned in the article above.
This may be combined with emulation of a flash drive. In the old days, hostile flash drives could directly infect a Windows computer once plugged in. These days, that won’t happen without interaction by the user — interaction using a keyboard/mouse, which the device can also emulate.
Another set of attacks is pretending to be a USB Ethernet connection. This allows network attacks, such as those mentioned above, to travel across the USB port, without being detectable on the real network. It also allows additional tricks. For example, it can configure itself to be the default route for Internet (rather than local) access, redirecting all web access to a hostile device on the Internet. In other words, the device will usually be limited in that it doesn’t itself have access to the Internet, but it can confuse the network configuration of the Windows device to cause other bad effects.
Another creative use is to emulate a serial port. This works for a lot of consumer devices and things running Linux. This will get you a shell directly on the device, or a login that accepts a default or well-known backdoor password. This is a widespread vulnerability because it’s so unexpected.
In theory, any USB device could be emulated. Today’s Windows, Linux, and macOS machines have a lot of device drivers that are full of vulnerabilities that can be exploited. However, I don’t see any easy-to-use hacking toolkits for this, so it’s still mostly theoretical.
Every security professional should have experience with this, whether on an actual Raspberry Pi or just a VM on a laptop running Kali. They should run nmap on their network, run burpsuite against their intranet websites, and so on. Of course, this should only be done with the knowledge and permission of their bosses, and ideally their boss’s bosses.