All posts by Bruce Schneier

Alex Stamos on Content Moderation and Security

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2019/01/alex_stamos_on_.html

Former Facebook CISO Alex Stamos argues that increasing political pressure on social media platforms to moderate content will give them a pretext to turn all end-to-end crypto off — which would be more profitable for them and bad for society.

If we ask tech companies to fix ancient societal ills that are now reflected online with moderation, then we will end up with huge, democratically-unaccountable organizations controlling our lives in ways we never intended. And those ills will still exist below the surface.

Why Internet Security Is So Bad

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2019/01/why_internet_se.html

I recently read two different essays that make the point that while Internet security is terrible, it really doesn’t affect people enough to make it an issue.

This is true, and is something I worry will change in a world of physically capable computers. Automation, autonomy, and physical agency will make computer security a matter of life and death, and not just a matter of data.

Using a Fake Hand to Defeat Hand-Vein Biometrics

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2019/01/using_a_fake_ha.html

Nice work:

One attraction of a vein-based system over, say, a more traditional fingerprint system is that it may typically be harder for an attacker to learn how a user’s veins are positioned under their skin than it is to lift a fingerprint from a held object or a high-quality photograph, for example.

But with that said, Krissler and Albrecht first took photos of their vein patterns. They used a converted SLR camera with the infrared filter removed; this allowed them to see the pattern of the veins under the skin.

“It’s enough to take photos from a distance of five meters, and it might work to go to a press conference and take photos of them,” Krissler explained. In all, the pair took more than 2,500 pictures over 30 days to perfect the process and find an image that worked.

They then used that image to make a wax model of their hands which included the vein detail.
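
For the curious, here is a rough sketch of the image-processing half of such an attack: pulling a vein-like pattern out of a near-infrared photograph with OpenCV. The file names and threshold parameters are invented, and this only illustrates the general idea, not the researchers’ actual pipeline.

```python
import cv2

# Load the infrared photo as a single-channel grayscale image
# ("ir_hand_photo.png" is a placeholder file name).
img = cv2.imread("ir_hand_photo.png", cv2.IMREAD_GRAYSCALE)

# Boost local contrast so subsurface veins stand out from the surrounding skin.
clahe = cv2.createCLAHE(clipLimit=3.0, tileGridSize=(8, 8))
enhanced = clahe.apply(img)

# Smooth sensor noise before thresholding.
blurred = cv2.GaussianBlur(enhanced, (5, 5), 0)

# Veins absorb more near-infrared light and appear darker; adaptive
# thresholding picks them out against the brighter tissue around them.
veins = cv2.adaptiveThreshold(blurred, 255, cv2.ADAPTIVE_THRESH_MEAN_C,
                              cv2.THRESH_BINARY_INV, 25, 7)

cv2.imwrite("vein_pattern.png", veins)
```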

Slashdot thread.

Security Vulnerabilities in Cell Phone Systems

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2019/01/security_vulner_19.html

Good essay on the inherent vulnerabilities in the cell phone standards and the market barriers to fixing them.

So far, industry and policymakers have largely dragged their feet when it comes to blocking cell-site simulators and SS7 attacks. Senator Ron Wyden, one of the few lawmakers vocal about this issue, sent a letter in August encouraging the Department of Justice to “be forthright with federal courts about the disruptive nature of cell-site simulators.” No response has ever been published.

The lack of action could be because it is a big task — there are hundreds of companies and international bodies involved in the cellular network. The other reason could be that intelligence and law enforcement agencies have a vested interest in exploiting these same vulnerabilities. But law enforcement has other effective tools that are unavailable to criminals and spies. For example, the police can work directly with phone companies, serving warrants and Title III wiretap orders. In the end, eliminating these vulnerabilities is just as valuable for law enforcement as it is for everyone else.

As it stands, there is no government agency that has the power, funding and mission to fix the problems. Large companies such as AT&T, Verizon, Google and Apple have not been public about their efforts, if any exist.

Machine Learning to Detect Software Vulnerabilities

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2019/01/machine_learnin.html

No one doubts that artificial intelligence (AI) and machine learning (ML) will transform cybersecurity. We just don’t know how, or when. While the literature generally focuses on the different uses of AI by attackers and defenders — and the resultant arms race between the two — I want to talk about software vulnerabilities.

All software contains bugs. The reason is basically economic: The market doesn’t want to pay for quality software. With a few exceptions, such as the space shuttle, the market prioritizes fast and cheap over good. The result is that any large modern software package contains hundreds or thousands of bugs.

Some percentage of bugs are also vulnerabilities, and a percentage of those are exploitable vulnerabilities, meaning an attacker who knows about them can attack the underlying system in some way. And some percentage of those are discovered and used. This is why your computer and smartphone software is constantly being patched; software vendors are fixing bugs that are also vulnerabilities that have been discovered and are being used.

Everything would be better if software vendors found and fixed all bugs during the design and development process, but, as I said, the market doesn’t reward that kind of delay and expense. AI, and machine learning in particular, has the potential to forever change this trade-off.

The problem of finding software vulnerabilities seems well-suited for ML systems. Going through code line by line is just the sort of tedious problem that computers excel at, if we can only teach them what a vulnerability looks like. There are challenges with that, of course, but there is already a healthy amount of academic literature on the topic — and research is continuing. There’s every reason to expect ML systems to get better at this as time goes on, and some reason to expect them to eventually become very good at it.
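
As a toy illustration of what “teaching a computer what a vulnerability looks like” can mean, here is a minimal supervised-learning sketch: code snippets labeled vulnerable or benign, vectorized, and fed to a linear classifier. The training data below is invented, and real research systems use far richer program representations (ASTs, data-flow graphs, deep models); this is only a sketch of the framing.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical training data: raw source snippets and labels (1 = vulnerable).
snippets = [
    "strcpy(buf, user_input);",                            # unbounded copy
    "strncpy(buf, user_input, sizeof(buf) - 1);",
    'query = "SELECT * FROM t WHERE id=" + user_id;',      # SQL built by concatenation
    'cursor.execute("SELECT * FROM t WHERE id=%s", (user_id,));',
]
labels = [1, 0, 1, 0]

# Character n-grams capture API names and quoting patterns in code.
model = make_pipeline(
    TfidfVectorizer(analyzer="char_wb", ngram_range=(3, 5)),
    LogisticRegression(max_iter=1000),
)
model.fit(snippets, labels)

# Score new code; anything above some threshold gets flagged for human review.
candidate = "memcpy(dst, src, attacker_len);"
print(model.predict_proba([candidate])[0][1])
```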

Finding vulnerabilities can benefit both attackers and defenders, but it’s not a fair fight. When an attacker’s ML system finds a vulnerability in software, the attacker can use it to compromise systems. When a defender’s ML system finds the same vulnerability, he or she can try to patch the system or program network defenses to watch for and block code that tries to exploit it.

But when the same system is in the hands of a software developer who uses it to find the vulnerability before the software is ever released, the developer fixes it so it can never be used in the first place. The ML system will probably be part of his or her software design tools and will automatically find and fix vulnerabilities while the code is still in development.

Fast-forward a decade or so into the future. We might say to each other, “Remember those years when software vulnerabilities were a thing, before ML vulnerability finders were built into every compiler and fixed them before the software was ever released? Wow, those were crazy years.” Not only is this future possible, but I would bet on it.

Getting from here to there will be a dangerous ride, though. Those vulnerability finders will first be unleashed on existing software, giving attackers hundreds if not thousands of vulnerabilities to exploit in real-world attacks. Sure, defenders can use the same systems, but many of today’s Internet of Things systems have no engineering teams to write patches and no ability to download and install patches. The result will be hundreds of vulnerabilities that attackers can find and use.

But if we look far enough into the horizon, we can see a future where software vulnerabilities are a thing of the past. Then we’ll just have to worry about whatever new and more advanced attack techniques those AI systems come up with.

This essay previously appeared on SecurityIntelligence.com.

New Attack Against Electrum Bitcoin Wallets

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2019/01/new_attack_agai_3.html

This is clever:

How the attack works:

  • Attacker added tens of malicious servers to the Electrum wallet network.
  • Users of legitimate Electrum wallets initiate a Bitcoin transaction.
  • If the transaction reaches one of the malicious servers, these servers reply with an error message that urges users to download a wallet app update from a malicious website (GitHub repo).
  • User clicks the link and downloads the malicious update.
  • When the user opens the malicious Electrum wallet, the app asks the user for a two-factor authentication (2FA) code. This is a red flag, as these 2FA codes are only requested before sending funds, and not at wallet startup.
  • The malicious Electrum wallet uses the 2FA code to steal the user’s funds and transfer them to the attacker’s Bitcoin addresses.

The problem here is that Electrum servers are allowed to trigger popups with custom text inside users’ wallets.
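
A sketch of the obvious mitigation, treating server-supplied error text as untrusted before it ever reaches a GUI popup, might look like the following. This is illustrative only, not Electrum’s actual code; the point is that a malicious server should not be able to render a convincing “please download an update” message with a clickable link.

```python
import html
import re

MAX_LEN = 200  # cap length so a popup cannot host a full phishing pitch

def sanitize_server_error(raw: str) -> str:
    # Strip HTML tags so the message cannot carry formatting or clickable links.
    text = re.sub(r"<[^>]*>", "", raw)
    # Decode entities, then neutralize anything that still looks like a URL.
    text = html.unescape(text)
    text = re.sub(r"https?://\S+", "[link removed]", text)
    return text[:MAX_LEN]

msg = "<b>Update required!</b> Download the new wallet at https://evil.example/electrum"
print(sanitize_server_error(msg))
# -> Update required! Download the new wallet at [link removed]
```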

Friday Squid Blogging: Squid-Focused Menus in Croatia

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2018/12/friday_squid_bl_657.html

This is almost over:

From 1 December 2018 to 6 January 2019, Days of Adriatic squid will take place at restaurants all over north-west Istria. Restaurants will be offering affordable full-course menus based on Adriatic squid, combined with quality local olive oil and fine wines.

As usual, you can also use this squid post to talk about the security stories in the news that I haven’t covered.

Read my blog posting guidelines here.

Click Here to Kill Everybody Available as an Audiobook

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2018/12/click_here_to_k_2.html

Click Here to Kill Everybody is finally available on Audible.com. I have ten download codes. Not having anything better to do with them, here they are:

  1. HADQSSFC98WCQ
  2. LDLMC6AJLBDJY
  3. YWSY8CXYMQNJ6
  4. JWM7SGNUXX7DB
  5. UPKAJ6MHB2LEF
  6. M85YN36UR926H
  7. 9ULE4NFAH2SLF
  8. GU7A79GSDCXAT
  9. 9K8Q4RX6DKL84
  10. M92GB246XY7JN

Congratulations to the first ten people to try to use them.

EDITED TO ADD (12/30): All the codes are long gone.

Massive Ad Fraud Scheme Relied on BGP Hijacking

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2018/12/massive_ad_frau.html

This is a really interesting story of an ad fraud scheme that relied on hijacking the Border Gateway Protocol:

Members of 3ve (pronounced “eve”) used their large reservoir of trusted IP addresses to conceal a fraud that otherwise would have been easy for advertisers to detect. The scheme employed a thousand servers hosted inside data centers to impersonate real human beings who purportedly “viewed” ads that were hosted on bogus pages run by the scammers themselves — who then received a check from ad networks for these billions of fake ad impressions. Normally, a scam of this magnitude coming from such a small pool of server-hosted bots would have stuck out to defrauded advertisers. To camouflage the scam, 3ve operators funneled the servers’ fraudulent page requests through millions of compromised IP addresses.

About one million of those IP addresses belonged to computers, primarily based in the US and the UK, that attackers had infected with botnet software strains known as Boaxxe and Kovter. But at the scale employed by 3ve, not even that number of IP addresses was enough. And that’s where the BGP hijacking came in. The hijacking gave 3ve a nearly limitless supply of high-value IP addresses. Combined with the botnets, the ruse made it seem like millions of real people from some of the most affluent parts of the world were viewing the ads.

Lots of details in the article.
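
For readers wondering how defenders notice this sort of thing, here is a minimal sketch of origin-AS monitoring: compare the AS currently announcing a prefix against the origin you expect from registry or RPKI data. The prefixes and AS numbers below are invented, and real monitoring works from live BGP feeds and RPKI validation rather than a static table.

```python
# Expected prefix -> origin ASN, e.g. from IRR/RPKI data (values invented).
expected_origin = {
    "192.0.2.0/24": 64500,
    "198.51.100.0/24": 64501,
}

# Observed announcements, e.g. parsed from a BGP route collector (invented).
observed = [
    ("192.0.2.0/24", 64500),    # matches the registered origin
    ("198.51.100.0/24", 64999), # different origin: possible hijack
]

for prefix, origin in observed:
    legit = expected_origin.get(prefix)
    if legit is not None and origin != legit:
        print(f"ALERT: {prefix} announced by AS{origin}, expected AS{legit}")
```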

An aphorism I often use in my talks is “expertise flows downhill: today’s top-secret NSA programs become tomorrow’s PhD theses and the next day’s hacking tools.” This is an example of that. BGP hacking — known as “traffic shaping” inside the NSA — has long been a tool of national intelligence agencies. Now it is being used by cybercriminals.

EDITED TO ADD (1/2): Classified NSA presentation on “network shaping.” I don’t know if there is a difference inside the NSA between the two terms.

Human Rights by Design

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2018/12/human_rights_by.html

Good essay: “Advancing Human-Rights-By-Design In The Dual-Use Technology Industry,” by Jonathon Penney, Sarah McKune, Lex Gill, and Ronald J. Deibert:

But businesses can do far more than these basic measures. They could adopt a “human-rights-by-design” principle whereby they commit to designing tools, technologies, and services to respect human rights by default, rather than permit abuse or exploitation as part of their business model. The “privacy-by-design” concept has gained currency today thanks in part to the European Union General Data Protection Regulation (GDPR), which requires it. The overarching principle is that companies must design products and services with the default assumption that they protect privacy, data, and information of data subjects. A similar human-rights-by-design paradigm, for example, would prevent filtering companies from designing their technology with features that enable large-scale, indiscriminate, or inherently disproportionate censorship capabilities — like the Netsweeper feature that allows an ISP to block entire country top level domains (TLDs). DPI devices and systems could be configured to protect against the ability of operators to inject spyware in network traffic or redirect users to malicious code rather than facilitate it. And algorithms incorporated into the design of communications and storage platforms could account for human rights considerations in addition to business objectives. Companies could also join multi-stakeholder efforts like the Global Network Initiative (GNI), through which technology companies (including Google, Microsoft, and Yahoo) have taken the first step toward principles like transparency, privacy, and freedom of expression, as well as to self-reporting requirements and independent compliance assessments.

Glitter Bomb against Package Thieves

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2018/12/glitter_bomb_ag.html

Stealing packages from unattended porches is a rapidly rising crime, as more of us order more things by mail. One person hid a glitter bomb and a video recorder in a package, posting the results when thieves opened the box. At least, that’s what might have happened. At least some of the video was faked, which puts the whole thing into question.

That’s okay, though. Santa is faked, too. Happy whatever you’re celebrating.

MD5 and SHA-1 Still Used in 2018

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2018/12/md5_and_sha-1_s.html

Last week, the Scientific Working Group on Digital Evidence published a draft document — “SWGDE Position on the Use of MD5 and SHA1 Hash Algorithms in Digital and Multimedia Forensics” — where it accepts the use of MD5 and SHA-1 in digital forensics applications:

While SWGDE promotes the adoption of SHA2 and SHA3 by vendors and practitioners, the MD5 and SHA1 algorithms remain acceptable for integrity verification and file identification applications in digital forensics. Because of known limitations of the MD5 and SHA1 algorithms, only SHA2 and SHA3 are appropriate for digital signatures and other security applications.

This is technically correct: the current state of cryptanalysis against MD5 and SHA-1 allows for collisions, but not for pre-images. Still, it’s really bad form to accept these algorithms for any purpose. I’m sure the group is dealing with legacy applications, but I would like it to really push those application vendors to update their hash functions.
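
The upgrade path is not hard. Here is a small sketch, assuming Python’s standard hashlib: compute SHA-256 as the primary file identifier, and emit MD5/SHA-1 only where an existing legacy hash database still requires them.

```python
import hashlib

def file_digests(path, legacy=False):
    # SHA-256 is the primary identifier; MD5/SHA-1 only for legacy hash sets.
    algos = {"sha256": hashlib.sha256()}
    if legacy:
        algos["md5"] = hashlib.md5()
        algos["sha1"] = hashlib.sha1()
    with open(path, "rb") as f:
        # Hash in 1 MB chunks so large evidence files don't need to fit in memory.
        for chunk in iter(lambda: f.read(1 << 20), b""):
            for h in algos.values():
                h.update(chunk)
    return {name: h.hexdigest() for name, h in algos.items()}

# Example (file name is hypothetical):
# print(file_digests("evidence.img", legacy=True))
```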