All posts by Bruce Schneier

Humble Bundle’s 2020 Cybersecurity Books

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2020/02/humble_bundles_.html

For years, Humble Bundle has been selling great books under a “pay what you can afford” model. This month, they’re featuring as many as nineteen cybersecurity books for as little as $1, including four of mine. These are digital copies, all DRM-free. Part of the money goes to support the EFF or Let’s Encrypt. (The default is 15%, and you can change that.) As an EFF board member, I know that we’ve received a substantial amount from this program in previous years.

Deep Learning to Find Malicious Email Attachments

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2020/02/deep_learning_t.html

Google presented its system for using deep-learning techniques to identify malicious email attachments:

At the RSA security conference in San Francisco on Tuesday, Google’s security and anti-abuse research lead Elie Bursztein will present findings on how the new deep-learning scanner for documents is faring against the 300 billion attachments it has to process each week. It’s challenging to tell the difference between legitimate documents in all their infinite variations and those that have specifically been manipulated to conceal something dangerous. Google says that 63 percent of the malicious documents it blocks each day are different than the ones its systems flagged the day before. But this is exactly the type of pattern-recognition problem where deep learning can be helpful.

[…]

The document analyzer looks for common red flags, probes files if they have components that may have been purposefully obfuscated, and does other checks like examining macros — the tool in Microsoft Word documents that chains commands together in a series and is often used in attacks. The volume of malicious documents that attackers send out varies widely day to day. Bursztein says that since its deployment, the document scanner has been particularly good at flagging suspicious documents sent in bursts by malicious botnets or through other mass distribution methods. He was also surprised to discover how effective the scanner is at analyzing Microsoft Excel documents, a complicated file format that can be difficult to assess.

This is the sort of thing that’s pretty well optimized for machine-learning techniques.
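
Google hasn’t published the model itself, but the general shape of the approach is easy to sketch: turn each document into a vector of red-flag features, then let a classifier learn which combinations are dangerous. Here’s a toy version in Python — the features, patterns, and training data are all invented for illustration, and an ordinary random forest stands in for Google’s deep network:

```python
# Hypothetical sketch of red-flag feature extraction plus a classifier.
# All feature choices and training samples are invented for illustration.
import re
from sklearn.ensemble import RandomForestClassifier

def extract_features(doc_bytes: bytes) -> list[float]:
    """Reduce a document to a small vector of suspicious-pattern indicators."""
    text = doc_bytes.decode("latin-1", errors="replace")
    return [
        float(bool(re.search(r"AutoOpen|Document_Open", text))),  # auto-run macro hook
        float(bool(re.search(r"Chr\(\d+\)", text))),              # char-code obfuscation
        float(bool(re.search(r"Shell|CreateObject", text))),      # command execution
        text.count("&") / max(len(text), 1),                      # string-concat density
    ]

# Tiny synthetic training set: 1 = malicious, 0 = benign.
train_docs = [b"Sub AutoOpen() Shell Chr(99) & Chr(109) End Sub",
              b"Quarterly report text."]
labels = [1, 0]

clf = RandomForestClassifier(n_estimators=10, bootstrap=False, random_state=0)
clf.fit([extract_features(d) for d in train_docs], labels)

suspect = b"Sub Document_Open() CreateObject(Chr(87)) End Sub"
print(clf.predict([extract_features(suspect)]))  # [1] -> flag for deeper scanning
```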

Securing the Internet of Things through Class-Action Lawsuits

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2020/02/securing_the_in.html

This law journal article discusses the role of class-action litigation to secure the Internet of Things.

Basically, the article postulates that (1) market realities will produce insecure IoT devices, and (2) political failures will leave that industry unregulated. Result: insecure IoT. It proposes proactive class-action litigation against manufacturers of unsafe and unsecured IoT devices before those devices cause unnecessary injury or death. It’s a lot to read, but it’s an interesting take on how to secure this otherwise disastrously insecure world.

And it was inspired by my book, Click Here to Kill Everybody.

Firefox Enables DNS over HTTPS

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2020/02/firefox_enables.html

This is good news:

Whenever you visit a website — even if it’s HTTPS enabled — the DNS query that converts the web address into an IP address that computers can read is usually unencrypted. DNS-over-HTTPS, or DoH, encrypts the request so that it can’t be intercepted or hijacked in order to send a user to a malicious site.

[…]

But the move is not without controversy. Last year, an internet industry group branded Mozilla an “internet villain” for pressing ahead with the security feature. The trade group claimed it would make it harder to spot terrorist materials and child abuse imagery. But even some in the security community are split, amid warnings that it could make incident response and malware detection more difficult.

The move to enable DoH by default will no doubt face resistance, but it’s not a technology that browser makers have shied away from. Firefox became the first browser to implement DoH, with others — like Chrome, Edge, and Opera — quickly following suit.

I think DoH is a great idea, and long overdue.
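
If you’re curious what DoH actually looks like, you can make a query yourself. Here’s a minimal Python example against Cloudflare’s public, documented JSON endpoint; Firefox itself speaks the binary wire format of RFC 8484 rather than this JSON variant, but the principle is identical — DNS riding inside an ordinary HTTPS request:

```python
# A DoH lookup via Cloudflare's public JSON API.
import requests

resp = requests.get(
    "https://cloudflare-dns.com/dns-query",
    params={"name": "www.schneier.com", "type": "A"},
    headers={"accept": "application/dns-json"},
    timeout=10,
)
resp.raise_for_status()
for answer in resp.json().get("Answer", []):
    print(answer["name"], answer["data"])  # resolved records, carried over TLS
```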

Slashdot thread. Tech details here. And here’s a good summary of the criticisms.

Russia Is Trying to Tap Transatlantic Cables

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2020/02/russia_is_tryin.html

The Times of London is reporting that Russian agents are in Ireland probing transatlantic communications cables.

Ireland is the landing point for undersea cables which carry internet traffic between America, Britain and Europe. The cables enable millions of people to communicate and allow financial transactions to take place seamlessly.

Garda and military sources believe the agents were sent by the GRU, the military intelligence branch of the Russian armed forces which was blamed for the nerve agent attack in Britain on Sergei Skripal, a former Russian intelligence officer.

This is nothing new. The NSA and GCHQ have been doing this for decades.

Boing Boing post.

Friday Squid Blogging: 13-foot Giant Squid Caught off New Zealand Coast

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2020/02/friday_squid_bl_717.html

It’s probably a juvenile:

Researchers aboard the New Zealand-based National Institute of Water and Atmospheric Research Ltd (NIWA) research vessel Tangaroa were on an expedition to survey hoki, New Zealand’s most valuable commercial fish, in the Chatham Rise — an area of ocean floor to the east of New Zealand that makes up part of the “lost continent” of Zealandia.

At 7.30am on the morning of January 21, scientists were hauling up their trawler net from a depth of 442 meters (1,450 feet) when they were surprised to spot tentacles in amongst their catch. Large tentacles.

According to voyage leader and NIWA fisheries scientist Darren Stevens, who was on watch, it took six members of staff to lift the giant squid out of the net. Despite the squid being 4 meters long and weighing about 110 kilograms (240 pounds), Stevens said he thought the squid was “on the smallish side,” compared to other behemoths caught.

As usual, you can also use this squid post to talk about the security stories in the news that I haven’t covered.

Read my blog posting guidelines here.

Inrupt, Tim Berners-Lee’s Solid, and Me

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2020/02/inrupt_tim_bern.html

For decades, I have been talking about the importance of individual privacy. For almost as long, I have been using the metaphor of digital feudalism to describe how large companies have become central control points for our data. And for maybe half a decade, I have been talking about the world-sized robot that is the Internet of Things, and how digital security is now a matter of public safety. And most recently, I have been writing and speaking about how technologists need to get involved with public policy.

All of this is a long-winded way of saying that I have joined a company called Inrupt that is working to bring Tim Berners-Lee’s distributed data ownership model that is Solid into the mainstream. (I think of Inrupt basically as the Red Hat of Solid.) I joined the Inrupt team last summer as its Chief of Security Architecture, and have been in stealth mode until now.

The idea behind Solid is both simple and extraordinarily powerful. Your data lives in a pod that is controlled by you. Data generated by your things — your computer, your phone, your IoT whatever — is written to your pod. You authorize granular access to that pod to whoever you want for whatever reason you want. Your data is no longer in a bazillion places on the Internet, controlled by you-have-no-idea-who. It’s yours. If you want your insurance company to have access to your fitness data, you grant it through your pod. If you want your friends to have access to your vacation photos, you grant it through your pod. If you want your thermostat to share data with your air conditioner, you give both of them access through your pod.
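
To make that concrete, here’s a toy sketch in Python of the access-control logic a pod implies. This is my own illustrative model, not Inrupt’s actual API — real Solid identifies agents by WebIDs and stores permissions in Web Access Control documents:

```python
# Toy model of the pod idea: data lives in one place you control, and
# access is granted per-resource, per-agent, per-mode. Names are invented.
from dataclasses import dataclass, field

@dataclass
class Pod:
    owner: str
    resources: dict[str, bytes] = field(default_factory=dict)
    # grants[resource][agent] = set of allowed modes, e.g. {"read"}
    grants: dict[str, dict[str, set[str]]] = field(default_factory=dict)

    def write(self, path: str, data: bytes) -> None:
        self.resources[path] = data

    def grant(self, path: str, agent: str, *modes: str) -> None:
        self.grants.setdefault(path, {}).setdefault(agent, set()).update(modes)

    def read(self, path: str, agent: str) -> bytes:
        allowed = self.grants.get(path, {}).get(agent, set())
        if agent != self.owner and "read" not in allowed:
            raise PermissionError(f"{agent} may not read {path}")
        return self.resources[path]

pod = Pod(owner="https://alice.example/profile#me")
pod.write("/fitness/2020-02.json", b'{"steps": 12000}')
pod.grant("/fitness/2020-02.json", "https://insurer.example/app", "read")
print(pod.read("/fitness/2020-02.json", "https://insurer.example/app"))
```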

The ideal would be for this to be completely distributed. Everyone’s pod would be on a computer they own, running on their network. But that’s not how it’s likely to be in real life. Just as you can theoretically run your own email server but in reality you outsource it to Google or whoever, you are likely to outsource your pod to those same sets of companies. But maybe pods will come standard issue in home routers. Even if you do hand your pod over to some company, it’ll be like letting them host your domain name or manage your cell phone number. If you don’t like what they’re doing, you can always move your pod — just like you can take your cell phone number and move to a different carrier. This will give users a lot more power.

I believe this will fundamentally alter the balance of power in a world where everything is a computer, and everything is producing data about you. Either IoT companies are going to enter into individual data sharing agreements, or they’ll all use the same language and protocols. Solid has a very good chance of being that protocol. And security is critical to making all of this work. Just trying to grasp what sort of granular permissions are required, and how the authentication flows might work, is mind-altering. We’re stretching pretty much every Internet security protocol to its limits and beyond, just setting this up.

Building a secure technical infrastructure is largely about policy, but there’s also a wave of technology that can shift things in one direction or the other. Solid is one of those technologies. It moves the Internet away from the overly centralized power of big corporations and governments and toward more rational distributions of power: greater liberty, better privacy, and more freedom for everyone.

I’ve worked with Inrupt’s CEO, John Bruce, at both of my previous companies: Counterpane and Resilient. It’s a little weird working for a start-up that is not a security company. (While security is essential to making Solid work, the technology is fundamentally about the functionality.) It’s also a little surreal working on a project conceived and spearheaded by Tim Berners-Lee. But at this point, I feel that I should only work on things that matter to society. So here I am.

Whatever happens next, it’s going to be a really fun ride.

EDITED TO ADD (2/23): News article. HackerNews thread.

EDITED TO ADD (2/25): More press coverage.

Policy vs Technology

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2020/02/policy_vs_techn.html

Sometime around 1993 or 1994, during the first Crypto Wars, I was part of a group of cryptography experts that went to Washington to advocate for strong encryption. Matt Blaze and Ron Rivest were with me; I don’t remember who else. We met with then-Massachusetts Representative Ed Markey. (He didn’t become a senator until 2013.) Back then, he and Vermont Senator Patrick Leahy were the most knowledgeable on this issue and our biggest supporters against government backdoors. They still are.

Markey was against forcing encrypted phone providers to implement the NSA’s Clipper Chip in their devices, but wanted us to reach a compromise with the FBI regardless. This completely startled us techies, who thought having the right answer was enough. It was at that moment that I learned an important difference between technologists and policy makers. Technologists want solutions; policy makers want consensus.

Since then, I have become more immersed in policy discussions. I have spent more time with legislators, advised advocacy organizations like EFF and EPIC, and worked with policy-minded think tanks in the United States and around the world. I teach cybersecurity policy and technology at the Harvard Kennedy School of Government. My most recent two books, Data and Goliath — about surveillance — and Click Here to Kill Everybody — about IoT security — are really about the policy implications of technology.

Over that time, I have observed many other differences between technologists and policy makers — differences that we in cybersecurity need to understand if we are to translate our technological solutions into viable policy outcomes.

Technologists don’t try to consider all of the use cases of a given technology. We tend to build something for the uses we envision, and hope that others can figure out new and innovative ways to extend what we created. We love it when there is a new use for a technology that we never considered and that changes the world. And while we might be good at security around the use cases we envision, we are regularly blindsided when it comes to new uses or edge cases. (Authentication risks surrounding someone’s intimate partner are a good example.)

Policy doesn’t work that way; it’s specifically focused on use. It focuses on people and what they do. Policy makers can’t create policy around a piece of technology without understanding how it is used — how all of it is used.

Policy is often driven by exceptional events, like the FBI’s desire to break the encryption on the San Bernardino shooter’s iPhone. (The PATRIOT Act is the most egregious example I can think of.) Technologists tend to look at more general use cases, like the overall value of strong encryption to societal security. Policy tends to focus on the past, making existing systems work or correcting wrongs that have happened. It’s hard to imagine policy makers creating laws around VR systems, because they don’t yet exist in any meaningful way. Technology is inherently future focused. Technologists try to imagine better systems, or future flaws in present systems, and work to improve things.

As technologists, we iterate. It’s how we write software. It’s how we field products. We know we can’t get it right the first time, so we have developed all sorts of agile systems to deal with that fact. Policy making is often the opposite. U.S. federal laws take months or years to negotiate and pass, and after that the issue doesn’t get addressed again for a decade or more. It is much more critical to get it right the first time, because the effects of getting it wrong are long lasting. (See, for example, parts of the GDPR.) Sometimes regulatory agencies can be more agile. The courts can also iterate policy, but it’s slower.

Along similar lines, the two groups work in very different time frames. Engineers, conditioned by Moore’s law, have long thought of 18 months as the maximum time to roll out a new product, and now think in terms of continuous deployment of new features. As I said previously, policy makers tend to think in terms of multiple years to get a law or regulation in place, and then more years as the case law builds up around it so everyone knows what it really means. It’s like tortoises and hummingbirds.

Technology is inherently global. It is often developed with local sensibilities according to local laws, but it necessarily has global reach. Policy is always jurisdictional. This difference is causing all sorts of problems for the global cloud services we use every day. The providers are unable to operate their global systems in compliance with more than 200 different — and sometimes conflicting — national requirements. Policy makers are often unimpressed with claims of inability; laws are laws, they say, and if Facebook can translate its website into French for the French, it can also implement their national laws.

Technology and policy both use concepts of trust, but differently. Technologists tend to think of trust in terms of controls on behavior. We’re getting better — NIST’s recent work on trust is a good example — but we have a long way to go. For example, Google’s Trust and Safety Department does a lot of AI and ethics work largely focused on technological controls. Policy makers think of trust in more holistic societal terms: trust in institutions, trust as the ability not to worry about adverse outcomes, consumer confidence. This dichotomy explains how techies can claim bitcoin is trusted because of the strong cryptography, but policy makers can’t imagine calling a system trustworthy when you lose all your money if you forget your encryption key.

Policy is how society mediates how individuals interact with society. Technology has the potential to change how individuals interact with society. The conflict between these two causes considerable friction, as technologists want policy makers to get out of the way and not stifle innovation, and policy makers want technologists to stop moving fast and breaking so many things.

Finally, techies know that code is law — that the restrictions and limitations of a technology are more fundamental than any human-created legal anything. Policy makers know that law is law, and tech is just tech. We can see this in the tension between applying existing law to new technologies and creating new law specifically for those new technologies.

Yes, these are all generalizations and there are exceptions. It’s also not all either/or. Great technologists and policy makers can see the other perspectives. The best policy makers know that for all their work toward consensus, they won’t make progress by redefining pi as three. Thoughtful technologists look beyond the immediate user demands to the ways attackers might abuse their systems, and design against those adversaries as well. These aren’t two alien species engaging in first contact, but cohorts who can each learn and borrow tools from the other. Too often, though, neither party tries.

In October, I attended the first ACM Symposium on Computer Science and the Law. Google counsel Brian Carver talked about his experience with the few computer science grad students who would attend his Intellectual Property and Cyberlaw classes every year at UC Berkeley. One of the first things he would do was give the students two different cases to read. The cases had nearly identical facts, and the judges who’d ruled on them came to exactly opposite conclusions. The law students took this in stride; it’s the way the legal system works when it’s wrestling with a new concept or idea. But it shook the computer science students. They were appalled that there wasn’t a single correct answer.

But that’s not how law works, and that’s not how policy works. As the technologies we’re creating become more central to society, and as we in technology continue to move into the public sphere and become part of the increasingly important policy debates, it is essential that we learn these lessons. Gone are the days when we were creating purely technical systems and our work ended at the keyboard and screen. Now we’re building complex socio-technical systems that are literally creating a new world. And while it’s easy to dismiss policy makers as doing it wrong, it’s important to understand that they’re not. Policy making has been around a lot longer than the Internet or computers or any technology. And the essential challenges of this century will require both groups to work together.

This essay previously appeared in IEEE Security & Privacy.

Hacking McDonald’s for Free Food

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2020/02/hacking_mcdonal.html

This hack was possible because the McDonald’s app didn’t authenticate the server, and just did whatever the server told it to do:

McDonald’s receipts in Germany end with a link to a survey page. Once you take the survey, you receive a coupon code for a free small beverage, redeemable within a month. One day, David happened to be checking out how the website’s coding was structured when he noticed that the information triggering the server to issue a new voucher was always the same. That meant he could build a programme replicating the code, as if someone was taking the survey again and again.

[…]

At the McDonald’s in East Berlin, David began the demonstration by setting up an internet hotspot with his smartphone. Lenny connected with a second phone and a laptop, then turned the laptop into a proxy server connected to both phones. He opened the McDonald’s app and entered a voucher code generated by David’s programme. The next step was ordering the food for a total of €17. The bill on the app was transmitted to the laptop, which set all prices to zero through a programme created by Lenny, and sent the information back to the app. After tapping “Complete and pay 0.00 euros”, we simply received our pick-up number. It had worked.

The flaw was fixed late last year.
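
The durable fix, beyond having the app properly authenticate the server (TLS with real certificate validation, or pinning), is to never trust a client-supplied bill: the server recomputes the total from its own price list and rejects anything that doesn’t match. A minimal sketch, with the menu and field names invented:

```python
# Sketch of the server-side fix: the price list is authoritative, and a
# total arriving over the network is checked, never honored.
MENU_PRICES = {"burger": 4.50, "fries": 2.50}  # lives only on the server

def settle_order(items: list[str], client_claimed_total: float) -> float:
    server_total = sum(MENU_PRICES[item] for item in items)
    if abs(server_total - client_claimed_total) > 0.001:
        raise ValueError("client total does not match server prices; rejecting")
    return server_total

print(settle_order(["burger", "fries"], 7.00))   # ok: 7.0

try:
    settle_order(["burger", "fries"], 0.00)      # the zeroed bill from the proxy
except ValueError as err:
    print("rejected:", err)
```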

Voatz Internet Voting App Is Insecure

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2020/02/voatz_internet_.html

This paper describes the flaws in the Voatz Internet voting app: “The Ballot is Busted Before the Blockchain: A Security Analysis of Voatz, the First Internet Voting Application Used in U.S. Federal Elections.”

Abstract: In the 2018 midterm elections, West Virginia became the first state in the U.S. to allow select voters to cast their ballot on a mobile phone via a proprietary app called “Voatz.” Although there is no public formal description of Voatz’s security model, the company claims that election security and integrity are maintained through the use of a permissioned blockchain, biometrics, a mixnet, and hardware-backed key storage modules on the user’s device. In this work, we present the first public security analysis of Voatz, based on a reverse engineering of their Android application and the minimal available documentation of the system. We performed a clean-room reimplementation of Voatz’s server and present an analysis of the election process as visible from the app itself.

We find that Voatz has vulnerabilities that allow different kinds of adversaries to alter, stop, or expose a user’s vote, including a sidechannel attack in which a completely passive network adversary can potentially recover a user’s secret ballot. We additionally find that Voatz has a number of privacy issues stemming from their use of third party services for crucial app functionality. Our findings serve as a concrete illustration of the common wisdom against Internet voting, and of the importance of transparency to the legitimacy of elections.

News articles.

The company’s response is a perfect illustration of why non-computer, non-security companies have no idea what they’re doing, and should not be trusted with any form of security.
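
The side-channel finding illustrates a general point worth remembering: encryption hides content, not metadata. Without padding, ciphertext length tracks plaintext length, so a completely passive observer can sometimes tell messages apart. A toy demonstration with the Python cryptography library — the ballot strings are invented, and the paper’s actual attack is more involved:

```python
# Why "encrypted" doesn't mean "hidden": AES-GCM ciphertext length is
# plaintext length plus a fixed 16-byte tag, so length leaks the message.
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

key = AESGCM.generate_key(bit_length=128)
aead = AESGCM(key)

for ballot in [b"YES", b"ABSTAIN"]:
    ciphertext = aead.encrypt(os.urandom(12), ballot, None)  # fresh nonce per message
    print(ballot, len(ciphertext))  # 19 vs. 23 bytes: length alone reveals the choice
```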

Upcoming Speaking Engagements

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2020/02/upcoming_speaki_12.html

This is a current list of where and when I am scheduled to speak:

  • I’ll be at RSA Conference 2020 in San Francisco. On Wednesday, February 26, at 2:50 PM, I’ll be part of a panel on “How to Reduce Supply Chain Risk: Lessons from Efforts to Block Huawei.” On Thursday, February 27, at 9:20 AM, I’m giving a keynote on “Hacking Society.”
  • I’m speaking at SecIT by Heise in Hannover, Germany on March 26, 2020.

The list is maintained on this page.

DNSSEC Keysigning Ceremony Postponed Because of Locked Safe

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2020/02/dnssec_keysigni.html

Interesting collision of real-world and Internet security:

The ceremony sees several trusted internet engineers (a minimum of three and up to seven) from across the world descend on one of two secure locations — one in El Segundo, California, just south of Los Angeles, and the other in Culpeper, Virginia — both in America, every three months.

Once in place, they run through a lengthy series of steps and checks to cryptographically sign the digital key pairs used to secure the internet’s root zone. (Here’s Cloudflare’s in-depth explanation, and IANA’s PDF step-by-step guide.)

[…]

Only specific named people are allowed to take part in the ceremony, and they have to pass through several layers of security — including doors that can only be opened through fingerprint and retinal scans — before getting in the room where the ceremony takes place.

Staff open up two safes, each roughly one-metre across. One contains a hardware security module that contains the private portion of the KSK. The module is activated, allowing the KSK private key to sign keys, using smart cards assigned to the ceremony participants. These credentials are stored in deposit boxes and tamper-proof bags in the second safe. Each step is checked by everyone else, and the event is livestreamed. Once the ceremony is complete — which takes a few hours — all the pieces are separated, sealed, and put back in the safes inside the secure facility, and everyone leaves.

But during what was apparently a check on the system on Tuesday night — the day before the ceremony planned for 1300 PST (2100 UTC) Wednesday — IANA staff discovered that they couldn’t open one of the two safes. One of the locking mechanisms wouldn’t retract and so the safe stayed stubbornly shut.

As soon as they discovered the problem, everyone involved, including those who had flown in for the occasion, were told that the ceremony was being postponed. Thanks to the complexity of the problem — a jammed safe with critical and sensitive equipment inside — they were told it wasn’t going to be possible to hold the ceremony on the back-up date of Thursday, either.
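
All that physical theater protects a cryptographically simple act: an offline root key signs subordinate keys, so verifiers only ever need to trust the root’s public half. Here is the idea in miniature with the Python cryptography library — a toy analogue, since the real KSK never leaves its hardware security module and signs DNSKEY record sets rather than raw key bytes:

```python
# Miniature analogue of the key-signing ceremony: an offline root key
# signs a subordinate key, and verification needs only the root public key.
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.hazmat.primitives.serialization import Encoding, PublicFormat

root_key = Ed25519PrivateKey.generate()   # analogue of the KSK, locked in the safe
zone_key = Ed25519PrivateKey.generate()   # analogue of a ZSK, rotated far more often

zone_pub = zone_key.public_key().public_bytes(Encoding.Raw, PublicFormat.Raw)
signature = root_key.sign(zone_pub)       # the ceremony's essential act

# Anyone holding only the root public key can verify the chain of trust:
root_key.public_key().verify(signature, zone_pub)  # raises InvalidSignature if forged
print("zone key verified against the root of trust")
```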

Companies that Scrape Your Email

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2020/02/companies_that_.html

Motherboard has a long article on apps — Edison, Slice, and Cleanfox — that spy on your email by scraping your inbox, and then sell that information to others:

Some of the companies listed in the J.P. Morgan document sell data sourced from “personal inboxes,” the document adds. A spokesperson for J.P. Morgan Research, the part of the company that created the document, told Motherboard that the research “is intended for institutional clients.”

That document describes Edison as providing “consumer purchase metrics including brand loyalty, wallet share, purchase preferences, etc.” The document adds that the “source” of the data is the “Edison Email App.”

[…]

A dataset obtained by Motherboard shows what some of the information pulled from free email app users’ inboxes looks like. A spreadsheet containing data from Rakuten’s Slice, an app that scrapes a user’s inbox so they can better track packages or get their money back once a product goes down in price, contains the item that an app user bought from a specific brand, what they paid, and a unique identification code for each buyer.

Crypto AG Was Owned by the CIA

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2020/02/crypto_ag_was_o.html

The Swiss cryptography firm Crypto AG sold equipment to governments and militaries around the world for decades after World War II. It was owned by the CIA:

But what none of its customers ever knew was that Crypto AG was secretly owned by the CIA in a highly classified partnership with West German intelligence. These spy agencies rigged the company’s devices so they could easily break the codes that countries used to send encrypted messages.

This isn’t really news. We have long known that Crypto AG was backdooring crypto equipment for the Americans. What is new is the formerly classified documents describing the details:

The decades-long arrangement, among the most closely guarded secrets of the Cold War, is laid bare in a classified, comprehensive CIA history of the operation obtained by The Washington Post and ZDF, a German public broadcaster, in a joint reporting project.

The account identifies the CIA officers who ran the program and the company executives entrusted to execute it. It traces the origin of the venture as well as the internal conflicts that nearly derailed it. It describes how the United States and its allies exploited other nations’ gullibility for years, taking their money and stealing their secrets.

The operation, known first by the code name “Thesaurus” and later “Rubicon,” ranks among the most audacious in CIA history.

EDITED TO ADD: More news articles. And a 1995 story on this. It’s not new news.

Apple’s Tracking-Prevention Feature in Safari has a Privacy Bug

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2020/02/apples_tracking.html

Last month, engineers at Google published a very curious privacy bug in Apple’s Safari web browser. Apple’s Intelligent Tracking Prevention, a feature designed to reduce user tracking, has vulnerabilities that themselves allow user tracking. Some details:

ITP detects and blocks tracking on the web. When you visit a few websites that happen to load the same third-party resource, ITP detects the domain hosting the resource as a potential tracker and from then on sanitizes web requests to that domain to limit tracking. Tracker domains are added to Safari’s internal, on-device ITP list. When future third-party requests are made to a domain on the ITP list, Safari will modify them to remove some information it believes may allow tracking the user (such as cookies).

[…]

The details should come as a surprise to everyone because it turns out that ITP could effectively be used for:

  • information leaks: detecting websites visited by the user (web browsing history hijacking, stealing a list of visited sites)
  • tracking the user with ITP, making the mechanism function like a cookie
  • fingerprinting the user: in ways similar to the HSTS fingerprint, but perhaps a bit better

I am sure we all agree that we would not expect a privacy feature meant to protect against tracking to effectively enable tracking, and also to accidentally allow any website out there to steal its visitors’ web browsing history. But web architecture is complex, and the consequence is that this is exactly the case.

Apple fixed this vulnerability in December, a month before Google published its findings.

If there’s any lesson here, it’s that privacy is hard — and that privacy engineering is even harder. It’s not that we shouldn’t try, but we should recognize that it’s easy to get it wrong.
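
One way to see why a feature like ITP can turn into a cookie: any global, per-device list that websites can both influence and probe is, in effect, cross-site storage — and cross-site storage is all a tracking identifier needs. Here’s a toy Python simulation of that idea; the domains and 16-bit identifier are invented, and the hard part Google documented is performing the set and probe steps in real Safari:

```python
# Toy model of the "ITP as a cookie" problem: encode an identifier as set
# membership in a shared, observable list, then read it back from any site.
tracker_list: set[str] = set()   # stands in for the on-device ITP list

def set_bit(i: int) -> None:     # attacker page induces domain i onto the list
    tracker_list.add(f"bit{i}.tracker.example")

def get_bit(i: int) -> bool:     # any site probes whether domain i is listed
    return f"bit{i}.tracker.example" in tracker_list

user_id = 0b1011001110001101
for i in range(16):
    if (user_id >> i) & 1:
        set_bit(i)

# A completely different site later reconstructs the same identifier:
recovered = sum(get_bit(i) << i for i in range(16))
assert recovered == user_id
print(f"re-identified user: {recovered:#06x}")
```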