All posts by Bruce Schneier

Mail Fishing

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2019/03/mail_fishing.html

Not email, paper mail:

Thieves, often at night, use string to lower glue-covered rodent traps or bottles coated with an adhesive down the chute of a sidewalk mailbox. This bait attaches to the envelopes inside, and the fish in this case — mail containing gift cards, money orders or checks, which can be altered with chemicals and cashed — are reeled out slowly.

In response, the US Post Office is introducing a more secure mailbox:

The mail slots are only large enough for letters, meaning sending even small packages will require a trip to the post office. The opening is also equipped with a mechanism that grabs at a letter once inserted, making it difficult to retract.

The crime has become more common in the past few years.

Friday Squid Blogging: New Research on Squid Camouflage

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2019/03/friday_squid_bl_669.html

From the New York Times:

Now, a paper published last week in Nature Communications suggests that their chromatophores, previously thought to be mainly pockets of pigment embedded in their skin, are also equipped with tiny reflectors made of proteins. These reflectors aid the squid to produce such a wide array of colors, including iridescent greens and blues, within a second of passing in front of a new background. The research reveals that by using tricks found in other parts of the animal kingdom — like shimmering butterflies and peacocks — squid are able to combine multiple approaches to produce their vivid camouflage.

Researchers studied Doryteuthis pealeii, or the longfin squid.

As usual, you can also use this squid post to talk about the security stories in the news that I haven’t covered.

Read my blog posting guidelines here.

First Look Media Shutting Down Access to Snowden NSA Archives

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2019/03/first_look_medi.html

The Daily Beast is reporting that First Look Media — home of The Intercept and Glenn Greenwald — is shutting down access to the Snowden archives.

The Intercept had been the home for Greenwald’s subset of Snowden’s NSA documents since 2014, after he parted ways with the Guardian the year before. I don’t know the details of how the archive was stored, but it was offline and well secured — and it was available to journalists for research purposes. Many stories were published based on those archives over the years, albeit fewer in recent years.

The article doesn’t say what “shutting down access” means, but my guess is that it means that First Look Media will no longer make the archive available to outside journalists, and probably not to staff journalists, either. Reading between the lines, I think they will delete what they have.

This doesn’t mean that we’re done with the documents. Glenn Greenwald tweeted:

Both Laura & I have full copies of the archives, as do others. The Intercept has given full access to multiple media orgs, reporters & researchers. I’ve been looking for the right partner — an academic institution or research facility — that has the funds to robustly publish.

I’m sure there are still stories in those NSA documents, but with many of them a decade or more old, they are increasingly history and decreasingly current events. Every capability discussed in the documents needs to be read with a “and then they had ten years to improve this” mentality.

Eventually it’ll all become public, but not before it is 100% history and 0% current events.

Zipcar Disruption

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2019/03/zipcar_disrupti.html

This isn’t a security story, but it easily could have been. Last Saturday, Zipcar had a system outage: “an outage experienced by a third party telecommunications vendor disrupted connections between the company’s vehicles and its reservation software.”

That didn’t just mean people couldn’t get cars they reserved. Sometimes it meant they couldn’t get the cars they were already driving to work:

Andrew Jones of Roxbury was stuck on hold with customer service for at least a half-hour while he and his wife waited inside a Zipcar that would not turn back on after they stopped to fill it up with gas.

“We were just waiting and waiting for the call back,” he said.

Customers in other states, including New York, California, and Oregon, reported a similar problem. One user who tweeted about issues with a Zipcar vehicle listed his location as Toronto.

Some, like Jones, stayed with the inoperative cars. Others, including Tina Penman in Portland, Ore., and Heather Reid in Cambridge, abandoned their Zipcar. Penman took an Uber home, while Reid walked from the grocery store back to her apartment.

This is a reliability issue that turns into a safety issue. Systems that touch the direct physical world like this need better fail-safe defaults.
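To make the principle concrete, here is a minimal sketch, in Python, of what a fail-safe default could look like for a connected car: if the reservation backend cannot be reached, the vehicle falls back to its most recently cached authorization instead of immobilizing itself. The function names and the 24-hour grace period are assumptions made up for this illustration, not anything Zipcar actually implements.

```python
import time

AUTH_GRACE_SECONDS = 24 * 60 * 60  # assumption: cached authorization is honored for 24 hours


def backend_check_reservation():
    """Stand-in for the real reservation service; here it simulates an outage."""
    raise ConnectionError("telecom vendor outage")


def may_start_engine(reservation_cache, now=None):
    """Fail-safe default: if the backend is unreachable, trust the most
    recently cached reservation instead of stranding the driver."""
    now = now or time.time()
    try:
        return backend_check_reservation()
    except ConnectionError:
        confirmed_at = reservation_cache.get("confirmed_at")
        return confirmed_at is not None and now - confirmed_at < AUTH_GRACE_SECONDS


# A driver whose reservation was confirmed an hour ago keeps a working car
# even while the vendor link is down.
print(may_start_engine({"confirmed_at": time.time() - 3600}))  # True
```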

An Argument that Cybersecurity Is Basically Okay

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2019/03/an_argument_tha.html

Andrew Odlyzko’s new essay is worth reading — “Cybersecurity is not very important”:

Abstract: There is a rising tide of security breaches. There is an even faster rising tide of hysteria over the ostensible reason for these breaches, namely the deficient state of our information infrastructure. Yet the world is doing remarkably well overall, and has not suffered any of the oft-threatened giant digital catastrophes. This continuing general progress of society suggests that cyber security is not very important. Adaptations to cyberspace of techniques that worked to protect the traditional physical world have been the main means of mitigating the problems that occurred. This “chewing gum and baling wire” approach is likely to continue to be the basic method of handling problems that arise, and to provide adequate levels of security.

I am reminded of these two essays. And, as I said in the blog post about those two essays:

This is true, and is something I worry will change in a world of physically capable computers. Automation, autonomy, and physical agency will make computer security a matter of life and death, and not just a matter of data.

CAs Reissue Over One Million Weak Certificates

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2019/03/cas_reissue_ove.html

Turns out that the software a bunch of CAs used to generate public-key certificates was flawed: they created random serial numbers with only 63 bits instead of the required 64. That may not seem like a big deal to the layman, but that one missing bit means the serial numbers carry only 63 of the required 64 bits of entropy, which halves the number of possible values. This really isn’t a security problem; the serial numbers are there to protect against attacks that involve weak hash functions, and we don’t allow those weak hash functions anymore. Still, it’s a good thing that the CAs are reissuing the certificates. The point of a standard is that it’s to be followed.
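For readers who want to see the mechanics, here is a minimal Python sketch of how this class of bug arises. The baseline requirement is 64 bits of CSPRNG output in the serial number; a common flawed pattern is clearing the top bit so the value fits in a positive signed 64-bit integer, which leaves only 63 random bits. This is an illustration of the bug class, not the actual code any CA ran.

```python
import secrets


def serial_compliant() -> int:
    # Compliant: all 64 bits of the serial number come from a CSPRNG.
    return secrets.randbits(64)


def serial_flawed() -> int:
    # Flawed pattern (illustrative): masking off the top bit to keep the
    # value a positive signed 64-bit integer leaves only 63 random bits.
    return secrets.randbits(64) & 0x7FFF_FFFF_FFFF_FFFF


# One fewer bit of entropy means half as many possible serial numbers.
print(f"compliant space: {2**64:,}")
print(f"flawed space:    {2**63:,}")
```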

I Was Cited in a Court Decision

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2019/03/i_was_cited_in_.html

An article I co-wrote — my first law journal article — was cited by the Massachusetts Supreme Judicial Court — the state supreme court — in a case on compelled decryption.

Here’s the first citation, in footnote 1:

We understand the word “password” to be synonymous with other terms that cell phone users may be familiar with, such as Personal Identification Number or “passcode.” Each term refers to the personalized combination of letters or digits that, when manually entered by the user, “unlocks” a cell phone. For simplicity, we use “password” throughout. See generally, Kerr & Schneier, Encryption Workarounds, 106 Geo. L.J. 989, 990, 994, 998 (2018).

And here’s the second, in footnote 5:

We recognize that ordinary cell phone users are likely unfamiliar with the complexities of encryption technology. For instance, although entering a password “unlocks” a cell phone, the password itself is not the “encryption key” that decrypts the cell phone’s contents. See Kerr & Schneier, supra at 995. Rather, “entering the [password] decrypts the [encryption] key, enabling the key to be processed and unlocking the phone. This two-stage process is invisible to the casual user.” Id. Because the technical details of encryption technology do not play a role in our analysis, they are not worth belaboring. Accordingly, we treat the entry of a password as effectively decrypting the contents of a cell phone. For a more detailed discussion of encryption technology, see generally Kerr & Schneier, supra.
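To make the court’s two-stage description concrete, here is a minimal Python sketch of the general idea (using PBKDF2 and AES-GCM from the `cryptography` package): a random data-encryption key protects the contents, and the passcode only derives the key that wraps it. Every name and parameter here is illustrative; real phones rely on hardware-backed key hierarchies rather than anything this simple.

```python
import hashlib
import os

from cryptography.hazmat.primitives.ciphers.aead import AESGCM

# Device setup: a random data-encryption key (DEK) encrypts the contents.
dek = AESGCM.generate_key(bit_length=256)
contents_nonce = os.urandom(12)
ciphertext = AESGCM(dek).encrypt(contents_nonce, b"user data", None)

# The passcode never encrypts the contents directly; it derives a
# key-encryption key (KEK) that wraps the DEK.
salt = os.urandom(16)
kek = hashlib.pbkdf2_hmac("sha256", b"123456", salt, 600_000)
wrap_nonce = os.urandom(12)
wrapped_dek = AESGCM(kek).encrypt(wrap_nonce, dek, None)

# "Unlocking": entering the passcode re-derives the KEK, unwraps the DEK,
# and only then can the contents be decrypted; this is the invisible
# second stage the court refers to.
kek_again = hashlib.pbkdf2_hmac("sha256", b"123456", salt, 600_000)
dek_again = AESGCM(kek_again).decrypt(wrap_nonce, wrapped_dek, None)
assert AESGCM(dek_again).decrypt(contents_nonce, ciphertext, None) == b"user data"
```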

Critical Flaw in Swiss Internet Voting System

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2019/03/critical_flaw_i.html

Researchers have found a critical flaw in the Swiss Internet voting system. I was going to write an essay about how this demonstrates that Internet voting is a stupid idea and should never be attempted — and that this system in particular should never be deployed, even if the found flaw is fixed — but Cory Doctorow beat me to it:

The belief that companies can be trusted with this power defies all logic, but it persists. Someone found Swiss Post’s embrace of the idea too odious to bear, and they leaked the source code that Swiss Post had shared under its nondisclosure terms, and then an international team of some of the world’s top security experts (including some of our favorites, like Matthew Green) set about analyzing that code, and (as every security expert who doesn’t work for an e-voting company has predicted since the beginning of time), they found an incredibly powerful bug that would allow a single untrusted party at Swiss Post to undetectably alter the election results.

And, as everyone who’s ever advocated for the right of security researchers to speak in public without permission from the companies whose products they were assessing has predicted since the beginning of time, Swiss Post and Scytl downplayed the importance of this objectively very, very, very important bug. Swiss Post’s position is that since the bug only allows elections to be stolen by Swiss Post employees, it’s not a big deal, because Swiss Post employees wouldn’t steal an election.

But when Swiss Post agreed to run the election, they promised an e-voting system based on “zero knowledge” proofs that would allow voters to trust the outcome of the election without having to trust Swiss Post. Swiss Post is now moving the goalposts, saying that it wouldn’t be such a big deal if you had to trust Swiss Post implicitly to trust the outcome of the election.

You might be thinking, “Well, what is the big deal? If you don’t trust the people administering an election, you can’t trust the election’s outcome, right?” Not really: we design election systems so that multiple, uncoordinated people all act as checks and balances on each other. To suborn a well-run election takes massive coordination at many polling- and counting-places, as well as independent scrutineers from different political parties, as well as outside observers, etc.

Read the whole thing. It’s excellent.

More info.

DARPA Is Developing an Open-Source Voting System

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2019/03/darpa_is_develo.html

This sounds like a good development:

…a new $10 million contract the Defense Department’s Defense Advanced Research Projects Agency (DARPA) has launched to design and build a secure voting system that it hopes will be impervious to hacking.

The first-of-its-kind system will be designed by an Oregon-based firm called Galois, a longtime government contractor with experience in designing secure and verifiable systems. The system will use fully open source voting software, instead of the closed, proprietary software currently used in the vast majority of voting machines, which no one outside of voting machine testing labs can examine. More importantly, it will be built on secure open source hardware, made from special secure designs and techniques developed over the last year as part of a special program at DARPA. The voting system will also be designed to create fully verifiable and transparent results so that voters don’t have to blindly trust that the machines and election officials delivered correct results.

But DARPA and Galois won’t be asking people to blindly trust that their voting systems are secure — as voting machine vendors currently do. Instead they’ll be publishing source code for the software online and bringing prototypes of the systems to the Def Con Voting Village this summer and next, so that hackers and researchers will be able to freely examine the systems themselves and conduct penetration tests to gauge their security. They’ll also be working with a number of university teams over the next year to have them examine the systems in formal test environments.

Judging Facebook’s Privacy Shift

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2019/03/judging_faceboo.html

Facebook is making a new and stronger commitment to privacy. Last month, the company hired three of its most vociferous critics and installed them in senior technical positions. And on Wednesday, Mark Zuckerberg wrote that the company will pivot to focus on private conversations over the public sharing that has long defined the platform, even while conceding that “frankly we don’t currently have a strong reputation for building privacy protective services.”

There is ample reason to question Zuckerberg’s pronouncement: The company has made — and broken — many privacy promises over the years. And if you read his 3,000-word post carefully, Zuckerberg says nothing about changing Facebook’s surveillance capitalism business model. All the post discusses is making private chats more central to the company, which seems to be a play for increased market dominance and to counter the Chinese company WeChat.

In security and privacy, the devil is always in the details — and Zuckerberg’s post provides none. But we’ll take him at his word and try to fill in some of the details here. What follows is a list of changes we should expect if Facebook is serious about changing its business model and improving user privacy.

How Facebook treats people on its platform

Increased transparency over advertiser and app access to user data. Today, Facebook users can download and view much of the data the company has about them. This is important, but it doesn’t go far enough. The company could be more transparent about what data it shares with advertisers and others and how it allows advertisers to select users they show ads to. Facebook could use its substantial skills in usability testing to help people understand the mechanisms advertisers use to show them ads or the reasoning behind what it chooses to show in user timelines. It could deliver on promises in this area.

Better — and more usable — privacy options. Facebook users have limited control over how their data is shared with other Facebook users and almost no control over how it is shared with Facebook’s advertisers, which are the company’s real customers. Moreover, the controls are buried deep behind complex and confusing menu options. To be fair, some of this is because privacy is complex, and it’s hard to understand the results of different options. But much of this is deliberate; Facebook doesn’t want its users to make their data private from other users.

The company could give people better control over how — and whether — their data is used, shared, and sold. For example, it could allow users to turn off individually targeted news and advertising. By this, we don’t mean simply making those advertisements invisible; we mean turning off the data flows into those tailoring systems. Finally, since most users stick to the default options when it comes to configuring their apps, a changing Facebook could tilt those defaults toward more privacy, requiring less tailoring most of the time.

More user protection from stalking. “Facebook stalking” is often thought of as “stalking light,” or “harmless.” But stalkers are rarely harmless. Facebook should acknowledge this class of misuse and work with experts to build tools that protect all of its users, especially its most vulnerable ones. Such tools should guide normal people away from creepiness and give victims power and flexibility to enlist aid from sources ranging from advocates to police.

Fully ending real-name enforcement. Facebook’s real-names policy, requiring people to use their actual legal names on the platform, hurts people such as activists, victims of intimate partner violence, police officers whose work makes them targets, and anyone with a public persona who wishes to have control over how they identify to the public. There are many ways Facebook can improve on this, from ending enforcement to allowing verified pseudonyms for everyone, not just celebrities like Lady Gaga. Doing so would mark a clear shift.

How Facebook runs its platform

Increased transparency of Facebook’s business practices. One of the hard things about evaluating Facebook is the effort needed to get good information about its business practices. When violations are exposed by the media, as they regularly are, we are all surprised at the different ways Facebook violates user privacy. Most recently, the company used phone numbers provided for two-factor authentication for advertising and networking purposes. Facebook needs to be both explicit and detailed about how and when it shares user data. In fact, a move from discussing “sharing” to discussing “transfers,” “access to raw information,” and “access to derived information” would be a visible improvement.

Increased transparency regarding censorship rules. Facebook makes choices about what content is acceptable on its site. Those choices are controversial, made by thousands of low-paid workers quickly applying unclear rules. These are tremendously hard problems without clear solutions. Even obvious rules like banning hateful words run into challenges when people try to legitimately discuss certain important topics. Whatever Facebook does in this regard, the company needs to be more transparent about its processes. It should allow regulators and the public to audit the company’s practices. Moreover, Facebook should share any innovative engineering solutions with the world, much as it currently shares its data center engineering.

Better security for collected user data. There have been numerous examples of attackers targeting cloud service platforms to gain access to user data. Facebook has a large and skilled product security team that says some of the right things. That team needs to be involved in the design trade-offs for features and not just review the near-final designs for flaws. Shutting down a feature based on internal security analysis would be a clear message.

Better data security so Facebook sees less. Facebook eavesdrops on almost every aspect of its users’ lives. On the other hand, WhatsApp — purchased by Facebook in 2014 — provides users with end-to-end encrypted messaging. While Facebook knows who is messaging whom and how often, Facebook has no way of learning the contents of those messages. Recently, Facebook announced plans to combine WhatsApp, Facebook Messenger, and Instagram, extending WhatsApp’s security to the consolidated system. Changing course here would be a dramatic and negative signal.

Collecting less data from outside of Facebook. Facebook doesn’t just collect data about you when you’re on the platform. Because its “like” button is on so many other pages, the company can collect data about you when you’re not on Facebook. It even collects what it calls “shadow profiles” — data about you even if you’re not a Facebook user. This data is combined with other surveillance data the company buys, including health and financial data. Collecting and saving less of this data would be a strong indicator of a new direction for the company.

Better use of Facebook data to prevent violence. There is a trade-off between Facebook seeing less and Facebook doing more to prevent hateful and inflammatory speech. Dozens of people have been killed by mob violence because of fake news spread on WhatsApp. If Facebook were doing a convincing job of controlling fake news without end-to-end encryption, then we would expect to hear how it could use patterns in metadata to handle encrypted fake news.

How Facebook manages for privacy

Create a team measured on privacy and trust. Where companies spend their money tells you what matters to them. Facebook has a large and important growth team, but what team, if any, is responsible for privacy, not as a matter of compliance or pushing the rules, but for engineering? Transparency in how it is staffed relative to other teams would be telling.

Hire a senior executive responsible for trust. Facebook’s current team has been focused on growth and revenue. Its one chief security officer, Alex Stamos, was not replaced when he left in 2018, which may indicate that having an advocate for security on the leadership team led to debate and disagreement. Retaining a voice for security and privacy issues at the executive level, before those issues affected users, was a good thing. Now that responsibility is diffuse. It’s unclear how Facebook measures and assesses its own progress and who might be held accountable for failings. Facebook can begin the process of fixing this by designating a senior executive who is responsible for trust.

Engage with regulators. Much of Facebook’s posturing seems to be an attempt to forestall regulation. Facebook sends lobbyists to Washington and other capitals, and until recently the company sent support staff to politicians’ offices. It has run secret lobbying campaigns against privacy laws. And Facebook has repeatedly violated a 2011 Federal Trade Commission consent order regarding user privacy. Regulating big technical projects is not easy. Most of the people who understand how these systems work understand them because they build them. Societies will regulate Facebook, and the quality of that regulation requires real education of legislators and their staffs. While businesses often want to avoid regulation, any focus on privacy will require strong government oversight. If Facebook is serious about privacy being a real interest, it will accept both government regulation and community input.

User privacy is traditionally against Facebook’s core business interests. Advertising is its business model, and targeted ads sell better and more profitably — and that requires users to engage with the platform as much as possible. Increased pressure on Facebook to manage propaganda and hate speech could easily lead to more surveillance. But there is pressure in the other direction as well, as users equate privacy with increased control over how they present themselves on the platform.

We don’t expect Facebook to abandon its advertising business model, relent in its push for monopolistic dominance, or fundamentally alter its social networking platforms. But the company can give users important privacy protections and controls without abandoning surveillance capitalism. While some of these changes will reduce profits in the short term, we hope Facebook’s leadership realizes that they are in the best long-term interest of the company.

Facebook talks about community and bringing people together. These are admirable goals, and there’s plenty of value (and profit) in having a sustainable platform for connecting people. But as long as the most important measure of success is short-term profit, doing things that help strengthen communities will fall by the wayside. Surveillance, which allows individually targeted advertising, will be prioritized over user privacy. Outrage, which drives engagement, will be prioritized over feelings of belonging. And corporate secrecy, which allows Facebook to evade both regulators and its users, will be prioritized over societal oversight. If Facebook now truly believes that these latter options are critical to its long-term success as a company, we welcome the changes that are forthcoming.

This essay was co-authored with Adam Shostack, and originally appeared on Medium OneZero. We wrote a similar essay in 2002 about judging Microsoft’s then newfound commitment to security.

On Surveillance in the Workplace

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2019/03/on_surveillance.html

Data & Society just published a report entitled “Workplace Monitoring & Surveillance“:

This explainer highlights four broad trends in employee monitoring and surveillance technologies:

  • Prediction and flagging tools that aim to predict characteristics or behaviors of employees or that are designed to identify or deter perceived rule-breaking or fraud. Touted as useful management tools, they can augment biased and discriminatory practices in workplace evaluations and segment workforces into risk categories based on patterns of behavior.

  • Biometric and health data of workers collected through tools like wearables, fitness tracking apps, and biometric timekeeping systems as a part of employer-provided health care programs, workplace wellness programs, and digital work-shift tracking tools. Tracking non-work-related activities and information, such as health data, may challenge the boundaries of worker privacy, open avenues for discrimination, and raise questions about consent and workers’ ability to opt out of tracking.

  • Remote monitoring and time-tracking used to manage workers and measure performance remotely. Companies may use these tools to decentralize and lower costs by hiring independent contractors, while still being able to exert control over them like traditional employees with the aid of remote monitoring tools. More advanced time-tracking can generate itemized records of on-the-job activities, which can be used to facilitate wage theft or allow employers to trim what counts as paid work time.

  • Gamification and algorithmic management of work activities through continuous data collection. Technology can take on management functions, such as sending workers automated “nudges” or adjusting performance benchmarks based on a worker’s real-time progress, while gamification renders work activities into competitive, game-like dynamics driven by performance metrics. However, these practices can create punitive work environments that place pressures on workers to meet demanding and shifting efficiency benchmarks.

In a blog post about this report, Cory Doctorow mentioned “the adoption curve for oppressive technology, which goes, ‘refugee, immigrant, prisoner, mental patient, children, welfare recipient, blue collar worker, white collar worker.'” I don’t agree with the ordering, but the sentiment is correct. These technologies are generally used first against people with diminished rights: prisoners, children, the mentally ill, and soldiers.

Russia Is Testing Online Voting

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2019/03/russia_is_testi.html

This is a bad idea:

A second innovation will allow “electronic absentee voting” within voters’ home precincts. In other words, Russia is set to introduce its first online voting system. The system will be tested in a Moscow neighborhood that will elect a single member to the capital’s city council in September. The details of how the experiment will work are not yet known; the State Duma’s proposal on Internet voting does not include logistical specifics. The Central Election Commission’s reference materials on the matter simply reference “absentee voting, blockchain technology.” When Dmitry Vyatkin, one of the bill’s co-sponsors, attempted to describe how exactly blockchains would be involved in the system, his explanation was entirely disconnected from the actual functions of that technology. A discussion of this new type of voting is planned for an upcoming public forum in Moscow.

Surely the Russians know that online voting is insecure. Could they not care, or do they think the surveillance is worth the risk?

Videos and Links from the Public-Interest Technology Track at the RSA Conference

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2019/03/videos_and_link.html

Yesterday at the RSA Conference, I gave a keynote talk about the role of public-interest technologists in cybersecurity. (Video here.)

I also hosted a one-day mini-track on the topic. We had six panels, and they were all great. If you missed it live, we have videos:

  • How Public Interest Technologists are Changing the World: Matt Mitchell, Tactical Tech; Bruce Schneier, Fellow and Lecturer, Harvard Kennedy School; and J. Bob Alotta, Astraea Foundation (Moderator). (Video here.)
  • Public Interest Tech in Silicon Valley: Mitchell Baker, Chairwoman, Mozilla Corporation; Cindy Cohn, EFF; and Lucy Vasserman, Software Engineer, Google. (Video here.)

  • Working in Civil Society: Sarah Aoun, Digital Security Technologist; Peter Eckersley, Partnership on AI; Harlo Holmes, Director of Newsroom Digital Security, Freedom of the Press Foundation; and John Scott-Railton, Senior Researcher, Citizen Lab. (Video here.)

  • Government Needs You: Travis Moore, TechCongress; Hashim Mteuzi, Senior Manager, Network Talent Initiative, Code for America; Gigi Sohn, Distinguished Fellow, Georgetown Law Institute for Technology, Law and Policy; and Ashkan Soltani, Independent Consultant. (Video here.)

  • Changing Academia: Latanya Sweeney, Harvard; Deirdre Mulligan, UC Berkeley; and Danny Weitzner, MIT CSAIL. (Video here.)

  • The Future of Public Interest Tech: Bruce Schneier, Fellow and Lecturer, Harvard Kennedy School; Ben Wizner, ACLU; and Jenny Toomey, Director, Internet Freedom, Ford Foundation (Moderator). (Video here.)

I also conducted eight short video interviews with different people involved in public-interest technology: independent security technologist Sarah Aoun, TechCongress’s Travis Moore, Ford Foundation’s Jenny Toomey, Citizen Lab’s John Scott-Railton, Deirdre Mulligan from UC Berkeley, ACLU’s Jon Callas, Matt Mitchell of Tactical Tech, and Kelley Misata from Sightline Security.

Here is my blog post about the event. Here’s Ford Foundation’s blog post on why they helped me organize the event.

We got some good press coverage about the event. (Hey MeriTalk: you spelled my name wrong.)

Related: Here’s my longer essay on the need for public-interest technologists in Internet security, and my public-interest technology resources page.

And just so we have all the URLs in one place, here is a page from the RSA Conference website with links to all of the videos.

If you liked this mini-track, please rate it highly on your RSA Conference evaluation form. I’d like to do it again next year.

Cybersecurity Insurance Not Paying for NotPetya Losses

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2019/03/cybersecurity_i_2.html

This will complicate things:

To complicate matters, having cyber insurance might not cover everyone’s losses. Zurich American Insurance Company refused to pay out a $100 million claim from Mondelez, saying that since the U.S. and other governments labeled the NotPetya attack as an action by the Russian military, their claim was excluded under the “hostile or warlike action in time of peace or war” exemption.

I get that $100 million is real money, but the insurance industry needs to figure out how to properly insure commercial networks against this sort of thing.