Tag Archives: national security policy

Backdoor in XZ Utils That Almost Happened

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2024/04/backdoor-in-xz-utils-that-almost-happened.html

Last week, the Internet dodged a major nation-state attack that would have had catastrophic cybersecurity repercussions worldwide. It’s a catastrophe that didn’t happen, so it won’t get much attention—but it should. There’s an important moral to the story of the attack and its discovery: The security of the global Internet depends on countless obscure pieces of software written and maintained by even more obscure unpaid, distractible, and sometimes vulnerable volunteers. It’s an untenable situation, and one that is being exploited by malicious actors. Yet precious little is being done to remedy it.

Programmers dislike doing extra work. If they can find already-written code that does what they want, they’re going to use it rather than recreate the functionality. These code repositories, called libraries, are hosted on sites like GitHub. There are libraries for everything: displaying objects in 3D, spell-checking, performing complex mathematics, managing an e-commerce shopping cart, moving files around the Internet—everything. Libraries are essential to modern programming; they’re the building blocks of complex software. The modularity they provide makes software projects tractable. Everything you use contains dozens of these libraries: some commercial, some open source and freely available. They are essential to the functionality of the finished software. And to its security.

You’ve likely never heard of an open-source library called XZ Utils, but it’s on hundreds of millions of computers. It’s probably on yours. It’s certainly in whatever corporate or organizational network you use. It’s a freely available library that does data compression. It’s important, in the same way that hundreds of other similar obscure libraries are important.

Many open-source libraries, like XZ Utils, are maintained by volunteers. In the case of XZ Utils, it’s one person, named Lasse Collin. He has been in charge of XZ Utils since he wrote it in 2009. And, at least in 2022, he’s had some “longterm mental health issues.” (To be clear, he is not to blame in this story. This is a systems problem.)

Beginning in at least 2021, Collin was personally targeted. We don’t know by whom, but we have account names: Jia Tan, Jigar Kumar, Dennis Ens. They’re not real names. They pressured Collin to transfer control over XZ Utils. In early 2023, they succeeded. Tan spent the year slowly incorporating a backdoor into XZ Utils: disabling systems that might discover his actions, laying the groundwork, and finally adding the complete backdoor earlier this year. On March 25, Hans Jansen—another fake name—tried to push the various Unix systems to upgrade to the new version of XZ Utils.

And everyone was poised to do so. It’s a routine update. In the span of a few weeks, it would have been part of both Debian and Red Hat Linux, which run on the vast majority of servers on the Internet. But on March 29, another unpaid volunteer, Andres Freund—a real person who works for Microsoft but who was doing this in his spare time—noticed something weird about how much processing the new version of XZ Utils was doing. It’s the sort of thing that could be easily overlooked, and even more easily ignored. But for whatever reason, Freund tracked down the weirdness and discovered the backdoor.

It’s a masterful piece of work. It affects the SSH remote login protocol, basically by adding a hidden piece of functionality that requires a specific key to enable. Someone with that key can use the backdoored SSH to upload and execute an arbitrary piece of code on the target machine. SSH runs as root, so that code could have done anything. Let your imagination run wild.
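The real backdoor gated its hidden functionality on an Ed448 signature check buried inside a hooked OpenSSL function; the details are intricate, but the core idea of key-gated activation can be illustrated with a deliberately simplified sketch. Everything here is hypothetical; the HMAC check stands in for the actual signature verification:

```python
import hashlib
import hmac

# Hypothetical illustration of a key-gated backdoor: a hidden code path
# that stays inert unless the request carries proof of a secret only the
# attacker holds. (The real XZ Utils backdoor gated on an Ed448 signature
# check inside a hooked RSA_public_decrypt; this HMAC is a stand-in.)

ATTACKER_SECRET = b"known-only-to-the-attacker"

def backdoor_gate(payload: bytes, tag: bytes) -> bool:
    """Return True only if `tag` proves knowledge of the attacker's key."""
    expected = hmac.new(ATTACKER_SECRET, payload, hashlib.sha256).digest()
    return hmac.compare_digest(expected, tag)

def handle_connection(payload: bytes, tag: bytes) -> str:
    if backdoor_gate(payload, tag):
        # In the real attack, this step ran the attacker's payload as root.
        return f"EXECUTE: {payload.decode()}"
    # To everyone else -- scanners, honeypots, rival attackers -- the
    # service behaves completely normally, which is why the backdoor is
    # so hard to detect from the outside.
    return "normal authentication path"
```

The design choice matters: because activation requires the attacker's key, the backdoor is invisible to network scanning and unusable by anyone who merely discovers it.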

This isn’t something a hacker just whips up. This backdoor is the result of a years-long engineering effort. The ways the code evades detection in source form, how it lies dormant and undetectable until activated, and its immense power and flexibility give credence to the widely held assumption that a major nation-state is behind this.

If it hadn’t been discovered, it probably would have eventually ended up on every computer and server on the Internet. Though it’s unclear whether the backdoor would have affected Windows and macOS, it would have worked on Linux. Remember in 2020, when Russia planted a backdoor into SolarWinds that affected 14,000 networks? That seemed like a lot, but this would have been orders of magnitude more damaging. And again, the catastrophe was averted only because a volunteer stumbled on it. And it was possible in the first place only because the first unpaid volunteer, someone who turned out to be a national security single point of failure, was personally targeted and exploited by a foreign actor.

This is no way to run critical national infrastructure. And yet, here we are. This was an attack on our software supply chain. This attack subverted software dependencies. The SolarWinds attack targeted the update process. Other attacks target system design, development, and deployment. Such attacks are becoming increasingly common and effective, and also are increasingly the weapon of choice of nation-states.

It’s impossible to count how many of these single points of failure are in our computer systems. And there’s no way to know how many of the unpaid and unappreciated maintainers of critical software libraries are vulnerable to pressure. (Again, don’t blame them. Blame the industry that is happy to exploit their unpaid labor.) Or how many more have accidentally created exploitable vulnerabilities. How many other coercion attempts are ongoing? A dozen? A hundred? It seems impossible that the XZ Utils operation was a unique instance.

Solutions are hard. Banning open source won’t work; it’s precisely because XZ Utils is open source that an engineer discovered the problem in time. Banning software libraries won’t work, either; modern software can’t function without them. For years, security engineers have been pushing something called a “software bill of materials”: an ingredients list of sorts so that when one of these packages is compromised, network owners at least know if they’re vulnerable. The industry hates this idea and has been fighting it for years, but perhaps the tide is turning.

The fundamental problem is that tech companies dislike spending extra money even more than programmers dislike doing extra work. If there’s free software out there, they are going to use it—and they’re not going to do much in-house security testing. Easier software development equals lower costs equals more profits. The market economy rewards this sort of insecurity.

We need some sustainable ways to fund open-source projects that become de facto critical infrastructure. Public shaming can help here. The Open Source Security Foundation (OSSF), founded in 2022 after another critical vulnerability in an open-source library—Log4j—was discovered, addresses this problem. The big tech companies pledged $30 million in funding after the critical Log4j supply chain vulnerability, but they never delivered. And they are still happy to make use of all this free labor and free resources, as a recent Microsoft anecdote indicates. The companies benefiting from these freely available libraries need to actually step up, and the government can force them to.

There’s a lot of tech that could be applied to this problem, if corporations were willing to spend the money. Liabilities will help. The Cybersecurity and Infrastructure Security Agency’s (CISA’s) “secure by design” initiative will help, and CISA is finally partnering with OSSF on this problem. Certainly the security of these libraries needs to be part of any broad government cybersecurity initiative.

We got extraordinarily lucky this time, but maybe we can learn from the catastrophe that didn’t happen. Like the power grid, communications network, and transportation systems, the software supply chain is critical infrastructure, part of national security, and vulnerable to foreign attack. The US government needs to recognize this as a national security problem and start treating it as such.

This essay originally appeared in Lawfare.

OpenAI Is Not Training on Your Dropbox Documents—Today

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2023/12/openai-is-not-training-on-your-dropbox-documents-today.html

There’s a rumor flying around the Internet that OpenAI is training foundation models on your Dropbox documents.

Here’s CNBC. Here’s Boing Boing. Some articles are more nuanced, but there’s still a lot of confusion.

It seems not to be true. Dropbox isn’t sharing all of your documents with OpenAI. But here’s the problem: we don’t trust OpenAI. We don’t trust tech corporations. And—to be fair—corporations in general. We have no reason to.

Simon Willison nails it in a tweet:

“OpenAI are training on every piece of data they see, even when they say they aren’t” is the new “Facebook are showing you ads based on overhearing everything you say through your phone’s microphone.”

Willison expands this in a blog post, which I strongly recommend reading in its entirety. His point is that these companies have lost our trust:

Trust is really important. Companies lying about what they do with your privacy is a very serious allegation.

A society where big companies tell blatant lies about how they are handling our data—­and get away with it without consequences­—is a very unhealthy society.

A key role of government is to prevent this from happening. If OpenAI are training on data that they said they wouldn’t train on, or if Facebook are spying on us through our phone’s microphones, they should be hauled in front of regulators and/or sued into the ground.

If we believe that they are doing this without consequence, and have been getting away with it for years, our intolerance for corporate misbehavior becomes a victim as well. We risk letting companies get away with real misconduct because we incorrectly believed in conspiracy theories.

Privacy is important, and very easily misunderstood. People both overestimate and underestimate what companies are doing, and what’s possible. This isn’t helped by the fact that AI technology means the scope of what’s possible is changing at a rate that’s hard to appreciate even if you’re deeply aware of the space.

If we want to protect our privacy, we need to understand what’s going on. More importantly, we need to be able to trust companies to honestly and clearly explain what they are doing with our data.

On a personal level we risk losing out on useful tools. How many people cancelled their Dropbox accounts in the last 48 hours? How many more turned off that AI toggle, ruling out ever evaluating if those features were useful for them or not?

And while Dropbox is not sending your data to OpenAI today, it could do so tomorrow with a simple change of its terms of service. So could your bank, or credit card company, your phone company, or any other company that owns your data. Any of the tens of thousands of data brokers could be sending your data to train AI models right now, without your knowledge or consent. (At least, in the US. Hooray for the EU and GDPR.)

Or, as Thomas Claburn wrote:

“Your info won’t be harvested for training” is the new “Your private chatter won’t be used for ads.”

These foundation models want our data. The corporations that have our data want the money. It’s only a matter of time, unless we get serious government privacy regulation.

EPA Won’t Force Water Utilities to Audit Their Cybersecurity

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2023/10/epa-wont-force-water-utilities-to-audit-their-cybersecurity.html

The industry pushed back:

Despite the EPA’s willingness to provide training and technical support to help states and public water system organizations implement cybersecurity surveys, the move garnered opposition from both GOP state attorneys and trade groups.

Republican state attorneys who were against the new proposed policies said that the call for new inspections could overwhelm state regulators. The attorneys general of Arkansas, Iowa and Missouri all sued the EPA—claiming the agency had no authority to set these requirements. This led to the EPA’s proposal being temporarily blocked back in June.

So now we have a piece of our critical infrastructure with substandard cybersecurity. This seems like a really bad outcome.

Child Exploitation and the Crypto Wars

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2023/10/child-exploitation-and-the-crypto-wars.html

Susan Landau published an excellent essay on the current justification for the government breaking end-to-end-encryption: child sexual abuse and exploitation (CSAE). She puts the debate into historical context, discusses the problem of CSAE, and explains why breaking encryption isn’t the solution.

AI Risks

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2023/10/ai-risks.html

There is no shortage of researchers and industry titans willing to warn us about the potential destructive power of artificial intelligence. Reading the headlines, one would hope that the rapid gains in AI technology have also brought forth a unifying realization of the risks—and the steps we need to take to mitigate them.

The reality, unfortunately, is quite different. Beneath almost all of the testimony, the manifestoes, the blog posts, and the public declarations issued about AI are battles among deeply divided factions. Some are concerned about far-future risks that sound like science fiction. Some are genuinely alarmed by the practical problems that chatbots and deepfake video generators are creating right now. Some are motivated by potential business revenue, others by national security concerns.

The result is a cacophony of coded language, contradictory views, and provocative policy demands that are undermining our ability to grapple with a technology destined to drive the future of politics, our economy, and even our daily lives.

These factions are in dialogue not only with the public but also with one another. Sometimes, they trade letters, opinion essays, or social threads outlining their positions and attacking others’ in public view. More often, they tout their viewpoints without acknowledging alternatives, leaving the impression that their enlightened perspective is the inevitable lens through which to view AI. But if lawmakers and the public fail to recognize the subtext of their arguments, they risk missing the real consequences of our possible regulatory and cultural paths forward.

To understand the fight and the impact it may have on our shared future, look past the immediate claims and actions of the players to the greater implications of their points of view. When you do, you’ll realize this isn’t really a debate only about AI. It’s also a contest about control and power, about how resources should be distributed and who should be held accountable.

Beneath this roiling discord is a true fight over the future of society. Should we focus on avoiding the dystopia of mass unemployment, a world where China is the dominant superpower or a society where the worst prejudices of humanity are embodied in opaque algorithms that control our lives? Should we listen to wealthy futurists who discount the importance of climate change because they’re already thinking ahead to colonies on Mars? It is critical that we begin to recognize the ideologies driving what we are being told. Resolving the fracas requires us to see through the specter of AI to stay true to the humanity of our values.

One way to decode the motives behind the various declarations is through their language. Because language itself is part of their battleground, the different AI camps tend not to use the same words to describe their positions. One faction describes the dangers posed by AI through the framework of safety, another through ethics or integrity, yet another through security, and others through economics. By decoding who is speaking and how AI is being described, we can explore where these groups differ and what drives their views.

The Doomsayers

The loudest perspective is a frightening, dystopian vision in which AI poses an existential risk to humankind, capable of wiping out all life on Earth. AI, in this vision, emerges as a godlike, superintelligent, ungovernable entity capable of controlling everything. AI could destroy humanity or pose a risk on par with nukes. If we’re not careful, it could kill everyone or enslave humanity. It’s likened to monsters like the Lovecraftian shoggoths, artificial servants that rebelled against their creators, or paper clip maximizers that consume all of Earth’s resources in a single-minded pursuit of their programmed goal. It sounds like science fiction, but these people are serious, and they mean the words they use.

These are the AI safety people, and their ranks include the “Godfathers of AI,” Geoff Hinton and Yoshua Bengio. For many years, these leading lights battled critics who doubted that a computer could ever mimic the capabilities of the human mind. Having steamrollered the public conversation by creating large language models like ChatGPT and other AI tools capable of increasingly impressive feats, they appear deeply invested in the idea that there is no limit to what their creations will be able to accomplish.

This doomsaying is boosted by a class of tech elite that has enormous power to shape the conversation. And some in this group are animated by the radical effective altruism movement and the associated cause of long-term-ism, which tend to focus on the most extreme catastrophic risks and emphasize the far-future consequences of our actions. These philosophies are hot among the cryptocurrency crowd, like the disgraced former billionaire Sam Bankman-Fried, who at one time possessed sudden wealth in search of a cause.

Reasonable sounding on their face, these ideas can become dangerous if stretched to their logical extremes. A dogmatic long-termer would willingly sacrifice the well-being of people today to stave off a prophesied extinction event like AI enslavement.

Many doomsayers say they are acting rationally, but their hype about hypothetical existential risks amounts to making a misguided bet with our future. In the name of long-term-ism, Elon Musk reportedly believes that our society needs to encourage reproduction among those with the greatest culture and intelligence (namely, his ultrarich buddies). And he wants to go further, such as limiting the right to vote to parents and even populating Mars. It’s widely believed that Jaan Tallinn, the wealthy long-termer who co-founded the most prominent centers for the study of AI safety, has made dismissive noises about climate change because he thinks that it pales in comparison with far-future unknown unknowns like risks from AI. The technology historian David C. Brock calls these fears “wishful worries”—that is, “problems that it would be nice to have, in contrast to the actual agonies of the present.”

More practically, many of the researchers in this group are proceeding full steam ahead in developing AI, demonstrating how unrealistic it is to simply hit pause on technological development. But the roboticist Rodney Brooks has pointed out that we will see the existential risks coming—the dangers will not be sudden and we will have time to change course. While we shouldn’t dismiss the Hollywood nightmare scenarios out of hand, we must balance them with the potential benefits of AI and, most important, not allow them to strategically distract from more immediate concerns. Let’s not let apocalyptic prognostications overwhelm us and smother the momentum we need to develop critical guardrails.

The Reformers

While the doomsayer faction focuses on the far-off future, its most prominent opponents are focused on the here and now. We agree with this group that there’s plenty already happening to cause concern: Racist policing and legal systems that disproportionately arrest and punish people of color. Sexist labor systems that rate feminine-coded résumés lower. Superpower nations automating military interventions as tools of imperialism and, someday, killer robots.

The alternative to the end-of-the-world, existential risk narrative is a distressingly familiar vision of dystopia: a society in which humanity’s worst instincts are encoded into and enforced by machines. The doomsayers think AI enslavement looks like the Matrix; the reformers point to modern-day contractors doing traumatic work at low pay for OpenAI in Kenya.

Propagators of these AI ethics concerns—like Meredith Broussard, Safiya Umoja Noble, Rumman Chowdhury, and Cathy O’Neil—have been raising the alarm on inequities coded into AI for years. Although we don’t have a census, it’s noticeable that many leaders in this cohort are people of color, women, and people who identify as LGBTQ. They are often motivated by insight into what it feels like to be on the wrong end of algorithmic oppression and by a connection to the communities most vulnerable to the misuse of new technology. Many in this group take an explicitly social perspective: When Joy Buolamwini founded an organization to fight for equitable AI, she called it the Algorithmic Justice League. Ruha Benjamin called her organization the Ida B. Wells Just Data Lab.

Others frame efforts to reform AI in terms of integrity, calling for Big Tech to adhere to an oath to consider the benefit of the broader public alongside—or even above—their self-interest. They point to social media companies’ failure to control hate speech or how online misinformation can undermine democratic elections. Adding urgency for this group is that the very companies driving the AI revolution have, at times, been eliminating safeguards. A signal moment came when Timnit Gebru, a co-leader of Google’s AI ethics team, was dismissed for pointing out the risks of developing ever-larger AI language models.

While doomsayers and reformers share the concern that AI must align with human interests, reformers tend to push back hard against the doomsayers’ focus on the distant future. They want to wrestle the attention of regulators and advocates back toward present-day harms that are exacerbated by AI misinformation, surveillance, and inequity. Integrity experts call for the development of responsible AI, for civic education to ensure AI literacy and for keeping humans front and center in AI systems.

This group’s concerns are well documented and urgent—and far older than modern AI technologies. Surely, we are a civilization big enough to tackle more than one problem at a time; even those worried that AI might kill us in the future should still demand that it not profile and exploit us in the present.

The Warriors

Other groups of prognosticators cast the rise of AI through the language of competitiveness and national security. One version has a post-9/11 ring to it—a world where terrorists, criminals, and psychopaths have unfettered access to technologies of mass destruction. Another version is a Cold War narrative of the United States losing an AI arms race with China and its surveillance-rich society.

Some arguing from this perspective are acting on genuine national security concerns, and others have a simple motivation: money. These perspectives serve the interests of American tech tycoons as well as the government agencies and defense contractors they are intertwined with.

OpenAI’s Sam Altman and Meta’s Mark Zuckerberg, both of whom lead dominant AI companies, are pushing for AI regulations that they say will protect us from criminals and terrorists. Such regulations would be expensive to comply with and are likely to preserve the market position of leading AI companies while restricting competition from start-ups. In the lobbying battles over Europe’s trailblazing AI regulatory framework, US megacompanies pleaded to exempt their general-purpose AI from the tightest regulations, and whether and how to apply high-risk compliance expectations on noncorporate open-source models emerged as a key point of debate. All the while, some of the moguls investing in upstart companies are fighting the regulatory tide. The Inflection AI co-founder Reid Hoffman argued, “The answer to our challenges is not to slow down technology but to accelerate it.”

Any technology critical to national defense usually has an easier time avoiding oversight, regulation, and limitations on profit. Any readiness gap in our military demands urgent budget increases and funds distributed to the military branches and their contractors, because we may soon be called upon to fight. Tech moguls like Google’s former chief executive Eric Schmidt, who has the ear of many lawmakers, signal to American policymakers about the Chinese threat even as they invest in US national security concerns.

The warriors’ narrative overlooks the fact that science and engineering are different from what they were during the mid-twentieth century. AI research is fundamentally international; no one country will win a monopoly. And while national security is important to consider, we must also be mindful of the self-interest of those positioned to benefit financially.
As the science-fiction author Ted Chiang has said, fears about the existential risks of AI are really fears about the threat of uncontrolled capitalism, and dystopias like the paper clip maximizer are just caricatures of every start-up’s business plan. Cosma Shalizi and Henry Farrell further argue that “we’ve lived among shoggoths for centuries, tending to them as though they were our masters” as monopolistic platforms devour and exploit the totality of humanity’s labor and ingenuity for their own interests. This dread applies as much to our future with AI as it does to our past and present with corporations.

Regulatory solutions do not need to reinvent the wheel. Instead, we need to double down on the rules that we know limit corporate power. We need to get more serious about establishing good and effective governance on all the issues we lost track of while we were becoming obsessed with AI, China, and the fights picked among robber barons.

By analogy to the healthcare sector, we need an AI public option to truly keep AI companies in check. A publicly directed AI development project would serve to counterbalance for-profit corporate AI and help ensure an even playing field for access to the twenty-first century’s key technology while offering a platform for the ethical development and use of AI.

Also, we should embrace the humanity behind AI. We can hold founders and corporations accountable by mandating greater AI transparency in the development stage, in addition to applying legal standards for actions associated with AI. Remarkably, this is something that both the left and the right can agree on.

Ultimately, we need to make sure the network of laws and regulations that govern our collective behavior is knit more strongly, with fewer gaps and greater ability to hold the powerful accountable, particularly in those areas most sensitive to our democracy and environment. As those with power and privilege seem poised to harness AI to accumulate much more or pursue extreme ideologies, let’s think about how we can constrain their influence in the public square rather than cede our attention to their most bombastic nightmare visions for the future.

This essay was written with Nathan Sanders, and previously appeared in the New York Times.

NSA AI Security Center

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2023/10/nsa-ai-security-center.html

The NSA is starting a new artificial intelligence security center:

The AI security center’s establishment follows an NSA study that identified securing AI models from theft and sabotage as a major national security challenge, especially as generative AI technologies emerge with immense transformative potential for both good and evil.

Nakasone said it would become “NSA’s focal point for leveraging foreign intelligence insights, contributing to the development of best practices guidelines, principles, evaluation, methodology and risk frameworks” for both AI security and the goal of promoting the secure development and adoption of AI within “our national security systems and our defense industrial base.”

He said it would work closely with U.S. industry, national labs, academia and the Department of Defense as well as international partners.

You Can’t Rush Post-Quantum-Computing Cryptography Standards

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2023/08/you-cant-rush-post-quantum-computing-standards.html

I just read an article complaining that NIST is taking too long in finalizing its post-quantum-computing cryptography standards.

This process has been going on since 2016, and since that time there has been a huge increase in quantum technology and an equally large increase in quantum understanding and interest. Yet seven years later, we have only four algorithms, although last week NIST announced that a number of other candidates are under consideration, a process that is expected to take “several years.”

The delay in developing quantum-resistant algorithms is especially troubling given the time it will take to get those products to market. It generally takes four to six years with a new standard for a vendor to develop an ASIC to implement the standard, and it then takes time for the vendor to get the product validated, which seems to be taking a troubling amount of time.

Yes, the process will take several years, and you really don’t want to rush it. I wrote this last year:

Ian Cassels, British mathematician and World War II cryptanalyst, once said that “cryptography is a mixture of mathematics and muddle, and without the muddle the mathematics can be used against you.” This mixture is particularly difficult to achieve with public-key algorithms, which rely on the mathematics for their security in a way that symmetric algorithms do not. We got lucky with RSA and related algorithms: their mathematics hinge on the problem of factoring, which turned out to be robustly difficult. Post-quantum algorithms rely on other mathematical disciplines and problems­—code-based cryptography, hash-based cryptography, lattice-based cryptography, multivariate cryptography, and so on­—whose mathematics are both more complicated and less well-understood. We’re seeing these breaks because those core mathematical problems aren’t nearly as well-studied as factoring is.

[…]

As the new cryptanalytic results demonstrate, we’re still learning a lot about how to turn hard mathematical problems into public-key cryptosystems. We have too much math and an inability to add more muddle, and that results in algorithms that are vulnerable to advances in mathematics. More cryptanalytic results are coming, and more algorithms are going to be broken.

As to the long time it takes to get new encryption products to market, work on shortening it:

The moral is the need for cryptographic agility. It’s not enough to implement a single standard; it’s vital that our systems be able to easily swap in new algorithms when required.
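In code, cryptographic agility usually means dispatching through an algorithm identifier rather than hard-wiring one primitive. A minimal sketch of the pattern, using hash functions as the example (the registry and names here are illustrative, not any particular library's API):

```python
import hashlib

# Hypothetical sketch of cryptographic agility: messages carry an
# algorithm identifier, and the implementation dispatches through a
# registry, so a broken algorithm can be retired by editing the table
# rather than rewriting every system that depends on it.

HASHES = {
    "sha256": lambda data: hashlib.sha256(data).hexdigest(),
    "sha3-256": lambda data: hashlib.sha3_256(data).hexdigest(),
    # When a new standard lands -- or an old one breaks -- add or remove
    # an entry here; callers are unaffected.
}

def digest(alg: str, data: bytes) -> str:
    """Hash `data` with the negotiated algorithm, refusing unknown ones."""
    if alg not in HASHES:
        raise ValueError(f"unsupported or retired algorithm: {alg}")
    return HASHES[alg](data)
```

The same shape applies to signatures and key exchange: protocols like TLS already negotiate algorithms this way, which is what makes swapping in post-quantum candidates feasible at all.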

Whatever NIST comes up with, expect that it will get broken sooner than we all want. It’s the nature of these trap-door functions we’re using for public-key cryptography.

New SEC Rules around Cybersecurity Incident Disclosures

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2023/08/new-sec-rules-around-cybersecurity-incident-disclosures.html

The US Securities and Exchange Commission adopted final rules around the disclosure of cybersecurity incidents. There are two basic rules:

  1. Public companies must “disclose any cybersecurity incident they determine to be material” within four days, with potential delays if there is a national security risk.
  2. Public companies must “describe their processes, if any, for assessing, identifying, and managing material risks from cybersecurity threats” in their annual filings.

The rules go into effect this December.

In an email newsletter, Melissa Hathaway wrote:

Now that the rule is final, companies have approximately six months to one year to document and operationalize the policies and procedures for the identification and management of cybersecurity (information security/privacy) risks. Continuous assessment of the risk reduction activities should be elevated within an enterprise risk management framework and process. Good governance mechanisms delineate the accountability and responsibility for ensuring successful execution, while actionable, repeatable, meaningful, and time-dependent metrics or key performance indicators (KPI) should be used to reinforce realistic objectives and timelines. Management should assess the competency of the personnel responsible for implementing these policies and be ready to identify these people (by name) in their annual filing.

News article.

Commentary on the Implementation Plan for the 2023 US National Cybersecurity Strategy

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2023/07/commentary-on-the-implementation-plan-for-the-2023-us-national-cybersecurity-strategy.html

The Atlantic Council released a detailed commentary on the White House’s new “Implementation Plan for the 2023 US National Cybersecurity Strategy.” Lots of interesting bits.

So far, at least three trends emerge:

First, the plan contains a (somewhat) more concrete list of actions than its parent strategy, with useful delineation of lead and supporting agencies, as well as timelines aplenty. By assigning each action a designated lead and timeline, and by including a new nominal section (6) focused entirely on assessing effectiveness and continued iteration, the ONCD suggests that this is not so much a standalone text as the framework for an annual, crucially iterative policy process. That many of the milestones are still hazy might be less important than the commitment the administration has made to revisit this plan annually, allowing the ONCD team to leverage their unique combination of topical depth and budgetary review authority.

Second, there are clear wins. Open-source software (OSS) and support for energy-sector cybersecurity receive considerable focus, and there is a greater budgetary push on both technology modernization and cybersecurity research. But there are missed opportunities as well. Many of the strategy’s most difficult and revolutionary goals—holding data stewards accountable through privacy legislation, finally implementing a working digital identity solution, patching gaps in regulatory frameworks for cloud risk, and implementing a regime for software cybersecurity liability—have been pared down or omitted entirely. There is an unnerving absence of “incentive-shifting-focused” actions, one of the most significant overarching objectives from the initial strategy. This backpedaling may be the result of a new appreciation for a deadlocked Congress and the precarious present for the administrative state, but it falls short of the original strategy’s vision and risks making no progress against its most ambitious goals.

Third, many of the implementation plan’s goals have timelines stretching into 2025. The disruption of a transition, be it to a second term for the current administration or the first term of another, will be difficult to manage under the best of circumstances. This leaves still more of the boldest ideas in this plan in jeopardy and raises questions about how best to prioritize, or accelerate, among those listed here.

Paragon Solutions Spyware: Graphite

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2023/06/paragon-solutions-spyware-graphite.html

Paragon Solutions is yet another Israeli spyware company. Their product is called “Graphite,” and is a lot like NSO Group’s Pegasus. And Paragon is working with what seems to be US approval:

American approval, even if indirect, has been at the heart of Paragon’s strategy. The company sought a list of allied nations that the US wouldn’t object to seeing deploy Graphite. People with knowledge of the matter suggested 35 countries are on that list, though the exact nations involved could not be determined. Most were in the EU and some in Asia, the people said.

Remember when NSO Group was banned in the US a year and a half ago? The Drug Enforcement Administration uses Graphite.

We’re never going to reduce the power of these cyberweapons arms merchants by going after them one by one. We need to deal with the whole industry. And we’re not going to do it as long as the democracies of the world use their products as well.

Expeditionary Cyberspace Operations

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2023/05/expeditionary-cyberspace-operations.html

Cyberspace operations now officially have a physical dimension, meaning that the United States has official military doctrine about cyberattacks that also involve an actual human gaining physical access to a piece of computing infrastructure.

A revised version of Joint Publication 3-12 Cyberspace Operations—published in December 2022 and, while unclassified, available only to those with DoD common access cards, according to a Joint Staff spokesperson—officially provides a definition for “expeditionary cyberspace operations,” which are “[c]yberspace operations that require the deployment of cyberspace forces within the physical domains.”

[…]

“Developing access to targets in or through cyberspace follows a process that can often take significant time. In some cases, remote access is not possible or preferable, and close proximity may be required, using expeditionary [cyber operations],” the joint publication states. “Such operations are key to addressing the challenge of closed networks and other systems that are virtually isolated. Expeditionary CO are often more regionally and tactically focused and can include units of the CMF or special operations forces … If direct access to the target is unavailable or undesired, sometimes a similar or partial effect can be created by indirect access using a related target that has higher-order effects on the desired target.”

[…]

“Allowing them to support [combatant commands] in this way permits faster adaptation to rapidly changing needs and allows threats that initially manifest only in one [area of responsibility] to be mitigated globally in near real time. Likewise, while synchronizing CO missions related to achieving [combatant commander] objectives, some cyberspace capabilities that support this activity may need to be forward-deployed; used in multiple AORs simultaneously; or, for speed in time-critical situations, made available via reachback,” it states. “This might involve augmentation or deployment of cyberspace capabilities to forces already forward or require expeditionary CO by deployment of a fully equipped team of personnel and capabilities.”

New National Cybersecurity Strategy

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2023/03/new-national-cybersecurity-strategy.html

Last week, the Biden administration released a new National Cybersecurity Strategy (summary here). There is lots of good commentary out there. It’s basically a smart strategy, but the hard parts are always the implementation details. It’s one thing to say that we need to secure our cloud infrastructure, and another to detail what that means technically, who pays for it, and who verifies that it’s been done.

One of the provisions getting the most attention is a move to shift liability to software vendors, something I’ve been advocating for since at least 2003.

Slashdot thread.

Banning TikTok

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2023/02/banning-tiktok.html

Congress is currently debating bills that would ban TikTok in the United States. We are here as technologists to tell you that this is a terrible idea and the side effects would be intolerable. Details matter. There are several ways Congress might ban TikTok, each with different efficacies and side effects. In the end, all the effective ones would destroy the free Internet as we know it.

There’s no doubt that TikTok and ByteDance, the company that owns it, are shady. They, like most large corporations in China, operate at the pleasure of the Chinese government. They collect extreme levels of information about users. But they’re not alone: Many apps you use do the same, including Facebook and Instagram, along with seemingly innocuous apps that have no need for the data. Your data is bought and sold by data brokers you’ve never heard of who have few scruples about where the data ends up. They have digital dossiers on most people in the United States.

If we want to address the real problem, we need to enact serious privacy laws, not security theater, to stop our data from being collected, analyzed, and sold—by anyone. Such laws would protect us in the long term, and not just from the app of the week. They would also prevent data breaches and ransomware attacks from spilling our data out into the digital underworld, including hacker message boards and chat servers, hostile state actors, and outside hacker groups. And, most importantly, they would be compatible with our bedrock values of free speech and commerce, which Congress’s current strategies are not.

At best, the TikTok ban considered by Congress would be ineffective; at worst, a ban would force us to either adopt China’s censorship technology or create our own equivalent. The simplest approach, advocated by some in Congress, would be to ban the TikTok app from the Apple and Google app stores. This would immediately stop new updates for current users and prevent new users from signing up. To be clear, this would not reach into phones and remove the app. Nor would it prevent Americans from installing TikTok on their phones; they would still be able to get it from sites outside of the United States. Android users have long been able to use alternative app repositories. Apple maintains tighter control over what apps are allowed on its phones, so users would have to “jailbreak”—or manually remove restrictions from—their devices to install TikTok.

Even if app access were no longer an option, TikTok would still be available more broadly. It is currently, and would still be, accessible from browsers, whether on a phone or a laptop. As long as the TikTok website is hosted on servers outside of the United States, the ban would not affect browser access.

Alternatively, Congress might take a financial approach and ban US companies from doing business with ByteDance. Then-President Donald Trump tried this in 2020, but it was blocked by the courts and rescinded by President Joe Biden a year later. This would shut off access to TikTok in app stores and also cut ByteDance off from the resources it needs to run TikTok. US cloud-computing and content-distribution networks would no longer distribute TikTok videos, collect user data, or run analytics. US advertisers—and this is critical—could no longer fork over dollars to ByteDance in the hopes of getting a few seconds of a user’s attention. TikTok, for all practical purposes, would cease to be a business in the United States.

But Americans would still be able to access TikTok through the loopholes discussed above. And they will: TikTok is one of the most popular apps ever made; about 70% of young people use it. There would be enormous demand for workarounds. ByteDance could choose to move its US-centric services right over the border to Canada, still within reach of American users. Videos would load slightly slower, but for today’s TikTok users, it would probably be acceptable. Without US advertisers ByteDance wouldn’t make much money, but it has operated at a loss for many years, so this wouldn’t be its death knell.

Finally, an even more restrictive approach Congress might take is actually the most dangerous: dangerous to Americans, not to TikTok. Congress might ban the use of TikTok by anyone in the United States. The Trump executive order would likely have had this effect, were it allowed to take effect. It required that US companies not engage in any sort of transaction with TikTok and prohibited circumventing the ban. If the same restrictions were enacted by Congress instead, such a policy would leave business or technical implementation details to US companies, enforced through a variety of law enforcement agencies.

This would be an enormous change in how the Internet works in the United States. Unlike authoritarian states such as China, the US has a free, uncensored Internet. We have no technical ability to ban sites the government doesn’t like. Ironically, a blanket ban on the use of TikTok would necessitate a national firewall, like the one China currently has, to spy on and censor Americans’ access to the Internet. Or, at the least, authoritarian government powers like India’s, which could force Internet service providers to censor Internet traffic. Worse still, the main vendors of this censorship technology are in those authoritarian states. China, for example, sells its firewall technology to other censorship-loving autocracies such as Iran and Cuba.

All of these proposed solutions raise constitutional issues as well. The First Amendment protects speech and assembly. For example, the recently introduced Buck-Hawley bill, which instructs the president to use emergency powers to ban TikTok, might threaten separation of powers and may be relying on the same mechanisms used by Trump and stopped by the court. (Those specific emergency powers, provided by the International Emergency Economic Powers Act, have an explicit exemption for communications services.) And individual states trying to beat Congress to the punch in regulating TikTok or social media generally might violate the Constitution’s Commerce Clause—which restricts individual states from regulating interstate commerce—in doing so.

Right now, there’s nothing to stop Americans’ data from ending up overseas. We’ve seen plenty of instances—from Zoom to Clubhouse to others—where data about Americans collected by US companies ends up in China, not by accident but because of how those companies managed their data. And the Chinese government regularly steals data from US organizations for its own use: Equifax, Marriott Hotels, and the Office of Personnel Management are examples.

If we want to get serious about protecting national security, we have to get serious about data privacy. Today, data surveillance is the business model of the Internet. Our personal lives have turned into data; it’s not possible to block it at our national borders. Our data has no nationality, no cost to copy, and, currently, little legal protection. Like water, it finds every crack and flows to every low place. TikTok won’t be the last app or service from abroad that becomes popular, and it is distressingly ordinary in terms of how much it spies on us. Personal privacy is now a matter of national security. That needs to be part of any debate about banning TikTok.

This essay was written with Barath Raghavan, and previously appeared in Foreign Policy.

Passwords Are Terrible (Surprising No One)

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2023/02/passwords-are-terrible-surprising-no-one.html

This is the result of a security audit:

More than a fifth of the passwords protecting network accounts at the US Department of the Interior—including Password1234, Password1234!, and ChangeItN0w!—were weak enough to be cracked using standard methods, a recently published security audit of the agency found.

[…]

The results weren’t encouraging. In all, the auditors cracked 18,174—or 21 percent—of the 85,944 cryptographic hashes they tested; 288 of the affected accounts had elevated privileges, and 362 of them belonged to senior government employees. In the first 90 minutes of testing, auditors cracked the hashes for 16 percent of the department’s user accounts.

The audit uncovered another security weakness—the failure to consistently implement multi-factor authentication (MFA). The failure extended to 25—or 89 percent—of 28 high-value assets (HVAs), which, when breached, have the potential to severely impact agency operations.

Original story:

To make their point, the watchdog spent less than $15,000 on building a password-cracking rig—a setup of one high-performance computer, or several chained together, with the computing power designed to take on complex mathematical tasks, like recovering hashed passwords. Within the first 90 minutes, the watchdog was able to recover nearly 14,000 employee passwords, or about 16% of all department accounts, including passwords like ‘Polar_bear65’ and ‘Nationalparks2014!’.
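The mechanics of such an audit are simple to sketch. A toy Python example, using unsalted MD5 purely for illustration (a deliberately weak scheme; the hash dump and wordlist here are invented for the example), showing why predictable passwords fall to a dictionary attack almost instantly:

```python
import hashlib

# Pretend this is a dump of unsalted password hashes from an audit.
stolen_hashes = {
    hashlib.md5(p.encode()).hexdigest()
    for p in ["Password1234", "ChangeItN0w!", "correct horse battery staple"]
}

# A dictionary attack: hash each candidate from a wordlist and look it up.
# Real crackers do this billions of times per second on GPU rigs.
wordlist = ["123456", "Password1234", "ChangeItN0w!", "letmein"]

cracked = [w for w in wordlist
           if hashlib.md5(w.encode()).hexdigest() in stolen_hashes]
print(cracked)  # the two predictable passwords fall; the random passphrase survives
```

Salting and slow hashes (bcrypt, Argon2) raise the per-guess cost, but nothing saves a password that sits near the top of every wordlist.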

NIST Is Updating Its Cybersecurity Framework

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2023/01/nist-is-updating-its-cybersecurity-framework.html

NIST is planning a significant update of its Cybersecurity Framework. At this point, it’s asking for feedback and comments on its concept paper.

  1. Do the proposed changes reflect the current cybersecurity landscape (standards, risks, and technologies)?
  2. Are the proposed changes sufficient and appropriate? Are there other elements that should be considered under each area?
  3. Do the proposed changes support different use cases in various sectors, types, and sizes of organizations (and with varied capabilities, resources, and technologies)?
  4. Are there additional changes not covered here that should be considered?
  5. For those using CSF 1.1, would the proposed changes affect continued adoption of the Framework, and how so?
  6. For those not using the Framework, would the proposed changes affect the potential use of the Framework?

The NIST Cybersecurity Framework has turned out to be an excellent resource. If you use it at all, please help with version 2.0.

US Cyber Command Operations During the 2022 Midterm Elections

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2023/01/us-cyber-command-operations-during-the-2022-midterm-elections.html

The head of both US Cyber Command and the NSA, Gen. Paul Nakasone, broadly discussed that first organization’s offensive cyber operations during the runup to the 2022 midterm elections. He didn’t name names, of course:

“We did conduct operations persistently to make sure that our foreign adversaries couldn’t utilize infrastructure to impact us,” said Nakasone. “We understood how foreign adversaries utilize infrastructure throughout the world. We had that mapped pretty well. And we wanted to make sure that we took it down at key times.”

Nakasone noted that Cybercom’s national mission force, aided by NSA, followed a “campaign plan” to deprive the hackers of their tools and networks. “Rest assured,” he said. “We were doing operations well before the midterms began, and we were doing operations likely on the day of the midterms.” And they continued until the elections were certified, he said.

We know Cybercom did similar things in 2018 and 2020, and presumably will again in two years.

Bulk Surveillance of Money Transfers

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2023/01/bulk-surveillance-of-money-transfers.html

Just another obscure warrantless surveillance program.

US law enforcement can access details of money transfers without a warrant through an obscure surveillance program the Arizona attorney general’s office created in 2014. A database stored at a nonprofit, the Transaction Record Analysis Center (TRAC), provides full names and amounts for larger transfers (above $500) sent between the US, Mexico and 22 other regions through services like Western Union, MoneyGram and Viamericas. The program covers data for numerous Caribbean and Latin American countries in addition to Canada, China, France, Malaysia, Spain, Thailand, Ukraine and the US Virgin Islands. Some domestic transfers also enter the data set.

[…]

You need to be a member of law enforcement with an active government email account to use the database, which is available through a publicly visible web portal. Leber told The Journal that there haven’t been any known breaches or instances of law enforcement misuse. However, Wyden noted that the surveillance program included more states and countries than previously mentioned in briefings. There have also been subpoenas for bulk money transfer data from Homeland Security Investigations (which withdrew its request after Wyden’s inquiry), the DEA and the FBI.

How is it that Arizona can be in charge of this?

Wall Street Journal podcast—with transcript—on the program. I think the original reporting was from last March, but I missed it back then.

The US Has a Shortage of Bomb-Sniffing Dogs

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2022/11/the-us-has-a-shortage-of-bomb-sniffing-dogs.html

Nothing beats a dog’s nose for detecting explosives. Unfortunately, there aren’t enough dogs:

Last month, the US Government Accountability Office (GAO) released a nearly 100-page report about working dogs and the need for federal agencies to better safeguard their health and wellness. The GAO says that as of February the US federal government had approximately 5,100 working dogs, including detection dogs, across three federal agencies. Another 420 dogs “served the federal government in 24 contractor-managed programs within eight departments and two independent agencies,” the GAO report says.

The report also underscores the demands placed on detection dogs and the potential for overwork if there aren’t enough dogs available. “Working dogs might need the strength to suddenly run fast, or to leap over a tall barrier, as well as the physical stamina to stand or walk all day,” the report says. “They might need to search over rubble or in difficult environmental conditions, such as extreme heat or cold, often wearing heavy body armor. They also might spend the day detecting specific scents among thousands of others, requiring intense mental concentration. Each function requires dogs to undergo specialized training.”

A decade and a half ago I was optimistic about bomb-sniffing bees and wasps, but nothing seems to have come of that.

Regulating DAOs

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2022/10/regulating-daos.html

In August, the US Treasury’s Office of Foreign Assets Control (OFAC) sanctioned the cryptocurrency platform Tornado Cash, a virtual currency “mixer” designed to make it harder to trace cryptocurrency transactions—and a worldwide favorite money-laundering platform. Americans are now forbidden from using it. According to the US government, Tornado Cash was sanctioned because it allegedly laundered over $7 billion in cryptocurrency, $455 million of which was stolen by a North Korean state-sponsored hacking group.

Tornado Cash is not a traditional company run by human beings, but instead a series of “smart contracts”: self-executing code that exists only as software. Critics argue that prohibiting Americans from using Tornado Cash is a restraint of free speech, pointing to court rulings in the 1990s that established that computer language is a form of language, and that software programs are a form of speech. They also suggest that the Treasury Department has the authority to sanction only humans and not software.

We think that the most useful way to understand the speech issues involved with regulating Tornado Cash and other decentralized autonomous organizations (DAOs) is through an analogy: the golem. There are many versions of the Jewish golem legend, but in most of them, a person-like clay statue comes to life after someone writes the word “truth” in Hebrew on its forehead, and eventually starts doing terrible things. The golem stops only when a rabbi erases one of those letters, turning “truth” into the Hebrew word for “death,” and the golem ceases to function.

The analogy between DAOs and golems is quite precise, and has important consequences for the relationship between free speech and code. Ultimately, just as the golem needed the intervention of a rabbi to stop wreaking havoc on the world, so too do DAOs need to be subject to regulation.

The equivalency of code and free speech was established during the first “crypto wars” of the 1990s, which were about cryptography, not cryptocurrencies. US agencies tried to use export control laws to prevent sophisticated cryptography software from being exported outside the US. Activists and lawyers cleverly showed how code could be transformed into speech and vice versa, turning the source code for a cryptographic product into a printed book and daring US authorities to prevent its export. In 1996, US District Judge Marilyn Hall Patel ruled that computer code is a language, just like German or French, and that coded programs deserve First Amendment protection. That such code is also functional, instructing a computer to do something, was irrelevant to its expressive capabilities, according to Patel’s ruling. However, both a concurring and dissenting opinion argued that computer code also has the “functional purpose of controlling computers and, in that regard, does not command protection under the First Amendment.”

This disagreement highlights the awkward distinction between ordinary language and computer code. Language does not change the world, except insofar as it persuades, informs, or compels other people. Code, however, is a language where words have inherent power. Type the appropriate instructions and the computer will implement them without hesitation, second-guessing, or independence of will. They are like the words inscribed on a golem’s forehead (or the written instructions that, in some versions of the folklore, are placed in its mouth). The golem has no choice, because it is incapable of making choices. The words are code, and the golem is no different from a computer.

Unlike ordinary organizations, DAOs don’t rely on human beings to carry out many of their core functions. Instead, those functions have been translated into a set of instructions that are implemented in software. In the case of Tornado Cash, its code exists as part of Ethereum, a widely used cryptocurrency that can also run arbitrary computer code.

Cryptocurrency zealots thought that DAOs would allow them to place their trust in secure computer code, which would do exactly what they wanted it to do, rather than fallible human beings who might fail or cheat. Humans could still have input, but under rules that were enshrined in self-running software. The past several years of DAO activity has taught these zealots a series of painful and expensive lessons on the limits of both computer security and incomplete contracts: Software has bugs, and contracts may do weird things under unanticipated circumstances. The combination frequently results in multimillion-dollar frauds and thefts.

Further complicating the matter is that individual DAOs can have very different rules. DAOs were supposed to create truly decentralized services that could never turn into a source of state power and coercion. Today, some DAOs talk a big game about decentralization, but provide power to founders and big investors like Andreessen Horowitz. Others are deliberately set up to frustrate outside control. Indeed, the creators of Tornado Cash explicitly wanted to create a golem-like entity that would be immune from law. In doing so, they were following in a long libertarian tradition.

In 2014, Gavin Wood, one of Ethereum’s core developers, gave a talk on what he called “allegality” of decentralized software services. Wood’s argument was very simple. Companies like PayPal employ real people and real lawyers. That meant that “if they provide a service to you that is deemed wrong or illegal … then they get fucked … maybe [go] to prison.” But cryptocurrencies like Bitcoin “had no operator.” By using software running on blockchains rather than people to run your organization, you could do an end-run around normal, human law. You could create services that “cannot be shut down. Not by a court, not by a police force, not by a nation state.” People would be able to set whatever rules they wanted, regardless of what any government prohibited.

Wood’s speech helped inspire the first DAO (The DAO), and his ideas live on in Tornado Cash. Tornado Cash was designed, in its founder’s words, “to be unstoppable.” The way the protocol is “designed, decentralized and autonomous …[,] there’s nobody in charge.” The people who ran Tornado Cash used a decentralized protocol running on the Ethereum computing platform, which is itself radically decentralized. But they used indelible ink. The protocol was deliberately instructed never to accept an update command.

Other elements of Tornado Cash—­its website, and the GitHub repository where its source code was stored—­have been taken down. But the protocol that actually mixes cryptocurrency is still available through the Ethereum network, even if it doesn’t have a user-friendly front end. Like a golem that has been set in motion, it will just keep on going, taking in, processing, and returning cryptocurrency according to its original instructions.
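The "no updates, ever" property is easy to state in code. Here is a toy Python sketch (not the actual Tornado Cash contract, which is Solidity running on Ethereum) of a golem-like object whose rules are fixed at creation and which rejects every later attempt to change them:

```python
class Golem:
    """A toy stand-in for an immutable smart contract: its rules are
    baked in at creation, and it refuses all subsequent amendments."""

    def __init__(self, fee_fraction: float):
        # Bypass our own lock exactly once, at deployment time.
        object.__setattr__(self, "_fee", fee_fraction)

    def __setattr__(self, name, value):
        # The 'indelible ink': once deployed, no update command is accepted.
        raise PermissionError("this contract accepts no updates")

    def mix(self, amount: float) -> float:
        """Carry out the original instructions, forever: take a fee,
        return the rest."""
        return amount * (1 - self._fee)


g = Golem(0.003)
print(g.mix(100.0))  # keeps processing per its original rules
try:
    g._fee = 0.5     # any attempt to amend it fails
except PermissionError as e:
    print(e)
```

The analogy is loose: a Python process can be killed, while a contract on a public blockchain persists as long as the chain does. But the design choice is the same, and it is that choice, not the code's expressive content, that the regulatory argument turns on.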

This gets us to the argument that the US government, by sanctioning a software program, is restraining free speech. Not only is it more complicated than that, but it’s complicated in ways that undercut this argument. OFAC’s actions aren’t aimed against free speech and the publication of source code, as its clarifications have made clear. Researchers are not prohibited from copying, posting, “discussing, teaching about, or including open-source code in written publications, such as textbooks.” GitHub could potentially still host the source code and the project. OFAC’s actions are aimed at preventing persons from using software applications that undercut one of the most basic functions of government: regulating activities that it deems endanger national security.

The question is whether the First Amendment covers golems. When your words are used not to persuade or argue, but to animate a mindless entity that will exist as long as the Ethereum blockchain exists and will carry out your final instructions no matter what, should your golem be immune from legal action?

When Patel issued her famous ruling, she caustically dismissed the argument that “even one drop of ‘direct functionality’” overwhelmed people’s expressive rights. Arguably, the question with Tornado Cash is whether a possibly notional droplet of free speech expressivity can overwhelm the direct functionality of running code, especially code designed to refuse any further human intervention. The Tornado Cash protocol will accept and implement the routine commands described by its protocol: It will still launder cryptocurrency. But the protocol itself is frozen.

We certainly don’t think that the US government should ban DAOs or code running on Ethereum or other blockchains, or demand any universal right of access to their workings. That would be just as sweeping—and wrong—as the general claim that encrypted messaging results in a “lawless space,” or the contrary notion that regulating code is always a prior restraint on free speech. There is wide scope for legitimate disagreement about government regulation of code and its legal authorities over distributed systems.

However, it’s hard not to sympathize with OFAC’s desire to push back against a radical effort to undermine the very idea of government authority. What would happen if the Tornado Cash approach to the law prevailed? That is, what would be the outcome if judges and politicians decided that entities like Tornado Cash could not be regulated, on free speech or any other grounds?

Likely, anyone who wanted to facilitate illegal activities would have a strong incentive to turn their operation into a DAO—and then throw away the key. Ethereum’s programming language is Turing-complete. That means, as Woods argued back in 2014, that one could turn all kinds of organizational rules into software, whether or not they were against the law.

In practice, it wouldn’t be so easy. Turning business principles into running code is hard, and doing it without creating bugs or loopholes is much harder still. Ethereum and other blockchains still have hard limits on computing power. But human ingenuity can accomplish many things when there’s a lot of money at stake.
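To make the point above concrete, here is a toy sketch in Python (not Solidity, and not Tornado Cash’s actual protocol; the class and method names are hypothetical) of what it means to turn organizational rules into software with no administrative escape hatch. The “rules” are simply the only methods the object exposes; nothing in the code can ever change them.

```python
import hashlib

# Toy illustration: a "frozen" organizational rule set expressed as code.
# Once created, the object exposes no method for changing its own rules --
# mirroring a DAO designed to refuse all further updates or commands.

class FrozenMixer:
    """A minimal mixing pool: deposits are recorded as opaque note hashes,
    withdrawals redeem a note exactly once, and there is deliberately
    no owner, no upgrade path, and no account-freezing hook."""

    def __init__(self):
        self._notes = set()  # hashes of outstanding deposit notes

    def deposit(self, note_hash: str) -> None:
        # Anyone may deposit; the pool stores only the note's hash,
        # severing the link between depositor and eventual withdrawer.
        self._notes.add(note_hash)

    def withdraw(self, note_hash: str) -> bool:
        # Presenting a valid note redeems it exactly once.
        if note_hash in self._notes:
            self._notes.remove(note_hash)
            return True
        return False

    # Note what is absent: no set_owner(), no upgrade(), no blacklist().
    # The rules above are the only rules there will ever be.


pool = FrozenMixer()
note = hashlib.sha256(b"my-secret-note").hexdigest()
pool.deposit(note)
print(pool.withdraw(note))  # True: the note redeems once
print(pool.withdraw(note))  # False: and never again
```

The sketch elides everything that makes this hard in practice (zero-knowledge proofs, gas limits, adversarial users), but it captures the structural point: the absence of an admin method is itself a rule, and once such code runs on an immutable ledger, no one, including its authors, retains a lever to pull.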

People have legitimate reasons for seeking anonymity in their financial transactions, but these reasons need to be weighed against other harms to society. As privacy advocate Cory Doctorow wrote recently: “When you combine anonymity with finance—not the right to speak anonymously, but the right to run an investment fund anonymously—you’re rolling out the red carpet for serial scammers, who can run a scam, get caught, change names, and run it again, incorporating the lessons they learned.”

It’s a mistake to defend DAOs on the grounds that code is free speech. Some code is speech, but not all code is speech. And code can also directly affect the world. DAOs, which are in essence autonomous golems, made from code rather than clay, make this distinction especially stark.

This will become even more important as robots become more capable and prevalent. Robots are even more obviously golems than DAOs are, performing actions in the physical world. Should their code enjoy a safe harbor from the law? What if robots, like DAOs, are designed to obey only their initial instructions, however unlawful—and refuse all further updates or commands? Assuming that code is free speech and only free speech, and ignoring its functional purpose, will at best tangle the law up in knots.

Tying free speech arguments to the cause of DAOs like Tornado Cash imperils some of the important free speech victories that were won in the past. But the risks for everyone might be even greater if that argument wins. A world where democratic governments are unable to enforce their laws is not a world where civic spaces or civil liberties will thrive.

This essay was written with Henry Farrell, and previously appeared on Lawfare.com.