Tag Archives: courts

Canadian Citizen Gets Phone Back from Police

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2024/01/canadian-citizen-gets-phone-back-from-police.html

After 175 million failed password guesses, a judge rules that the Canadian police must return a suspect’s phone.

[Judge] Carter said the investigation can continue without the phones, and he noted that Ottawa police have made a formal request to obtain more data from Google.

“This strikes me as a potentially more fruitful avenue of investigation than using brute force to enter the phones,” he said.
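
For a rough sense of scale, here's some back-of-the-envelope arithmetic. The passcode policies below are illustrative assumptions; nothing here comes from the court record beyond the guess count.

```python
# Back-of-the-envelope arithmetic: how far do 175 million guesses get you
# against passcodes of different strengths? The passcode policies below are
# illustrative assumptions, not details from the case.

GUESSES = 175_000_000

def search_space(alphabet_size: int, length: int) -> int:
    """Total number of possible passcodes for a given alphabet and length."""
    return alphabet_size ** length

for label, alphabet, length in [
    ("6-digit PIN", 10, 6),
    ("8-character lowercase password", 26, 8),
    ("10-character mixed-case alphanumeric password", 62, 10),
]:
    space = search_space(alphabet, length)
    coverage = min(GUESSES / space, 1.0)
    print(f"{label}: {space:,} possibilities, {coverage:.4%} searched")
```

A short numeric PIN would have fallen long ago; anything resembling a real passphrase makes 175 million attempts a rounding error.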

AI and Microdirectives

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2023/07/ai-and-microdirectives.html

Imagine a future in which AIs automatically interpret—and enforce—laws.

All day and every day, you constantly receive highly personalized instructions for how to comply with the law, sent directly by your government and law enforcement. You’re told how to cross the street, how fast to drive on the way to work, and what you’re allowed to say or do online—if you’re in any situation that might have legal implications, you’re told exactly what to do, in real time.

Imagine that the computer system formulating these personal legal directives at mass scale is so complex that no one can explain how it reasons or works. But if you ignore a directive, the system will know, and it’ll be used as evidence in the prosecution that’s sure to follow.

This future may not be far off—automatic detection of lawbreaking is nothing new. Speed cameras and traffic-light cameras have been around for years. These systems automatically issue citations to the car’s owner based on the license plate. In such cases, the defendant is presumed guilty unless they prove otherwise, by naming and notifying the driver.

In New York, AI systems equipped with facial recognition technology are being used by businesses to identify shoplifters. Similar AI-powered systems are being used by retailers in Australia and the United Kingdom to identify shoplifters and provide real-time tailored alerts to employees or security personnel. China is experimenting with even more powerful forms of automated legal enforcement and targeted surveillance.

Breathalyzers are another example of automatic detection. They estimate blood alcohol content by calculating the number of alcohol molecules in the breath via an electrochemical reaction or infrared analysis (they’re basically computers with fuel cells or spectrometers attached). And they’re not without controversy: Courts across the country have found serious flaws and technical deficiencies with Breathalyzer devices and the software that powers them. Despite this, criminal defendants struggle to obtain access to devices or their software source code, with Breathalyzer companies and courts often refusing to grant such access. In the few cases where courts have actually ordered such disclosures, that has usually followed costly legal battles spanning many years.
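
The arithmetic inside these devices is simple; the contested part is the measurement and the software around it. Here's a minimal sketch, assuming the conventional 2100:1 blood-to-breath partition ratio that most US evidential devices build in (real instruments, and real statutes, differ in their details):

```python
# Minimal sketch of the breath-to-blood conversion behind most US evidential
# breathalyzers. The 2100:1 partition ratio is the conventional assumption
# built into the devices; actual physiology varies from person to person,
# which is one source of the legal challenges described above.

PARTITION_RATIO = 2100  # assumed ratio of alcohol in blood vs. in breath

def estimated_bac(breath_alcohol_g_per_ml: float) -> float:
    """Estimate blood alcohol concentration (in % BAC, i.e. grams per 100 mL
    of blood) from a measured breath alcohol concentration (g per mL of breath)."""
    blood_alcohol_g_per_ml = breath_alcohol_g_per_ml * PARTITION_RATIO
    return blood_alcohol_g_per_ml * 100  # g/mL -> g/100 mL

# Example: a breath reading of 0.38 micrograms of alcohol per mL of breath
# converts to roughly the 0.08% per-se limit.
print(f"Estimated BAC: {estimated_bac(0.38e-6):.3f}%")
```

The conversion itself is a one-liner; the litigation is about whether the measurement feeding it, and the code wrapping it, can be trusted.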

AI is about to make this issue much more complicated, and could drastically expand the types of laws that can be enforced in this manner. Some legal scholars predict that computationally personalized law and its automated enforcement are the future of law. These would be administered by what Anthony Casey and Anthony Niblett call “microdirectives,” which provide individualized instructions for legal compliance in a particular scenario.

Made possible by advances in surveillance, communications technologies, and big-data analytics, microdirectives will be a new and predominant form of law shaped largely by machines. They are “micro” because they are not impersonal general rules or standards, but tailored to one specific circumstance. And they are “directives” because they prescribe action or inaction required by law.

A Digital Millennium Copyright Act takedown notice is a present-day example of a microdirective. The DMCA’s enforcement is almost fully automated, with copyright “bots” constantly scanning the internet for copyright-infringing material, and automatically sending literally hundreds of millions of DMCA takedown notices daily to platforms and users. A DMCA takedown notice is tailored to the recipient’s specific legal circumstances. It also directs action—remove the targeted content or prove that it’s not infringing—based on the law.
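
As a hypothetical sketch of what such a bot does, the whole pipeline is: fingerprint known works, compare crawled content, and emit a notice tailored to each match. Everything below (the fingerprinting method, the names, and the notice text) is invented for illustration; production systems use robust perceptual fingerprints rather than exact hashes.

```python
import hashlib
from datetime import date

# Hypothetical skeleton of an automated takedown bot, to show the
# "microdirective" shape: detect a specific circumstance, then issue an
# individualized legal instruction. Real systems use perceptual or audio
# fingerprints; the exact-hash matching here is only for illustration.

def fingerprint(content: bytes) -> str:
    return hashlib.sha256(content).hexdigest()

PROTECTED_WORKS = {
    fingerprint(b"...bytes of a protected recording..."): "Example Song (2021)",
}

def takedown_notice(url: str, content: bytes) -> str | None:
    """Return a tailored DMCA notice if the content matches a protected work."""
    work = PROTECTED_WORKS.get(fingerprint(content))
    if work is None:
        return None
    return (
        f"DMCA takedown notice, {date.today().isoformat()}\n"
        f"To the operator of {url}:\n"
        f"The material at this URL matches the copyrighted work '{work}'.\n"
        f"Remove it, or submit a counter-notice asserting non-infringement."
    )

notice = takedown_notice("https://example.com/uploads/123",
                         b"...bytes of a protected recording...")
if notice:
    print(notice)
```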

It’s easy to see how the AI systems being deployed by retailers to identify shoplifters could be redesigned to employ microdirectives. In addition to alerting business owners, the systems could also send alerts to the identified persons themselves, with tailored legal directions or notices.

A future where AIs interpret, apply, and enforce most laws at societal scale like this will exponentially magnify problems around fairness, transparency, and freedom. Forget about software transparency—well-resourced AI firms, like Breathalyzer companies today, would no doubt ferociously guard their systems for competitive reasons. These systems would likely be so complex that even their designers would not be able to explain how the AIs interpret and apply the law—something we’re already seeing with today’s deep learning neural network systems, which are unable to explain their reasoning.

Even the law itself could become hopelessly vast and opaque. Legal microdirectives sent en masse for countless scenarios, each representing authoritative legal findings formulated by opaque computational processes, could create an expansive and increasingly complex body of law that would grow ad infinitum.

And this brings us to the heart of the issue: If you’re accused by a computer, are you entitled to review that computer’s inner workings and potentially challenge its accuracy in court? What does cross-examination look like when the prosecutor’s witness is a computer? How could you possibly access, analyze, and understand all microdirectives relevant to your case in order to challenge the AI’s legal interpretation? How could courts hope to ensure equal application of the law? Like the man from the country in Franz Kafka’s parable in The Trial, you’d die waiting for access to the law, because the law is limitless and incomprehensible.

This system would present an unprecedented threat to freedom. Ubiquitous AI-powered surveillance in society will be necessary to enable such automated enforcement. On top of that, research—including empirical studies conducted by one of us (Penney)—has shown that personalized legal threats or commands that originate from sources of authority—state or corporate—can have powerful chilling effects on people’s willingness to speak or act freely. Imagine receiving very specific legal instructions from law enforcement about what to say or do in a situation: Would you feel you had a choice to act freely?

This is a vision of AI’s invasive and Byzantine law of the future that chills to the bone. It would be unlike any other law system we’ve seen before in human history, and far more dangerous for our freedoms. Indeed, some legal scholars argue that this future would effectively be the death of law.

Yet it is not a future we must endure. Proposed bans on surveillance technology like facial recognition systems can be expanded to cover those enabling invasive automated legal enforcement. Laws can mandate interpretability and explainability for AI systems to ensure everyone can understand and explain how the systems operate. If a system is too complex, maybe it shouldn’t be deployed in legal contexts. Enforcement by personalized legal processes needs to be highly regulated to ensure oversight, and should be employed only where chilling effects are less likely, like in benign government administration or regulatory contexts where fundamental rights and freedoms are not at risk.

AI will inevitably change the course of law. It already has. But we don’t have to accept its most extreme and maximal instantiations, either today or tomorrow.

This essay was written with Jon Penney, and previously appeared on Slate.com.

Class-Action Lawsuit for Scraping Data without Permission

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2023/07/class-action-lawsuit-for-scraping-data-without-permission.html

I have mixed feelings about this class-action lawsuit against OpenAI and Microsoft, claiming that it “scraped 300 billion words from the internet” without either registering as a data broker or obtaining consent. On the one hand, I want this to be a protected fair use of public data. On the other hand, I want us all to be compensated for our uniquely human ability to generate language.

There’s an interesting wrinkle on this. A recent paper showed that using AI-generated text to train another AI invariably “causes irreversible defects.” From a summary:

The tails of the original content distribution disappear. Within a few generations, text becomes garbage, as Gaussian distributions converge and may even become delta functions. We call this effect model collapse.

Just as we’ve strewn the oceans with plastic trash and filled the atmosphere with carbon dioxide, so we’re about to fill the Internet with blah. This will make it harder to train newer models by scraping the web, giving an advantage to firms which already did that, or which control access to human interfaces at scale. Indeed, we already see AI startups hammering the Internet Archive for training data.

This is the same idea that Ted Chiang wrote about: that ChatGPT is a “blurry JPEG of all the text on the Web.” But the paper includes the math that proves the claim.
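
You can watch a toy version of the effect in a few lines. This is my own sketch, not the paper's code: each "generation" fits a Gaussian to samples produced by the previous generation's fit, and because every fit is made from a finite sample, the expected variance shrinks by a factor of (n - 1)/n per generation until the distribution collapses toward a point.

```python
import numpy as np

# Toy illustration of model collapse: each generation's "model" is a Gaussian
# fit to samples drawn from the previous generation's model. Finite-sample
# estimation error compounds, the fitted spread decays, and the tails of the
# original distribution disappear. A sketch of the phenomenon only, not the
# paper's actual experiment.

rng = np.random.default_rng(0)
n = 20                                 # samples per generation
data = rng.normal(0.0, 1.0, size=n)    # generation 0: "human" data

for generation in range(201):
    mu, sigma = data.mean(), data.std()
    if generation % 40 == 0:
        print(f"generation {generation:3d}: mean={mu:+.3f}, std={sigma:.4f}")
    # The next generation is trained only on the previous model's output.
    data = rng.normal(mu, sigma, size=n)
```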

What this means is that text from before last year—text that is known to be human-generated—will become increasingly valuable.

Fines as a Security System

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2023/02/fines-as-a-security-system.html

Tile has an interesting security solution to make its tracking tags harder to use for stalking:

The Anti-Theft Mode feature will make the devices invisible to Scan and Secure, the company’s in-app feature that lets you know if any nearby Tiles are following you. But to activate the new Anti-Theft Mode, the Tile owner will have to verify their real identity with a government-issued ID, submit a biometric scan that helps root out fake IDs, agree to let Tile share their information with law enforcement and agree to be subject to a $1 million penalty if convicted in a court of law of using Tile for criminal activity. So although it technically makes the device easier for stalkers to use Tiles silently, it makes the penalty of doing so high enough to (at least in theory) deter them from trying.

Interesting theory. But it won’t work against attackers who don’t have any money.

Hulls believes the approach is superior to Apple’s solution with AirTag, which emits a sound and notifies iPhone users that one of the trackers is following them.

My complaint about the technical solutions is that they only work for users of the system. Tile security requires an “in-app feature.” Apple’s AirTag “notifies iPhone users.” What we need is a common standard that is implemented on all smartphones, so that people who don’t use the trackers can be alerted if they are being surveilled by one of them.
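
The detection logic such a standard would enable is not complicated: any phone scans for tracker advertisements, remembers when and where it saw each identifier, and warns the user when the same unknown device keeps turning up as they move around. Here's a hedged sketch; the identifiers, thresholds, and helper functions are all invented for illustration, and a real standard would also have to pin down advertisement formats and identifier rotation.

```python
import time
from collections import defaultdict

# Sketch of cross-platform unwanted-tracker detection: log sightings of
# unknown tracker identifiers and alert when one keeps reappearing while the
# user actually moves. All names, thresholds, and formats are illustrative.

SIGHTING_WINDOW_S = 30 * 60     # correlate sightings within 30 minutes
ALERT_AFTER_SIGHTINGS = 3       # how many sightings before alerting
MIN_TRAVEL_METERS = 400         # sightings must span real movement

sightings = defaultdict(list)   # tracker_id -> [(timestamp, (lat, lon)), ...]

def rough_distance_m(a: tuple[float, float], b: tuple[float, float]) -> float:
    """Crude flat-earth distance; real code would use the haversine formula."""
    return ((a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2) ** 0.5 * 111_000

def record_sighting(tracker_id: str, location: tuple[float, float]) -> bool:
    """Record one scan result; return True if the user should be alerted."""
    now = time.time()
    recent = [s for s in sightings[tracker_id] if now - s[0] < SIGHTING_WINDOW_S]
    recent.append((now, location))
    sightings[tracker_id] = recent
    if len(recent) < ALERT_AFTER_SIGHTINGS:
        return False
    # Alert only if the tracker is travelling with the user, not just nearby.
    return rough_distance_m(recent[0][1], recent[-1][1]) > MIN_TRAVEL_METERS

# Example: the same unknown tracker seen three times across a 1.5 km walk.
alert = False
for spot in [(37.7749, -122.4194), (37.7790, -122.4150), (37.7840, -122.4100)]:
    alert = record_sighting("tracker-ab12", spot)
print("alert user:", alert)
```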

Kevin Mitnick Hacked California Law in 1983

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2023/01/kevin-mitnick-hacked-california-law-in-1983.html

Early in his career, Kevin Mitnick successfully hacked California law. He told me the story when he heard about my new book, which he partially recounts in his 2012 book, Ghost in the Wires.

The setup is that he has just discovered that there’s a warrant out for his arrest from the California Youth Authority, and he’s trying to figure out if there’s any way out of it.

As soon as I was settled, I looked in the Yellow Pages for the nearest law school, and spent the next few days and evenings there poring over the Welfare and Institutions Code, but without much hope.

Still, hey, “Where there’s a will…” I found a provision that said that for a nonviolent crime, the jurisdiction of the Juvenile Court expired either when the defendant turned twenty-one or two years after the commitment date, whichever occurred later. For me, that would mean two years from February 1983, when I had been sentenced to the three years and eight months.

Scratch, scratch. A little arithmetic told me that this would occur in about four months. I thought, What if I just disappear until their jurisdiction ends?

This was the Southwestern Law School in Los Angeles. This was a lot of manual research—no search engines in those days. He researched the relevant statutes, and case law that interpreted those statutes. He made copies of everything to hand to his attorney.

I called my attorney to try out the idea on him. His response sounded testy: “You’re absolutely wrong. It’s a fundamental principle of law that if a defendant disappears when there’s a warrant out for him, the time limit is tolled until he’s found, even if it’s years later.”

And he added, “You have to stop playing lawyer. I’m the lawyer. Let me do my job.”

I pleaded with him to look into it, which annoyed him, but he finally agreed. When I called back two days later, he had talked to my Parole Officer, Melvin Boyer, the compassionate guy who had gotten me transferred out of the dangerous jungle at LA County Jail. Boyer had told him, “Kevin is right. If he disappears until February 1985, there’ll be nothing we can do. At that point the warrant will expire, and he’ll be off the hook.”

So he moved to Northern California and lived under an assumed name for four months.

What’s interesting to me is how he approaches legal code in the same way a hacker approaches computer code: poring over the details, looking for a bug—a mistake—leading to an exploitable vulnerability. And this was in the days before you could do any research online. He’s spending days in the law school library.

This is exactly the sort of thing I am writing about in A Hacker’s Mind. Legal code isn’t the same as computer code, but it’s a series of rules with inputs and outputs. And just like computer code, legal code has bugs. And some of those bugs are also vulnerabilities. And some of those vulnerabilities can be exploited—just as Mitnick learned.
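
To make the analogy concrete, here is the jurisdiction rule Mitnick found, written as a function. This is a sketch of the rule exactly as he describes it, not the statute's actual text, and the dates in the example are only approximate.

```python
from datetime import date

# The Welfare and Institutions Code provision as Mitnick describes it, as a
# function: for a nonviolent offense, Juvenile Court jurisdiction expires at
# the defendant's 21st birthday or two years after the commitment date,
# whichever is later. The "bug" he exploited is that the clock runs on the
# calendar, not on time actually served or on whether the defendant can be found.

def jurisdiction_expires(birth_date: date, commitment_date: date) -> date:
    twenty_first_birthday = birth_date.replace(year=birth_date.year + 21)
    two_years_after_commitment = commitment_date.replace(year=commitment_date.year + 2)
    return max(twenty_first_birthday, two_years_after_commitment)

# Approximate dates: Mitnick was born in August 1963 and sentenced in February 1983.
print(jurisdiction_expires(date(1963, 8, 6), date(1983, 2, 1)))
# 1985-02-01: two years after commitment, which is later than his 21st birthday.
```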

Mitnick was a hacker. His attorney was not.

On Alec Baldwin’s Shooting

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2023/01/on-alec-baldwins-shooting.html

We recently learned that Alec Baldwin is being charged with involuntary manslaughter for his accidental shooting on a movie set. I don’t know the details of the case, nor the intricacies of the law, but I have a question about movie props.

Why was an actual gun used on the set? And why were actual bullets used on the set? Why wasn’t it a fake gun: plastic, or metal without a working barrel? Why does it have to fire blanks? Why can’t everyone just pretend, and let someone add the bang and the muzzle flash in post-production?

Movies are filled with fakery. The light sabers in Star Wars weren’t real; the lighting effects and “wooj-wooj” noises were added afterwards. The phasers in Star Trek weren’t real either. Jar Jar Binks was 100% computer generated. So were a gazillion “props” from the Harry Potter movies. Even regular, non-SF, non-magical movies have special effects. They’re easy.

Why are guns different?

Decarbonizing Cryptocurrencies through Taxation

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2023/01/decarbonizing-cryptocurrencies-through-taxation.html

Maintaining bitcoin and other cryptocurrencies causes about 0.3 percent of global CO2 emissions. That may not sound like a lot, but it’s more than the emissions of Switzerland, Croatia, and Norway combined. As many cryptocurrencies crash and the FTX bankruptcy moves into the litigation stage, regulators are likely to scrutinize the cryptocurrency world more than ever before. This presents a perfect opportunity to curb their environmental damage.

The good news is that cryptocurrencies don’t have to be carbon intensive. In fact, some have near-zero emissions. To encourage polluting currencies to reduce their carbon footprint, we need to force buyers to pay for their environmental harms through taxes.

The difference in emissions among cryptocurrencies comes down to how they create new coins. Bitcoin and other high emitters use a system called “proof of work”: to generate coins, participants, or “miners,” have to solve math problems that demand extraordinary computing power. This allows currencies to maintain their decentralized ledger—the blockchain—but requires enormous amounts of energy.
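
For readers who haven't seen it, the "math problem" is brute-force search. A minimal sketch: find a nonce that makes the hash of the block data fall below a difficulty target. There is no shortcut, and every extra bit of difficulty doubles the expected number of attempts, which is where the energy goes.

```python
import hashlib

# Minimal proof-of-work sketch: search for a nonce such that
# SHA-256(block_data + nonce) falls below a difficulty target. The only way
# to find one is trial and error, so the required work (and energy) grows
# exponentially with the difficulty. Toy parameters only; Bitcoin's real
# difficulty is many orders of magnitude higher.

def mine(block_data: bytes, difficulty_bits: int) -> int:
    target = 2 ** (256 - difficulty_bits)
    nonce = 0
    while True:
        digest = hashlib.sha256(block_data + nonce.to_bytes(8, "big")).digest()
        if int.from_bytes(digest, "big") < target:
            return nonce
        nonce += 1

print(mine(b"example block", difficulty_bits=20))  # ~a million attempts on average
```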

Greener alternatives exist. Most notably, the “proof of stake” system enables participants to maintain their blockchain by depositing cryptocurrency holdings in a pool. When the second-largest cryptocurrency, Ethereum, switched from proof of work to proof of stake earlier this year, its energy consumption dropped by more than 99.9% overnight.

Bitcoin and other cryptocurrencies probably won’t follow suit unless forced to, because proof of work offers massive profits to miners—and they’re the ones with power in the system. Multiple legislative levers could be used to entice them to change.

The most blunt solution is to ban cryptocurrency mining altogether. China did this in 2018, but it only made the problem worse; mining moved to other countries with even less efficient energy generation, and emissions went up. The only way for a mining ban to meaningfully reduce carbon emissions is to enact it across most of the globe. Achieving that level of international consensus is, to say the least, unlikely.

A second solution is to prohibit the buying and selling of proof-of-work currencies. The European Parliament’s Committee on Economic and Monetary Affairs considered making such a proposal, but voted against it in March. This is understandable; as with a mining ban, it would be both viewed as paternalistic and difficult to implement politically.

Employing a tax instead of an outright ban would largely skirt these issues. As with taxes on gasoline, tobacco, plastics, and alcohol, a cryptocurrency tax could reduce real-world harm by making consumers pay for it.

Most ways of taxing cryptocurrencies would be inefficient, because they’re easy to circumvent and hard to enforce. To avoid these pitfalls, the tax should be levied as a fixed percentage of each proof-of-work-cryptocurrency purchase. Cryptocurrency exchanges should collect the tax, just as merchants collect sales taxes from customers before passing the sum on to governments. To make it harder to evade, the tax should apply regardless of how the proof-of-work currency is being exchanged—whether for a fiat currency or another cryptocurrency. Most important, any state that implements the tax should target all purchases by citizens in its jurisdiction, even if they buy through exchanges with no legal presence in the country.
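
Mechanically, collection would look like any other point-of-sale levy. Here is a sketch of what an exchange would compute at checkout; the 5 percent rate and the classification table are invented for illustration.

```python
# Sketch of the proposed collection mechanism: the exchange classifies the
# purchased asset by consensus mechanism, adds a fixed surcharge to
# proof-of-work purchases regardless of what the buyer pays with, and remits
# the surcharge to the buyer's government. Rate and table are illustrative.

POW_TAX_RATE = 0.05  # hypothetical 5% levy

CONSENSUS = {
    "BTC": "proof-of-work",
    "LTC": "proof-of-work",
    "XMR": "proof-of-work",
    "ETH": "proof-of-stake",
}

def checkout(asset: str, amount: float) -> dict:
    tax = amount * POW_TAX_RATE if CONSENSUS[asset] == "proof-of-work" else 0.0
    return {"asset": asset, "subtotal": amount,
            "environmental_tax": round(tax, 2), "total": round(amount + tax, 2)}

print(checkout("BTC", 1_000.00))  # taxed
print(checkout("ETH", 1_000.00))  # untaxed
```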

This sort of tax would be transparent and easy to enforce. Because most people buy cryptocurrencies from one of only a few large exchanges—such as Binance, Coinbase, and Kraken—auditing them should be cheap enough that it pays for itself. If an exchange fails to comply, it should be banned.

Even a small tax on proof-of-work currencies would reduce their damage to the planet. Imagine that you’re new to cryptocurrency and want to become a first-time investor. You’re presented with a range of currencies to choose from: bitcoin, ether, litecoin, monero, and others. You notice that all of them except ether add an environmental tax to your purchase price. Which one do you buy?

Countries don’t need to coordinate across borders for a proof-of-work tax on their own citizens to be effective. But early adopters should still consider ways to encourage others to come on board. This has precedent. The European Union is trying to influence global policy with its carbon border adjustments, which are designed to discourage people from buying carbon-intensive products abroad in order to skirt taxes. Similar rules for a proof-of-work tax could persuade other countries to adopt one.

Of course, some people will try to evade the tax, just as people evade every other tax. For example, people might buy tax-free coins on centralized exchanges and then swap them for polluting coins on decentralized exchanges. To some extent, this is inevitable; no tax is perfect. But the effort and technical know-how needed to evade a proof-of-work tax will be a major deterrent.

Even if only a few countries implement this tax—and even if some people evade it—the desirability of bitcoin will fall globally, and the environmental benefit will be significant. A high enough tax could also cause a self-reinforcing cycle that will drive down these cryptocurrencies’ prices. Because the value of many cryptocurrencies relies largely on speculation, they are dependent on future buyers. When speculators are deterred by the tax, the lack of demand will cause the price of bitcoin to fall, which could prompt more current holders to sell—further lowering prices and accelerating the effect. Declining prices will pressure the bitcoin community to abandon proof of work altogether.

Taxing proof-of-work exchanges might hurt them in the short run, but it would not hinder blockchain innovation. Instead, it would redirect innovation toward greener cryptocurrencies. This is no different than how government incentives for electric vehicles encourage carmakers to improve green alternatives to the internal combustion engine. These incentives don’t restrict innovation in automobiles—they promote it.

Taxing environmentally harmful cryptocurrencies can gain support across the political spectrum, from people with varied interests. It would benefit blockchain innovators and cryptocurrency researchers by shifting focus from environmental harm to beneficial uses of the technology. It has the potential to make our planet significantly greener. It would increase government revenues.

Even bitcoin maximalists have reason to embrace the proposal: it would offer the bitcoin community a chance to prove it can survive and grow sustainably.

This essay was written with Christos Porios, and previously appeared in the Atlantic.

Hacking Trespass Law

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2022/12/hacking-trespass-law.html

This article talks about public land in the US that is completely surrounded by private land, which in some cases makes it inaccessible to the public. But there’s a hack:

Some hunters have long believed, however, that the publicly owned parcels on Elk Mountain can be legally reached using a practice called corner-crossing.

Corner-crossing can be visualized in terms of a checkerboard. Ever since the Westward Expansion, much of the Western United States has been divided into alternating squares of public and private land. Corner-crossers, like checker pieces, literally step from one public square to another in diagonal fashion, avoiding trespassing charges. The practice is neither legal nor illegal. Most states discourage it, but none ban it.

It’s an interesting ambiguity in the law: does a checker trespass on the white squares when it moves diagonally from one black square to another? But, of course, the legal battle isn’t really about that. It’s about the rights of property owners vs. the rights of those who wish to walk on this otherwise-inaccessible public land.

This particular hack will be adjudicated in court. State court, I think, which means the answer might be different in different states. It’s not an example I discuss in my new book, but it’s similar to many I do discuss. It’s the act of adjudicating hacks that allows systems to evolve.

Apple’s Device Analytics Can Identify iCloud Users

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2022/11/apples-device-analytics-can-identify-icloud-users.html

Researchers claim that supposedly anonymous device analytics information can identify users:

On Twitter, security researchers Tommy Mysk and Talal Haj Bakry have found that Apple’s device analytics data includes an iCloud account identifier and can be linked directly to a specific user, including their name, date of birth, email, and associated information stored on iCloud.

Apple has long claimed otherwise:

On Apple’s device analytics and privacy legal page, the company says no information collected from a device for analytics purposes is traceable back to a specific user. “iPhone Analytics may include details about hardware and operating system specifications, performance statistics, and data about how you use your devices and applications. None of the collected information identifies you personally,” the company claims.
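
The structural problem is easy to show: if a supposedly anonymous analytics record carries any persistent account identifier, one join against account records re-identifies it. The field names and values below are invented; the researchers' finding was that the analytics payloads carried an identifier tied to the user's iCloud account.

```python
# Why a persistent account ID inside "anonymous" analytics defeats the claim:
# a single join against account records attaches a name to every event.
# Field names and values are invented for illustration.

analytics_events = [
    {"account_id": "A-1234", "event": "app_crash",  "device": "iPhone14,2"},
    {"account_id": "A-1234", "event": "app_launch", "device": "iPhone14,2"},
]

account_records = {
    "A-1234": {"name": "Jane Doe", "email": "jane@example.com"},
}

for event in analytics_events:
    identity = account_records.get(event["account_id"], {})
    print({**event, **identity})  # "anonymous" telemetry, now with a name attached
```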

Apple was just sued for tracking iOS users without their consent, even when they explicitly opt out of tracking.

The Conviction of Uber’s Chief Security Officer

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2022/11/the-conviction-of-ubers-chief-security-officer.html

I have been meaning to write about Joe Sullivan, Uber’s former Chief Security Officer. He was convicted of crimes related to covering up a cyberattack against Uber. It’s a complicated case, and I’m not convinced that he deserved a guilty ruling or that it’s a good thing for the industry.

I may still write something, but until then, this essay on the topic is worth reading.

Regulating DAOs

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2022/10/regulating-daos.html

In August, the US Treasury’s Office of Foreign Assets Control (OFAC) sanctioned the cryptocurrency platform Tornado Cash, a virtual currency “mixer” designed to make it harder to trace cryptocurrency transactions—and a worldwide favorite money-laundering platform. Americans are now forbidden from using it. According to the US government, Tornado Cash was sanctioned because it allegedly laundered over $7 billion in cryptocurrency, $455 million of which was stolen by a North Korean state-sponsored hacking group.

Tornado Cash is not a traditional company run by human beings, but instead a series of “smart contracts”: self-executing code that exists only as software. Critics argue that prohibiting Americans from using Tornado Cash is a restraint of free speech, pointing to court rulings in the 1990s that established that computer language is a form of language, and that software programs are a form of speech. They also suggest that the Treasury Department has the authority to sanction only humans and not software.

We think that the most useful way to understand the speech issues involved with regulating Tornado Cash and other decentralized autonomous organizations (DAOs) is through an analogy: the golem. There are many versions of the Jewish golem legend, but in most of them, a person-like clay statue comes to life after someone writes the word “truth” in Hebrew on its forehead, and eventually starts doing terrible things. The golem stops only when a rabbi erases one of those letters, turning “truth” into the Hebrew word for “death,” and the golem ceases to function.

The analogy between DAOs and golems is quite precise, and has important consequences for the relationship between free speech and code. Ultimately, just as the golem needed the intervention of a rabbi to stop wreaking havoc on the world, so too do DAOs need to be subject to regulation.

The equivalency of code and free speech was established during the first “crypto wars” of the 1990s, which were about cryptography, not cryptocurrencies. US agencies tried to use export control laws to prevent sophisticated cryptography software from being exported outside the US. Activists and lawyers cleverly showed how code could be transformed into speech and vice versa, turning the source code for a cryptographic product into a printed book and daring US authorities to prevent its export. In 1996, US District Judge Marilyn Hall Patel ruled that computer code is a language, just like German or French, and that coded programs deserve First Amendment protection. That such code is also functional, instructing a computer to do something, was irrelevant to its expressive capabilities, according to Patel’s ruling. However, both a concurring and dissenting opinion argued that computer code also has the “functional purpose of controlling computers and, in that regard, does not command protection under the First Amendment.”

This disagreement highlights the awkward distinction between ordinary language and computer code. Language does not change the world, except insofar as it persuades, informs, or compels other people. Code, however, is a language where words have inherent power. Type the appropriate instructions and the computer will implement them without hesitation, second-guessing, or independence of will. They are like the words inscribed on a golem’s forehead (or the written instructions that, in some versions of the folklore, are placed in its mouth). The golem has no choice, because it is incapable of making choices. The words are code, and the golem is no different from a computer.

Unlike ordinary organizations, DAOs don’t rely on human beings to carry out many of their core functions. Instead, those functions have been translated into a set of instructions that are implemented in software. In the case of Tornado Cash, its code exists as part of Ethereum, a widely used cryptocurrency that can also run arbitrary computer code.

Cryptocurrency zealots thought that DAOs would allow them to place their trust in secure computer code, which would do exactly what they wanted it to do, rather than fallible human beings who might fail or cheat. Humans could still have input, but under rules that were enshrined in self-running software. The past several years of DAO activity have taught these zealots a series of painful and expensive lessons on the limits of both computer security and incomplete contracts: Software has bugs, and contracts may do weird things under unanticipated circumstances. The combination frequently results in multimillion-dollar frauds and thefts.

Further complicating the matter is that individual DAOs can have very different rules. DAOs were supposed to create truly decentralized services that could never turn into a source of state power and coercion. Today, some DAOs talk a big game about decentralization, but provide power to founders and big investors like Andreessen Horowitz. Others are deliberately set up to frustrate outside control. Indeed, the creators of Tornado Cash explicitly wanted to create a golem-like entity that would be immune from law. In doing so, they were following in a long libertarian tradition.

In 2014, Gavin Wood, one of Ethereum’s core developers, gave a talk on what he called “allegality” of decentralized software services. Wood’s argument was very simple. Companies like PayPal employ real people and real lawyers. That meant that “if they provide a service to you that is deemed wrong or illegal … then they get fucked … maybe [go] to prison.” But cryptocurrencies like Bitcoin “had no operator.” By using software running on blockchains rather than people to run your organization, you could do an end-run around normal, human law. You could create services that “cannot be shut down. Not by a court, not by a police force, not by a nation state.” People would be able to set whatever rules they wanted, regardless of what any government prohibited.

Wood’s speech helped inspire the first DAO (The DAO), and his ideas live on in Tornado Cash. Tornado Cash was designed, in its founder’s words, “to be unstoppable.” The way the protocol is “designed, decentralized and autonomous …[,] there’s nobody in charge.” The people who ran Tornado Cash used a decentralized protocol running on the Ethereum computing platform, which is itself radically decentralized. But they used indelible ink. The protocol was deliberately instructed never to accept an update command.
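
The design choice is easy to caricature in ordinary code. A rough Python analogy (not Tornado Cash's actual Solidity): an upgradeable contract keeps an owner key and a setter, while an immutable one exposes nothing but its core function, so once deployed there is no command anyone, including its authors, can send to change or stop it.

```python
# Rough analogy, in Python rather than Solidity, for the "indelible ink"
# design described above. The point is what is absent from the second class:
# no owner, no setters, no kill switch.

class UpgradeableMixer:
    def __init__(self, owner: str, fee: float):
        self._owner, self._fee = owner, fee

    def set_fee(self, caller: str, fee: float) -> None:
        if caller != self._owner:
            raise PermissionError("only the owner may update the contract")
        self._fee = fee

    def mix(self, amount: float) -> float:
        return amount * (1 - self._fee)


class ImmutableMixer:
    def __init__(self, fee: float):
        self._fee = fee  # fixed at deployment, forever

    def mix(self, amount: float) -> float:
        return amount * (1 - self._fee)
    # No update path of any kind: the only thing it will ever do is mix.
```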

Other elements of Tornado Cash—its website, and the GitHub repository where its source code was stored—have been taken down. But the protocol that actually mixes cryptocurrency is still available through the Ethereum network, even if it doesn’t have a user-friendly front end. Like a golem that has been set in motion, it will just keep on going, taking in, processing, and returning cryptocurrency according to its original instructions.

This gets us to the argument that the US government, by sanctioning a software program, is restraining free speech. Not only is it more complicated than that, but it’s complicated in ways that undercut this argument. OFAC’s actions aren’t aimed against free speech and the publication of source code, as its clarifications have made clear. Researchers are not prohibited from copying, posting, “discussing, teaching about, or including open-source code in written publications, such as textbooks.” GitHub could potentially still host the source code and the project. OFAC’s actions are aimed at preventing persons from using software applications that undercut one of the most basic functions of government: regulating activities that it deems endanger national security.

The question is whether the First Amendment covers golems. When your words are used not to persuade or argue, but to animate a mindless entity that will exist as long as the Ethereum blockchain exists and will carry out your final instructions no matter what, should your golem be immune from legal action?

When Patel issued her famous ruling, she caustically dismissed the argument that “even one drop of ‘direct functionality’” overwhelmed people’s expressive rights. Arguably, the question with Tornado Cash is whether a possibly notional droplet of free speech expressivity can overwhelm the direct functionality of running code, especially code designed to refuse any further human intervention. The Tornado Cash protocol will accept and implement the routine commands described by its protocol: It will still launder cryptocurrency. But the protocol itself is frozen.

We certainly don’t think that the US government should ban DAOs or code running on Ethereum or other blockchains, or demand any universal right of access to their workings. That would be just as sweeping—and wrong—as the general claim that encrypted messaging results in a “lawless space,” or the contrary notion that regulating code is always a prior restraint on free speech. There is wide scope for legitimate disagreement about government regulation of code and its legal authorities over distributed systems.

However, it’s hard not to sympathize with OFAC’s desire to push back against a radical effort to undermine the very idea of government authority. What would happen if the Tornado Cash approach to the law prevailed? That is, what would be the outcome if judges and politicians decided that entities like Tornado Cash could not be regulated, on free speech or any other grounds?

Likely, anyone who wanted to facilitate illegal activities would have a strong incentive to turn their operation into a DAO—and then throw away the key. Ethereum’s programming language is Turing-complete. That means, as Woods argued back in 2014, that one could turn all kinds of organizational rules into software, whether or not they were against the law.

In practice, it wouldn’t be so easy. Turning business principles into running code is hard, and doing it without creating bugs or loopholes is much harder still. Ethereum and other blockchains still have hard limits on computing power. But human ingenuity can accomplish many things when there’s a lot of money at stake.

People have legitimate reasons for seeking anonymity in their financial transactions, but these reasons need to be weighed against other harms to society. As privacy advocate Cory Doctorow wrote recently: “When you combine anonymity with finance—not the right to speak anonymously, but the right to run an investment fund anonymously—you’re rolling out the red carpet for serial scammers, who can run a scam, get caught, change names, and run it again, incorporating the lessons they learned.”

It’s a mistake to defend DAOs on the grounds that code is free speech. Some code is speech, but not all code is speech. And code can also directly affect the world. DAOs, which are in essence autonomous golems, made from code rather than clay, make this distinction especially stark.

This will become even more important as robots become more capable and prevalent. Robots are even more obviously golems than DAOs are, performing actions in the physical world. Should their code enjoy a safe harbor from the law? What if robots, like DAOs, are designed to obey only their initial instructions, however unlawful—and refuse all further updates or commands? Assuming that code is free speech and only free speech, and ignoring its functional purpose, will at best tangle the law up in knots.

Tying free speech arguments to the cause of DAOs like Tornado Cash imperils some of the important free speech victories that were won in the past. But the risks for everyone might be even greater if that argument wins. A world where democratic governments are unable to enforce their laws is not a world where civic spaces or civil liberties will thrive.

This essay was written with Henry Farrell, and previously appeared on Lawfare.com.

Spyware Maker Intellexa Sued by Journalist

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2022/10/spyware-maker-intellexa-sued-by-journalist.html

The Greek journalist Thanasis Koukakis was spied on by his own government, with a commercial spyware product called “Predator.” That product is sold by a company in North Macedonia called Cytrox, which is in turn owned by an Israeli company called Intellexa.

Koukakis is suing Intellexa.

The lawsuit filed by Koukakis takes aim at Intellexa and its executive, alleging a criminal breach of privacy and communication laws, reports Haaretz. The founder of Intellexa, a former Israeli intelligence commander named Tal Dilian, is listed as one of the defendants in the suit, as is another shareholder, Sara Hemo, and the firm itself. The objective of the suit, Koukakis says, is to spur an investigation to determine whether a criminal indictment should be brought against the defendants.

Why does it always seem to be Israel? The world would be a much safer place if that government stopped this cyberweapons arms trade from inside its borders.

Facebook Has No Idea What Data It Has

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2022/09/facebook-has-no-idea-what-data-it-has.html

This is from a court deposition:

Facebook’s stonewalling has been revealing on its own, providing variations on the same theme: It has amassed so much data on so many billions of people and organized it so confusingly that full transparency is impossible on a technical level. In the March 2022 hearing, Zarashaw and Steven Elia, a software engineering manager, described Facebook as a data-processing apparatus so complex that it defies understanding from within. The hearing amounted to two high-ranking engineers at one of the most powerful and resource-flush engineering outfits in history describing their product as an unknowable machine.

The special master at times seemed in disbelief, as when he questioned the engineers over whether any documentation existed for a particular Facebook subsystem. “Someone must have a diagram that says this is where this data is stored,” he said, according to the transcript. Zarashaw responded: “We have a somewhat strange engineering culture compared to most where we don’t generate a lot of artifacts during the engineering process. Effectively the code is its own design document often.” He quickly added, “For what it’s worth, this is terrifying to me when I first joined as well.”

[…]

Facebook’s inability to comprehend its own functioning took the hearing up to the edge of the metaphysical. At one point, the court-appointed special master noted that the “Download Your Information” file provided to the suit’s plaintiffs must not have included everything the company had stored on those individuals because it appears to have no idea what it truly stores on anyone. Can it be that Facebook’s designated tool for comprehensively downloading your information might not actually download all your information? This, again, is outside the boundaries of knowledge.

“The solution to this is unfortunately exactly the work that was done to create the DYI file itself,” noted Zarashaw. “And the thing I struggle with here is in order to find gaps in what may not be in DYI file, you would by definition need to do even more work than was done to generate the DYI files in the first place.”

The systemic fogginess of Facebook’s data storage made answering even the most basic question futile. At another point, the special master asked how one could find out which systems actually contain user data that was created through machine inference.

“I don’t know,” answered Zarashaw. “It’s a rather difficult conundrum.”

I’m not surprised. These systems are so complex that no humans understand them anymore. That allows us to do things we couldn’t do otherwise, but it’s also a problem.

EDITED TO ADD: Another article.

The Justice Department Will No Longer Charge Security Researchers with Criminal Hacking

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2022/05/the-justice-department-will-no-longer-charge-security-researchers-with-criminal-hacking.html

Following a recent Supreme Court ruling, the Justice Department will no longer charge “good faith” security researchers with cybercrimes:

The policy for the first time directs that good-faith security research should not be charged. Good faith security research means accessing a computer solely for purposes of good-faith testing, investigation, and/or correction of a security flaw or vulnerability, where such activity is carried out in a manner designed to avoid any harm to individuals or the public, and where the information derived from the activity is used primarily to promote the security or safety of the class of devices, machines, or online services to which the accessed computer belongs, or those who use such devices, machines, or online services.

[…]

The new policy states explicitly the longstanding practice that “the department’s goals for CFAA enforcement are to promote privacy and cybersecurity by upholding the legal right of individuals, network owners, operators, and other persons to ensure the confidentiality, integrity, and availability of information stored in their information systems.” Accordingly, the policy clarifies that hypothetical CFAA violations that have concerned some courts and commentators are not to be charged. Embellishing an online dating profile contrary to the terms of service of the dating website; creating fictional accounts on hiring, housing, or rental websites; using a pseudonym on a social networking site that prohibits them; checking sports scores at work; paying bills at work; or violating an access restriction contained in a term of service are not themselves sufficient to warrant federal criminal charges. The policy focuses the department’s resources on cases where a defendant is either not authorized at all to access a computer or was authorized to access one part of a computer—such as one email account—and, despite knowing about that restriction, accessed a part of the computer to which his authorized access did not extend, such as other users’ emails.

News article.

Hackers Using Fake Police Data Requests against Tech Companies

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2022/04/hackers-using-fake-police-data-requests-against-tech-companies.html

Brian Krebs has a detailed post about hackers using fake police data requests to trick companies into handing over data.

Virtually all major technology companies serving large numbers of users online have departments that routinely review and process such requests, which are typically granted as long as the proper documents are provided and the request appears to come from an email address connected to an actual police department domain name.

But in certain circumstances – such as a case involving imminent harm or death – an investigating authority may make what’s known as an Emergency Data Request (EDR), which largely bypasses any official review and does not require the requestor to supply any court-approved documents.

It is now clear that some hackers have figured out there is no quick and easy way for a company that receives one of these EDRs to know whether it is legitimate. Using their illicit access to police email systems, the hackers will send a fake EDR along with an attestation that innocent people will likely suffer greatly or die unless the requested data is provided immediately.

In this scenario, the receiving company finds itself caught between two unsavory outcomes: failing to immediately comply with an EDR – and potentially having someone’s blood on their hands – or possibly leaking a customer record to the wrong person.

Another article claims that both Apple and Facebook (or Meta, or whatever they want to be called now) fell for this scam.

We allude to this kind of risk in our 2015 “Keys Under Doormats” paper:

Third, exceptional access would create concentrated targets that could attract bad actors. Security credentials that unlock the data would have to be retained by the platform provider, law enforcement agencies, or some other trusted third party. If law enforcement’s keys guaranteed access to everything, an attacker who gained access to these keys would enjoy the same privilege. Moreover, law enforcement’s stated need for rapid access to data would make it impractical to store keys offline or split keys among multiple keyholders, as security engineers would normally do with extremely high-value credentials.

The “credentials” are even more insecure than we could have imagined: access to an email address. And the data, of course, isn’t very secure. But imagine how this kind of thing could be abused with a law enforcement encryption backdoor.
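
To see how thin that credential really is, here's a sketch of the only validation a provider can realistically perform on an incoming EDR (domains and field names are invented). Every check passes as long as the attacker controls one genuine police mailbox, which is exactly the scenario Krebs describes.

```python
# Sketch of the checks a provider can realistically run on an emergency data
# request: is the sender's domain a known police domain, and does the request
# assert imminent harm? A hijacked police mailbox passes both tests, which is
# the weakness the fake-EDR attacks exploit. Domains and fields are invented.

KNOWN_POLICE_DOMAINS = {"police.exampletown.gov", "sheriff.examplecounty.us"}

def looks_legitimate(request: dict) -> bool:
    sender_domain = request["from"].rsplit("@", 1)[-1].lower()
    return (
        sender_domain in KNOWN_POLICE_DOMAINS            # defeated by account takeover
        and request.get("asserts_imminent_harm", False)  # attacker simply asserts it
    )

fake_edr = {
    "from": "detective.smith@police.exampletown.gov",  # compromised real mailbox
    "asserts_imminent_harm": True,
    "target_account": "victim@example.com",
}
print(looks_legitimate(fake_edr))  # True: indistinguishable from a real EDR
```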

Merck Wins Insurance Lawsuit re NotPetya Attack

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2022/01/merck-wins-insurance-lawsuit-re-notpetya-attack.html

The insurance company Ace American has to pay for the losses:

On 6th December 2021, the New Jersey Superior Court granted partial summary judgment (attached) in favour of Merck and International Indemnity, declaring that the War or Hostile Acts exclusion was inapplicable to the dispute.

Merck suffered US$1.4 billion in business interruption losses from the Notpetya cyber attack of 2017 which were claimed against “all risks” property re/insurance policies providing coverage for losses resulting from destruction or corruption of computer data and software.

The parties disputed whether the Notpetya malware which affected Merck’s computers in 2017 was an instrument of the Russian government, so that the War or Hostile Acts exclusion would apply to the loss.

The Court noted that Merck was a sophisticated and knowledgeable party, but there was no indication that the exclusion had been negotiated since it was in standard language. The Court, therefore, applied, under New Jersey law, the doctrine of construction of insurance contracts that gives prevalence to the reasonable expectations of the insured, even in exceptional circumstances when the literal meaning of the policy is plain.

Merck argued that the attack was not “an official state action,” which I’m surprised wasn’t successfully disputed.

Slashdot thread.

San Francisco Police Illegally Spying on Protesters

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2022/01/san-francisco-police-illegally-spying-on-protesters.html

Last summer, the San Francisco police illegally used surveillance cameras at the George Floyd protests. The EFF is suing the police:

This surveillance invaded the privacy of protesters, targeted people of color, and chills and deters participation and organizing for future protests. The SFPD also violated San Francisco’s new Surveillance Technology Ordinance. It prohibits city agencies like the SFPD from acquiring, borrowing, or using surveillance technology, without prior approval from the city’s Board of Supervisors, following an open process that includes public participation. Here, the SFPD went through no such process before spying on protesters with this network of surveillance cameras.

It feels like a pretty easy case. There’s a law, and the SF police didn’t follow it.

Tech billionaire Chris Larsen is on the side of the police. He thinks that the surveillance is a good thing, and wrote an op-ed defending it.

I wouldn’t be writing about this at all except that Chris is a board member of EPIC, and used his EPIC affiliation in the op-ed to bolster his own credentials. (Bizarrely, he linked to an EPIC page that directly contradicts his position.) In his op-ed, he mischaracterized the EFF’s actions and the facts of the lawsuit. It’s a mess.

The plaintiffs in the lawsuit wrote a good rebuttal to Larsen’s piece. And this week, EPIC published what is effectively its own rebuttal:

One of the fundamental principles that underlies EPIC’s work (and the work of many other groups) on surveillance oversight is that individuals should have the power to decide whether surveillance tools are used in their communities and to impose limits on their use. We have fought for years to shed light on the development, procurement, and deployment of such technologies and have worked to ensure that they are subject to independent oversight through hearings, legal challenges, petitions, and other public forums. The CCOPS model, which was developed by ACLU affiliates and other coalition partners in California and implemented through the San Francisco ordinance, is a powerful mechanism to enable public oversight of dangerous surveillance tools. The access, retention, and use policies put in place by the neighborhood business associations operating these networks provide necessary, but not sufficient, protections against abuse. Strict oversight is essential to promote both privacy and community safety, which includes freedom from arbitrary police action and the freedom to assemble.

So far, EPIC has not done anything about Larsen still being on its board. (Others have criticized them for keeping him on.) I don’t know if I have an opinion on this. Larsen has done good work on financial privacy regulations, which is a good thing. But he seems to be funding all these surveillance cameras in San Francisco, which is really bad.