Tag Archives: law enforcement

The Hacker Tool to Get Personal Data from Credit Bureaus

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2023/09/the-hacker-tool-to-get-personal-data-from-credit-bureaus.html

The new site 404 Media has a good article on how hackers are cheaply getting personal information from credit bureaus:

This is the result of a secret weapon criminals are selling access to online that appears to tap into an especially powerful set of data: the target’s credit header. This is personal information that the credit bureaus Experian, Equifax, and TransUnion have on most adults in America via their credit cards. Through a complex web of agreements and purchases, that data trickles down from the credit bureaus to other companies who offer it to debt collectors, insurance companies, and law enforcement.

A 404 Media investigation has found that criminals have managed to tap into that data supply chain, in some cases by stealing former law enforcement officers’ identities, and are selling unfettered access to their criminal cohorts online. The tool 404 Media tested has also been used to gather information on high-profile targets such as Elon Musk, Joe Rogan, and even President Joe Biden, seemingly without restriction. 404 Media verified that although not always sensitive, at least some of that data is accurate.

Backdoor in TETRA Police Radios

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2023/07/backdoor-in-tetra-police-radios.html

Seems that there is a deliberate backdoor in the twenty-year-old TErrestrial Trunked RAdio (TETRA) standard used by police forces around the world.

The European Telecommunications Standards Institute (ETSI), an organization that standardizes technologies across the industry, first created TETRA in 1995. Since then, TETRA has been used in products, including radios, sold by Motorola, Airbus, and more. Crucially, TETRA is not open-source. Instead, it relies on what the researchers describe in their presentation slides as “secret, proprietary cryptography,” meaning it is typically difficult for outside experts to verify how secure the standard really is.

The researchers said they worked around this limitation by purchasing a TETRA-powered radio from eBay. In order to then access the cryptographic component of the radio itself, Wetzels said the team found a vulnerability in an interface of the radio.

[…]

Most interesting are the researchers’ findings of what they describe as the backdoor in TEA1. Ordinarily, radios using TEA1 used an 80-bit key. But Wetzels said the team found a “secret reduction step” which dramatically lowers the amount of entropy the initial key offered. An attacker who followed this step would then be able to decrypt intercepted traffic with consumer-level hardware and a cheap software-defined radio dongle.
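
To see why a secret reduction step is so damaging, here is a minimal sketch (not the actual TEA1 cipher) of how an 80-bit key, once collapsed into a roughly 32-bit effective keyspace (the figure the researchers reported), becomes trivially brute-forceable:

```python
# Minimal sketch, NOT the real TEA1 algorithm: illustrates how a hidden
# "reduction step" collapses an 80-bit key into a much smaller effective
# keyspace, making exhaustive search practical on commodity hardware.
import hashlib

def reduce_key(key_80bit: bytes) -> int:
    """Hypothetical reduction: map an 80-bit key into a 32-bit value."""
    assert len(key_80bit) == 10                 # 80 bits of nominal key material
    digest = hashlib.sha256(key_80bit).digest()
    return int.from_bytes(digest[:4], "big")    # only 2**32 possibilities remain

# An attacker who knows the reduction step no longer searches 2**80 keys:
print(f"nominal keyspace: 2**80 = {2**80:.2e}")
print(f"reduced keyspace: 2**32 = {2**32:.2e}")  # feasible with cheap hardware and an SDR dongle
```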

Looks like the encryption algorithm was intentionally weakened by intelligence agencies to facilitate easy eavesdropping.

Specifically on the researchers’ claims of a backdoor in TEA1, Boyer added “At this time, we would like to point out that the research findings do not relate to any backdoors. The TETRA security standards have been specified together with national security agencies and are designed for and subject to export control regulations which determine the strength of the encryption.”

And I would like to point out that that’s the very definition of a backdoor.

Why aren’t we done with secret, proprietary cryptography? It’s just not a good idea.

Details of the security analysis. Another news article.

New York Using AI to Detect Subway Fare Evasion

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2023/07/new-york-using-ai-to-detect-subway-fare-evasion.html

The details are scant—the article is based on a “heavily redacted” contract—but the New York subway authority is using an “AI system” to detect people who don’t pay the subway fare.

Joana Flores, an MTA spokesperson, said the AI system doesn’t flag fare evaders to New York police, but she declined to comment on whether that policy could change. A police spokesperson declined to comment.

If we spent just one-tenth of the effort we spend prosecuting the poor on prosecuting the rich, it would be a very different world.

AI and Microdirectives

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2023/07/ai-and-microdirectives.html

Imagine a future in which AIs automatically interpret—and enforce—laws.

All day and every day, you constantly receive highly personalized instructions for how to comply with the law, sent directly by your government and law enforcement. You’re told how to cross the street, how fast to drive on the way to work, and what you’re allowed to say or do online—if you’re in any situation that might have legal implications, you’re told exactly what to do, in real time.

Imagine that the computer system formulating these personal legal directives at mass scale is so complex that no one can explain how it reasons or works. But if you ignore a directive, the system will know, and it’ll be used as evidence in the prosecution that’s sure to follow.

This future may not be far off—automatic detection of lawbreaking is nothing new. Speed cameras and traffic-light cameras have been around for years. These systems automatically issue citations to the car’s owner based on the license plate. In such cases, the defendant is presumed guilty unless they prove otherwise, by naming and notifying the driver.

In New York, AI systems equipped with facial recognition technology are being used by businesses to identify shoplifters. Similar AI-powered systems are being used by retailers in Australia and the United Kingdom to identify shoplifters and provide real-time tailored alerts to employees or security personnel. China is experimenting with even more powerful forms of automated legal enforcement and targeted surveillance.

Breathalyzers are another example of automatic detection. They estimate blood alcohol content by calculating the number of alcohol molecules in the breath via an electrochemical reaction or infrared analysis (they’re basically computers with fuel cells or spectrometers attached). And they’re not without controversy: Courts across the country have found serious flaws and technical deficiencies with Breathalyzer devices and the software that powers them. Despite this, criminal defendants struggle to obtain access to devices or their software source code, with Breathalyzer companies and courts often refusing to grant such access. In the few cases where courts have actually ordered such disclosures, that has usually followed costly legal battles spanning many years.
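
To make the arithmetic concrete, here is a hedged sketch of the conversion most US breathalyzers assume: a fixed 2100:1 blood-to-breath partition ratio. Real instruments layer sensor calibration, corrections, and proprietary software on top of this, which is exactly where the litigated flaws tend to live.

```python
# Hedged sketch of the arithmetic behind a breathalyzer reading. US devices
# conventionally assume a 2100:1 blood-to-breath partition ratio: the alcohol
# in 2100 mL of deep-lung breath is taken to equal the alcohol in 1 mL of
# blood. Real instruments add calibration curves, temperature corrections,
# and proprietary software that this sketch omits.
def bac_from_breath(breath_conc_g_per_L: float, partition_ratio: float = 2100) -> float:
    """Convert breath alcohol (grams per liter of breath) to BAC (grams per 100 mL of blood)."""
    g_per_mL_blood = (breath_conc_g_per_L / 1000) * partition_ratio  # g per mL of blood
    return g_per_mL_blood * 100                                      # g per 100 mL (percent BAC)

# Example: about 0.00038 g/L of breath corresponds to roughly 0.08% BAC,
# the usual US legal limit for driving.
print(round(bac_from_breath(0.00038), 3))
```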

AI is about to make this issue much more complicated, and could drastically expand the types of laws that can be enforced in this manner. Some legal scholars predict that computationally personalized law and its automated enforcement are the future of law. These would be administered by what Anthony Casey and Anthony Niblett call “microdirectives,” which provide individualized instructions for legal compliance in a particular scenario.

Made possible by advances in surveillance, communications technologies, and big-data analytics, microdirectives will be a new and predominant form of law shaped largely by machines. They are “micro” because they are not impersonal general rules or standards, but tailored to one specific circumstance. And they are “directives” because they prescribe action or inaction required by law.

A Digital Millennium Copyright Act takedown notice is a present-day example of a microdirective. The DMCA’s enforcement is almost fully automated, with copyright “bots” constantly scanning the internet for copyright-infringing material, and automatically sending literally hundreds of millions of DMCA takedown notices daily to platforms and users. A DMCA takedown notice is tailored to the recipient’s specific legal circumstances. It also directs action—remove the targeted content or prove that it’s not infringing—based on the law.
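
As a rough illustration of how automated this has become, here is a minimal sketch of the takedown pattern: fingerprint content, match it against a rightsholder’s catalog, and emit a notice tailored to that specific upload. The fingerprinting method and notice fields are invented for illustration and are not any platform’s actual API.

```python
# Minimal, illustrative sketch of an automated takedown "bot"; real systems
# use perceptual/audio fingerprints and formal DMCA notice formats.
import hashlib

def fingerprint(content: bytes) -> str:
    # A cryptographic hash stands in here for a real content fingerprint.
    return hashlib.sha256(content).hexdigest()

# Hypothetical rightsholder catalog mapping fingerprints to known works.
CATALOG = {fingerprint(b"example copyrighted work"): "Example Work"}

def takedown_notice(upload_url: str, content: bytes):
    """Return a notice tailored to this upload if it matches the catalog, else None."""
    work = CATALOG.get(fingerprint(content))
    if work is None:
        return None
    return {
        "work": work,
        "infringing_url": upload_url,
        "directive": "remove the targeted content or assert that it is non-infringing",
    }

print(takedown_notice("https://example.com/uploads/123", b"example copyrighted work"))
```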

It’s easy to see how the AI systems being deployed by retailers to identify shoplifters could be redesigned to employ microdirectives. In addition to alerting business owners, the systems could also send alerts to the identified persons themselves, with tailored legal directions or notices.

A future where AIs interpret, apply, and enforce most laws at societal scale like this will exponentially magnify problems around fairness, transparency, and freedom. Forget about software transparency—well-resourced AI firms, like Breathalyzer companies today, would no doubt ferociously guard their systems for competitive reasons. These systems would likely be so complex that even their designers would not be able to explain how the AIs interpret and apply the law—something we’re already seeing with today’s deep learning neural network systems, which are unable to explain their reasoning.

Even the law itself could become hopelessly vast and opaque. Legal microdirectives sent en masse for countless scenarios, each representing authoritative legal findings formulated by opaque computational processes, could create an expansive and increasingly complex body of law that would grow ad infinitum.

And this brings us to the heart of the issue: If you’re accused by a computer, are you entitled to review that computer’s inner workings and potentially challenge its accuracy in court? What does cross-examination look like when the prosecutor’s witness is a computer? How could you possibly access, analyze, and understand all microdirectives relevant to your case in order to challenge the AI’s legal interpretation? How could courts hope to ensure equal application of the law? Like the man from the country in Franz Kafka’s parable in The Trial, you’d die waiting for access to the law, because the law is limitless and incomprehensible.

This system would present an unprecedented threat to freedom. Ubiquitous AI-powered surveillance in society will be necessary to enable such automated enforcement. On top of that, research—including empirical studies conducted by one of us (Penney)—has shown that personalized legal threats or commands that originate from sources of authority—state or corporate—can have powerful chilling effects on people’s willingness to speak or act freely. Imagine receiving very specific legal instructions from law enforcement about what to say or do in a situation: Would you feel you had a choice to act freely?

This is a vision of AI’s invasive and Byzantine law of the future that chills to the bone. It would be unlike any other law system we’ve seen before in human history, and far more dangerous for our freedoms. Indeed, some legal scholars argue that this future would effectively be the death of law.

Yet it is not a future we must endure. Proposed bans on surveillance technology like facial recognition systems can be expanded to cover those enabling invasive automated legal enforcement. Laws can mandate interpretability and explainability for AI systems to ensure everyone can understand and explain how the systems operate. If a system is too complex, maybe it shouldn’t be deployed in legal contexts. Enforcement by personalized legal processes needs to be highly regulated to ensure oversight, and should be employed only where chilling effects are less likely, like in benign government administration or regulatory contexts where fundamental rights and freedoms are not at risk.

AI will inevitably change the course of law. It already has. But we don’t have to accept its most extreme and maximal instantiations, either today or tomorrow.

This essay was written with Jon Penney, and previously appeared on Slate.com.

Self-Driving Cars Are Surveillance Cameras on Wheels

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2023/07/self-driving-cars-are-surveillance-cameras-on-wheels.html

Police are already using self-driving car footage as video evidence:

While security cameras are commonplace in American cities, self-driving cars represent a new level of access for law enforcement—and a new method for encroachment on privacy, advocates say. Crisscrossing the city on their routes, self-driving cars capture a wider swath of footage. And it’s easier for law enforcement to turn to one company with a large repository of videos and a dedicated response team than to reach out to all the businesses in a neighborhood with security systems.

“We’ve known for a long time that they are essentially surveillance cameras on wheels,” said Chris Gilliard, a fellow at the Social Science Research Council. “We’re supposed to be able to go about our business in our day-to-day lives without being surveilled unless we are suspected of a crime, and each little bit of this technology strips away that ability.”

[…]

While self-driving services like Waymo and Cruise have yet to achieve the same level of market penetration as Ring, the wide range of video they capture while completing their routes presents other opportunities. In addition to the San Francisco homicide, Bloomberg’s review of court documents shows police have sought footage from Waymo and Cruise to help solve hit-and-runs, burglaries, aggravated assaults, a fatal collision and an attempted kidnapping.

In all cases reviewed by Bloomberg, court records show that police collected footage from Cruise and Waymo shortly after obtaining a warrant. In several cases, Bloomberg could not determine whether the recordings had been used in the resulting prosecutions; in a few of the cases, law enforcement and attorneys said the footage had not played a part, or was only a formality. However, video evidence has become a lynchpin of criminal cases, meaning it’s likely only a matter of time.

Identifying the Idaho Killer

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2023/06/identifying-the-idaho-killer.html

The New York Times has a long article on the investigative techniques used to identify the person who stabbed and killed four University of Idaho students.

Pay attention to the techniques:

The case has shown the degree to which law enforcement investigators have come to rely on the digital footprints that ordinary Americans leave in nearly every facet of their lives. Online shopping, car sales, carrying a cellphone, drives along city streets and amateur genealogy all played roles in an investigation that was solved, in the end, as much through technology as traditional sleuthing.

[…]

At that point, investigators decided to try genetic genealogy, a method that until now has been used primarily to solve cold cases, not active murder investigations. Among the growing number of genealogy websites that help people trace their ancestors and relatives via their own DNA, some allow users to select an option that permits law enforcement to compare crime scene DNA samples against the websites’ data.

A distant cousin who has opted into the system can help investigators build a family tree from crime scene DNA to triangulate and identify a potential perpetrator of a crime.

[…]

On Dec. 23, investigators sought and received Mr. Kohberger’s cellphone records. The results added more to their suspicions: His phone was moving around in the early morning hours of Nov. 13, but was disconnected from cell networks—perhaps turned off—in the two hours around when the killings occurred.

FBI (and Others) Shut Down Genesis Market

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2023/04/fbi-and-others-shut-down-genesis-market.html

Genesis Market is shut down:

Active since 2018, Genesis Market’s slogan was, “Our store sells bots with logs, cookies, and their real fingerprints.” Customers could search for infected systems with a variety of options, including by Internet address or by specific domain names associated with stolen credentials.

But earlier today, multiple domains associated with Genesis had their homepages replaced with a seizure notice from the FBI, which said the domains were seized pursuant to a warrant issued by the U.S. District Court for the Eastern District of Wisconsin.

The U.S. Attorney’s Office for the Eastern District of Wisconsin did not respond to requests for comment. The FBI declined to comment.

But sources close to the investigation tell KrebsOnSecurity that law enforcement agencies in the United States, Canada and across Europe are currently serving arrest warrants on dozens of individuals thought to support Genesis, either by maintaining the site or selling the service bot logs from infected systems.

The seizure notice includes the seals of law enforcement entities from several countries, including Australia, Canada, Denmark, Germany, the Netherlands, Spain, Sweden and the United Kingdom.

Slashdot story.

Fines as a Security System

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2023/02/fines-as-a-security-system.html

Tile has an interesting security solution to make its tracking tags harder to use for stalking:

The Anti-Theft Mode feature will make the devices invisible to Scan and Secure, the company’s in-app feature that lets you know if any nearby Tiles are following you. But to activate the new Anti-Theft Mode, the Tile owner will have to verify their real identity with a government-issued ID, submit a biometric scan that helps root out fake IDs, agree to let Tile share their information with law enforcement and agree to be subject to a $1 million penalty if convicted in a court of law of using Tile for criminal activity. So although it technically makes it easier for stalkers to use Tiles silently, it makes the penalty of doing so high enough to (at least in theory) deter them from trying.

Interesting theory. But it won’t work against attackers who don’t have any money.

Hulls believes the approach is superior to Apple’s solution with AirTag, which emits a sound and notifies iPhone users that one of the trackers is following them.

My complaint about the technical solutions is that they only work for users of the system. Tile security requires an “in-app feature.” Apple’s AirTag “notifies iPhone users.” What we need is a common standard that is implemented on all smartphones, so that people who don’t use the trackers can be alerted if they are being surveilled by one of them.

Bulk Surveillance of Money Transfers

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2023/01/bulk-surveillance-of-money-transfers.html

Just another obscure warrantless surveillance program.

US law enforcement can access details of money transfers without a warrant through an obscure surveillance program the Arizona attorney general’s office created in 2014. A database stored at a nonprofit, the Transaction Record Analysis Center (TRAC), provides full names and amounts for larger transfers (above $500) sent between the US, Mexico and 22 other regions through services like Western Union, MoneyGram and Viamericas. The program covers data for numerous Caribbean and Latin American countries in addition to Canada, China, France, Malaysia, Spain, Thailand, Ukraine and the US Virgin Islands. Some domestic transfers also enter the data set.

[…]

You need to be a member of law enforcement with an active government email account to use the database, which is available through a publicly visible web portal. Leber told The Journal that there haven’t been any known breaches or instances of law enforcement misuse. However, Wyden noted that the surveillance program included more states and countries than previously mentioned in briefings. There have also been subpoenas for bulk money transfer data from Homeland Security Investigations (which withdrew its request after Wyden’s inquiry), the DEA and the FBI.

How is it that Arizona can be in charge of this?

Wall Street Journal podcast—with transcript—on the program. I think the original reporting was from last March, but I missed it back then.

Identifying People Using Cell Phone Location Data

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2023/01/identifying-people-using-cell-phone-location-data.html

The two people who shut down four Washington power stations in December were arrested. This is the interesting part:

Investigators identified Greenwood and Crahan almost immediately after the attacks took place by using cell phone data that allegedly showed both men in the vicinity of all four substations, according to court documents.

Nowadays, it seems like an obvious thing to do—although the search is probably unconstitutional. But way back in 2012, the Canadian CSEC—that’s their NSA—did some top-secret work on this kind of thing. The document is part of the Snowden archive, and I wrote about it:

The second application suggested is to identify a particular person whom you know visited a particular geographical area on a series of dates/times. The example in the presentation is a kidnapper. He is based in a rural area, so he can’t risk making his ransom calls from that area. Instead, he drives to an urban area to make those calls. He either uses a burner phone or a pay phone, so he can’t be identified that way. But if you assume that he has some sort of smart phone in his pocket that identifies itself over the Internet, you might be able to find him in that dataset. That is, he might be the only ID that appears in that geographical location around the same time as the ransom calls and at no other times.
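
The underlying technique is simple set intersection over bulk location records: keep only the device IDs that appear near every event, and discard the rest. Here is a minimal sketch; the field names and data shapes are assumptions for illustration, not any agency’s real schema.

```python
# Minimal sketch of the intersection technique described above: given bulk
# location records (device_id, area, timestamp), find the devices present in
# the target area around every event of interest.
from datetime import timedelta

def candidate_devices(records, area, event_times, window=timedelta(hours=1)):
    """Return device IDs seen in `area` within `window` of every event time."""
    per_event = []
    for t in event_times:
        present = {
            r["device_id"]
            for r in records
            if r["area"] == area and abs(r["time"] - t) <= window
        }
        per_event.append(present)
    # A device must appear near *every* event to remain a candidate; most of
    # the population drops out after two or three events.
    return set.intersection(*per_event) if per_event else set()
```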

There’s a whole lot of surveillance you can do if you can follow everyone, everywhere, all the time. I don’t even think turning your cell phone off would help in this instance. How many people in the Washington area turned their phones off during exactly the times of the Washington power station attacks? Probably a small enough number to investigate them all.

Arresting IT Administrators

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2022/12/arresting-it-administrators.html

This is one way of ensuring that IT keeps up with patches:

Albanian prosecutors on Wednesday asked for the house arrest of five public employees they blame for not protecting the country from a cyberattack by alleged Iranian hackers.

Prosecutors said the five IT officials of the public administration department had failed to check the security of the system and update it with the most recent antivirus software.

The next step would be to arrest managers at software companies for not releasing patches fast enough. And maybe programmers for writing buggy code. I don’t know where this line of thinking ends.

Facebook Fined $276M under GDPR

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2022/11/facebook-fined-276m-under-gdpr.html

Facebook—Meta—was just fined $276 million (USD) for a data leak that included full names, birth dates, phone numbers, and location.

Meta’s total fine by the Data Protection Commission is over $700 million. Total GDPR fines are over €2 billion (EUR) since 2018.

Hacking Automobile Keyless Entry Systems

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2022/10/hacking-automobile-keyless-entry-systems.html

Suspected members of a European car-theft ring have been arrested:

The criminals targeted vehicles with keyless entry and start systems, exploiting the technology to get into the car and drive away.

As a result of a coordinated action carried out on 10 October in the three countries involved, 31 suspects were arrested. A total of 22 locations were searched, and over EUR 1 098 500 in criminal assets seized.

The criminals targeted keyless vehicles from two French car manufacturers. A fraudulent tool—marketed as an automotive diagnostic solution—was used to replace the original software of the vehicles, allowing the doors to be opened and the ignition to be started without the actual key fob.

Among those arrested are the software developers, the tool’s resellers, and the car thieves who used it to steal vehicles.

The article doesn’t say how the hacking tool got installed into cars. Were there crooked auto mechanics, dealers, or something else?

Using Foreign Nationals to Bypass US Surveillance Restrictions

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2022/01/using-foreign-nationals-to-bypass-us-surveillance-restrictions.html

Remember when the US and Australian police surreptitiously owned and operated the encrypted cell phone app ANOM? They arrested 800 people in 2021 based on that operation.

New documents received by Motherboard show that over 100 of those phones were shipped to users in the US, far more than previously believed.

What’s most interesting to me about this new information is how the US used the Australians to get around domestic spying laws:

For legal reasons, the FBI did not monitor outgoing messages from Anom devices determined to be inside the U.S. Instead, the Australian Federal Police (AFP) monitored them on behalf of the FBI, according to previously published court records. In those court records unsealed shortly before the announcement of the Anom operation, FBI Special Agent Nicholas Cheviron wrote that the FBI received Anom user data three times a week, which contained the messages of all of the users of Anom with some exceptions, including “the messages of approximately 15 Anom users in the U.S. sent to any other Anom device.”

[…]

Stewart Baker, partner at Steptoe & Johnson LLP, and Bryce Klehm, associate editor of Lawfare, previously wrote that “The ‘threat to life’ standard echoes the provision of U.S. law that allows communications providers to share user data with law enforcement without legal process under 18 U.S.C. § 2702. Whether the AFP was relying on this provision of U.S. law or a more general moral imperative to take action to prevent imminent threats is not clear.” That section of law discusses the voluntary disclosure of customer communications or records.

When asked about the practice of Australian law enforcement monitoring devices inside the U.S. on behalf of the FBI, Senator Ron Wyden told Motherboard in a statement “Multiple intelligence community officials have confirmed to me, in writing, that intelligence agencies cannot ask foreign partners to conduct surveillance that the U.S. would be legally prohibited from doing itself. The FBI should follow this same standard. Allegations that the FBI outsourced warrantless surveillance of Americans to a foreign government raise troubling questions about the Justice Department’s oversight of these practices.”

I and others have long suspected that the NSA uses foreign nationals to get around restrictions that prevent it from spying on Americans. It is interesting to see the FBI using the same trick.

Stolen Bitcoins Returned

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2021/12/stolen-bitcoins-returned.html

The US has returned $154 million in bitcoins stolen by a Sony employee.

However, on December 1, following an investigation in collaboration with Japanese law enforcement authorities, the FBI seized the 3879.16242937 BTC in Ishii’s wallet after obtaining the private key, which made it possible to transfer all the bitcoins to the FBI’s bitcoin wallet.

Disrupting Ransomware by Disrupting Bitcoin

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2021/07/disrupting-ransomware-by-disrupting-bitcoin.html

Ransomware isn’t new; the idea dates back to 1986 with the “Brain” computer virus. Now, it’s become the criminal business model of the internet for two reasons. The first is the realization that no one values data more than its original owner, and it makes more sense to ransom it back to them — sometimes with the added extortion of threatening to make it public — than it does to sell it to anyone else. The second is a safe way of collecting ransoms: bitcoin.

This is where the suggestion to ban cryptocurrencies as a way to “solve” ransomware comes from. Lee Reiners, executive director of the Global Financial Markets Center at Duke Law, proposed this in a recent Wall Street Journal op-ed. Journalist Jacob Silverman made the same proposal in a New Republic essay. Without this payment channel, they write, the major ransomware epidemic is likely to vanish, since the only payment alternatives are suitcases full of cash or the banking system, both of which have severe limitations for criminal enterprises.

It’s the same problem kidnappers have had for centuries. The riskiest part of the operation is collecting the ransom. That’s when the criminal exposes themselves, by telling the payer where to leave the money. Or gives out their banking details. This is how law enforcement tracks kidnappers down and arrests them. The rise of an anonymous, global, distributed money-transfer system outside of any national control is what makes computer ransomware possible.

This problem is made worse by the nature of the criminals. They operate out of countries that don’t have the resources to prosecute cybercriminals, like Nigeria; or protect cybercriminals that only attack outside their borders, like Russia; or use the proceeds as a revenue stream, like North Korea. So even when a particular group is identified, it is often impossible to prosecute. Which leaves the only tools left a combination of successfully blocking attacks (another hard problem) and eliminating the payment channels that the criminals need to turn their attacks into profit.

In this light, banning cryptocurrencies like bitcoin is an obvious solution. But while the solution is conceptually simple, it’s also impossible because — despite its overwhelming problems — there are so many legitimate interests using cryptocurrencies, albeit largely for speculation and not for legal payments.

We suggest an easier alternative: merely disrupt the cryptocurrency markets. Making them harder to use will have the effect of making them less useful as a ransomware payment vehicle, and not just because victims will have more difficulty figuring out how to pay. The reason requires understanding how criminals collect their profits.

Paying a ransom starts with a victim turning a large sum of money into bitcoin and then transferring it to a criminal-controlled “account.” Bitcoin is, in itself, useless to the criminal. You can’t actually buy much with bitcoin. It’s more like casino chips, only usable in a single establishment for a single purpose. (Yes, there are companies that “accept” bitcoin, but that is mostly a PR stunt.) A criminal needs to convert the bitcoin into some national currency that he can actually save, spend, invest, or whatever.

This is where it gets interesting. Conceptually, bitcoin combines numbered Swiss bank accounts with public transactions and balances. Anyone can create as many anonymous accounts as they want, but every transaction is posted publicly for the entire world to see. This creates some important challenges for these criminals.

First, the criminal needs to take efforts to conceal the bitcoin. In the old days, criminals used “mixing services”: third parties that would accept bitcoin into one account and then return it (minus a fee) from an unconnected set of accounts. Modern bitcoin tracing tools make this money laundering trick ineffective. Instead, the modern criminal does something called “chain swaps.”

In a chain swap, the criminal transfers the bitcoin to a shady offshore cryptocurrency exchange. These exchanges are notoriously weak about enforcing money laundering laws and — for the most part — don’t have access to the banking system. Once on this alternate exchange, the criminal sells his bitcoin and buys some other cryptocurrency like Ethereum, Dogecoin, Tether, Monero, or one of dozens of others. They then transfer it to another shady offshore exchange and transfer it back into bitcoin. Voila—they now have “clean” bitcoin.

Second, the criminal needs to convert that bitcoin into spendable money. They take their newly cleaned bitcoin and transfer it to yet another exchange, one connected to the banking system. Or perhaps they hire someone else to do this step. These exchanges conduct greater oversight of their customers, but the criminal can use a network of bogus accounts, recruit a bunch of users to act as mules, or simply bribe an employee at the exchange to evade whatever laws apply there. The end result of this activity is to turn the bitcoin into dollars, euros, or some other easily usable currency.

Both of these steps — the chain swapping and currency conversion — require a large amount of normal activity to keep from standing out. That is, they will be easy for law enforcement to identify unless they are hiding among lots of regular, noncriminal transactions. If speculators stopped buying and selling cryptocurrencies and the market shrunk drastically, these criminal activities would no longer be easy to conceal: there’s simply too much money involved.

This is why disruption will work. It doesn’t require an outright ban to stop these criminals from using bitcoin — just enough sand in the gears in the cryptocurrency space to reduce its size and scope.

How do we do this?

The first mechanism observes that the criminal’s flows have a unique pattern. The overall cryptocurrency space is “zero sum”: Every dollar made was provided by someone else. And the primary legal use of cryptocurrencies involves speculation: people effectively betting on a currency’s future value. So the background speculators are mostly balanced: One bitcoin in results in one bitcoin out. There are exceptions involving offshore exchanges and speculation among different cryptocurrencies, but they’re marginal, and only involve turning one bitcoin into a little more (if a speculator is lucky) or a little less (if unlucky).

Criminals and their victims act differently. Victims are net buyers, turning millions of dollars into bitcoin and never going the other way. Criminals are net sellers, only turning bitcoin into currency. The only other net sellers are the cryptocurrency miners, and they are easy to identify.

Any banked exchange that cares about enforcing money laundering laws must consider all significant net sellers of cryptocurrencies as potential criminals and report them to both in-country and US financial authorities. Any exchange that doesn’t should have its banking forcefully cut.
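
As a sketch of what that screening could look like, here is a minimal net-seller heuristic: sum each account’s buys and sells and flag accounts that are overwhelmingly net sellers. The thresholds and data shapes are invented for illustration; a real compliance program would use far richer signals.

```python
# Minimal, illustrative net-seller screen for an exchange's compliance team.
from collections import defaultdict

def flag_net_sellers(trades, min_volume=100_000, sell_ratio=0.9):
    """trades: iterable of (account_id, side, usd_amount), with side in {'buy', 'sell'}."""
    buys, sells = defaultdict(float), defaultdict(float)
    for account, side, usd in trades:
        (sells if side == "sell" else buys)[account] += usd
    flagged = []
    for account in set(buys) | set(sells):
        total = buys[account] + sells[account]
        if total >= min_volume and sells[account] / total >= sell_ratio:
            flagged.append(account)  # candidate for a suspicious-activity report
    return flagged

# Example: one account only cashes out, one account only buys in.
print(flag_net_sellers([("a1", "sell", 250_000), ("a2", "buy", 300_000)]))  # ['a1']
```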

The US Treasury can ensure these exchanges are cut out of the banking system. By designating a rogue but banked exchange, the Treasury says that it is illegal not only to do business with the exchange but for US banks to do business with the exchange’s bank. As a consequence, the rogue exchange would quickly find its banking options eliminated.

A second mechanism involves the IRS. In 2019, it started demanding information from cryptocurrency exchanges and added a check box to the 1040 form that requires disclosure from those who both buy and sell cryptocurrencies. And while this is intended to target tax evasion, it has the side consequence of disrupting those offshore exchanges criminals rely on to launder their bitcoin. Speculation on cryptocurrency is far less attractive since the speculators have to pay taxes, but most exchanges don’t help out by filing the 1099-Bs that would make it easy to calculate the taxes owed.

A third mechanism involves targeting the cryptocurrency Tether. While most cryptocurrencies have values that fluctuate with demand, Tether is a “stablecoin” that is supposedly backed one-to-one with dollars. Of course, it probably isn’t, as its claim to be the seventh largest holder of commercial paper (short-term loans to major businesses) is blatantly untrue. Instead, they appear part of a cycle where new Tether is issued, used to buy cryptocurrencies, and the resulting cryptocurrencies now “back” Tether and drive up the price.

This behavior is clearly that of a “wildcat bank,” an 1800s fraudulent banking style that has long been illegal. Tether also bears a striking similarity to Liberty Reserve, an online currency that the Department of Justice successfully prosecuted for money laundering in 2013. Shutting down Tether would have the side effect of eliminating the value proposition for the exchanges that support chain swapping, since these exchanges need a “stable” value for the speculators to trade against.

There are further possibilities. One involves treating the cryptocurrency miners, those who validate all transactions and add them to the public record, as money transmitters — and subject to the regulations around that business. Another option involves requiring cryptocurrency exchanges to actually deliver the cryptocurrencies into customer-controlled wallets.

Effectively, all cryptocurrency exchanges avoid transferring cryptocurrencies between customers. Instead, they simply record entries in a central database. This makes sense because actual “on chain” transactions can be particularly expensive for cryptocurrencies like bitcoin or Ethereum. If all speculators needed to actually receive their bitcoins, it would make clear that its value proposition as a currency simply doesn’t exist, as the already strained system would grind to a halt.

And, of course, law enforcement can already target criminals’ bitcoin directly. An example of this just occurred, when US law enforcement was able to seize 85% of the $4 million ransom Colonial Pipeline paid to the criminal organization DarkSide. That by the time the seizure occurred the bitcoin had lost more than 30% of its value is just one more reminder of how unworkable bitcoin is as a “store of value.”

There is no single silver bullet to disrupt either cryptocurrencies or ransomware. But enough little disruptions, a “death of a thousand cuts” through new and existing regulation, should make bitcoin no longer usable for ransomware. And if there’s no safe way for a criminal to collect the ransom, their business model becomes no longer viable.

This essay was written with Nicholas Weaver, and previously appeared on Slate.com.

Deliberately Playing Copyrighted Music to Avoid Being Live-Streamed

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2021/02/deliberately-playing-copyrighted-music-to-avoid-being-live-streamed.html

Vice is reporting on a new police hack: playing copyrighted music when being filmed by citizens, trying to provoke social media sites into taking the videos down and maybe even banning the filmers:

In a separate part of the video, which Devermont says was filmed later that same afternoon, Devermont approaches [BHPD Sgt. Billy] Fair outside. The interaction plays out almost exactly like it did in the department — when Devermont starts asking questions, Fair turns on the music.

Devermont backs away, and asks him to stop playing music. Fair says “I can’t hear you” — again, despite holding a phone that is blasting tunes.

Later, Fair starts berating Devermont’s livestreaming account, saying “I read the comments [on your account], they talk about how fake you are.” He then holds out his phone, which is still on full blast, and walks toward Devermont, saying “Listen to the music”.

In a statement emailed to VICE News, Beverly Hills PD said that “the playing of music while accepting a complaint or answering questions is not a procedure that has been recommended by Beverly Hills Police command staff,” and that the videos of Fair were “currently under review.”

However, this is not the first time that a Beverly Hills police officer has done this, nor is Fair the only one.

In an archived clip from a livestream shared privately to VICE Media that Devermont has not publicly reposted but he says was taken weeks ago, another officer can be seen quickly swiping through his phone as Devermont approaches. By the time Devermont is close enough to speak to him, the officer’s phone is already blasting “In My Life” by the Beatles — a group whose rightsholders have notoriously sued Apple numerous times. If you want to get someone in trouble for copyright infringement, the Beatles are quite possibly your best bet.

As Devermont asks about the music, the officer points the phone at him, asking, “Do you like it?”

Clever, really, and an illustration of the problem with context-free copyright enforcement.