Tag Archives: courts

Risks of Evidentiary Software

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2021/06/risks-of-evidentiary-software.html

Over at Lawfare, Susan Landau has an excellent essay on the risks posed by software used to collect evidence (a Breathalyzer is probably the most obvious example).

Bugs and vulnerabilities can lead to inaccurate evidence, but the proprietary nature of software makes it hard for defendants to examine it.

The software engineers proposed a three-part test. First, the court should have access to the “Known Error Log,” which should be part of any professionally developed software project. Next, the court should consider whether the evidence being presented could be materially affected by a software error. Ladkin and his co-authors noted that a chain of emails back and forth is unlikely to have such an error, but the time that a software tool logs when an application was used could easily be incorrect. Finally, the reliability experts recommended checking whether the code adheres to an industry standard used in a non-computerized version of the task (e.g., bookkeepers always record every transaction, and thus so should bookkeeping software).

[…]

Inanimate objects have long served as evidence in courts of law: the door handle with a fingerprint, the glove found at a murder scene, the Breathalyzer result that shows a blood alcohol level three times the legal limit. But the last of those examples is substantively different from the other two. Data from a Breathalyzer is not the physical entity itself, but rather a software calculation of the level of alcohol in the breath of a potentially drunk driver. As long as the breath sample has been preserved, one can always go back and retest it on a different device.

What happens if the software makes an error and there is no sample to check, or if the software itself produces the evidence? At the time we wrote our article on the use of software as evidence, there was no overriding requirement that law enforcement provide a defendant with the code so that they might examine it themselves.

[…]

Given the high rate of bugs in complex software systems, my colleagues and I concluded that when computer programs produce the evidence, courts cannot assume that the evidentiary software is reliable. Instead the prosecution must make the code available for an “adversarial audit” by the defendant’s experts. And to avoid problems in which the government doesn’t have the code, government procurement contracts must include delivery of source code — code that is more-or-less readable by people — for every version of the code or device.

The Legal Risks of Security Research

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2020/10/the-legal-risks-of-security-research.html

Sunoo Park and Kendra Albert have published “A Researcher’s Guide to Some Legal Risks of Security Research.”

From a summary:

Such risk extends beyond anti-hacking laws, implicating copyright law and anti-circumvention provisions (DMCA §1201), electronic privacy law (ECPA), and cryptography export controls, as well as broader legal areas such as contract and trade secret law.

Our Guide gives the most comprehensive presentation to date of this landscape of legal risks, with an eye to both legal and technical nuance. Aimed at researchers, the public, and technology lawyers alike, it aims both to provide pragmatic guidance to those navigating today’s uncertain legal landscape, and to provoke public debate towards future reform.

Comprehensive, and well worth reading.

Here’s a Twitter thread by Kendra.

Reverse-Engineering the Redactions in the Ghislaine Maxwell Deposition

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2020/10/reverse-engineering-the-redactions-in-the-ghislaine-maxwell-deposition.html

Slate magazine was able to cleverly read the Ghislaine Maxwell deposition and reverse-engineer many of the redacted names.

We’ve long known that redacting is hard in the modern age, but most of the failures to date have been a result of not realizing that covering digital text with a black bar doesn’t always remove the text from the underlying digital file. As far as I know, this reverse-engineering technique is new.
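The classic failure mode is easy to demonstrate: if a “redaction” is just an opaque rectangle drawn over the page, ordinary text extraction recovers the words underneath, because the text objects are still in the file. Here is a minimal sketch using the pdfminer.six library; the filename is hypothetical.

```python
# Minimal sketch: check whether a "redacted" PDF still carries the text
# under its black bars. Requires pdfminer.six (pip install pdfminer.six).
# The filename is hypothetical.
from pdfminer.high_level import extract_text

# extract_text() reads the PDF's text layer and ignores drawn shapes,
# so text merely covered by an opaque rectangle comes out verbatim.
text = extract_text("deposition.pdf")
print(text)
```

Slate’s approach, by contrast, did not depend on leftover text in the file, which is what makes it new.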

EDITED TO ADD: A similar technique was used in 1991 to recover the Dead Sea Scrolls.

Adversarial Machine Learning and the CFAA

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2020/07/adversarial_mac_1.html

I just co-authored a paper on the legal risks of doing machine learning research, given the current state of the Computer Fraud and Abuse Act:

Abstract: Adversarial Machine Learning is booming, with ML researchers increasingly targeting commercial ML systems such as those used by Facebook, Tesla, Microsoft, IBM, and Google to demonstrate vulnerabilities. In this paper, we ask, “What are the potential legal risks to adversarial ML researchers when they attack ML systems?” Studying or testing the security of any operational system potentially runs afoul of the Computer Fraud and Abuse Act (CFAA), the primary United States federal statute that creates liability for hacking. We claim that adversarial ML research is likely no different. Our analysis shows that because there is a split in how the CFAA is interpreted, aspects of adversarial ML attacks, such as model inversion, membership inference, model stealing, reprogramming the ML system, and poisoning attacks, may be sanctioned in some jurisdictions and not penalized in others. We conclude with an analysis predicting how the US Supreme Court may resolve some present inconsistencies in the CFAA’s application in Van Buren v. United States, an appeal expected to be decided in 2021. We argue that the court is likely to adopt a narrow construction of the CFAA, and that this will actually lead to better adversarial ML security outcomes in the long term.
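To make one of those attack classes concrete, here is a minimal sketch of a membership-inference baseline using a simple confidence threshold. This is a common illustrative technique, not code from our paper: an overfit model tends to answer more confidently on examples it was trained on, so merely querying a deployed model's prediction API can leak whether a record was in its training set.

```python
# Minimal sketch of a confidence-threshold membership-inference baseline.
# Illustrative only; requires scikit-learn and numpy.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True)
X_in, X_out, y_in, y_out = train_test_split(X, y, test_size=0.5, random_state=0)

# The "target" model is trained only on X_in; the attacker just queries it.
target = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_in, y_in)

def confidence(model, X):
    # Highest predicted class probability per sample.
    return model.predict_proba(X).max(axis=1)

# Overfit models are typically more confident on training members than on
# non-members, so a simple threshold separates the two groups somewhat.
threshold = 0.9
guess_in = confidence(target, X_in) > threshold    # guessed "member"
guess_out = confidence(target, X_out) > threshold  # false positives

print(f"flagged as members: {guess_in.mean():.2f} of training set, "
      f"{guess_out.mean():.2f} of held-out set")
```

Note that nothing in this sketch requires access to the model internals; the legal question is whether this kind of black-box querying exceeds authorized access under the CFAA.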

Medium post on the paper. News article, which uses our graphic without attribution.

How Did Facebook Beat a Federal Wiretap Demand?

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2020/04/how_did_faceboo.html

This is interesting:

Facebook Inc. in 2018 beat back federal prosecutors seeking to wiretap its encrypted Messenger app. Now the American Civil Liberties Union is seeking to find out how.

The entire proceeding was confidential, with only the result leaking to the press. Lawyers for the ACLU and the Washington Post on Tuesday asked a San Francisco-based federal court of appeals to unseal the judge’s decision, arguing the public has a right to know how the law is being applied, particularly in the area of privacy.

[…]

The Facebook case stems from a federal investigation of members of the violent MS-13 criminal gang. Prosecutors tried to hold Facebook in contempt after the company refused to help investigators wiretap its Messenger app, but the judge ruled against them. If the decision is unsealed, other tech companies will likely try to use its reasoning to ward off similar government requests in the future.

Here’s the 2018 story. Slashdot thread.

Securing the Internet of Things through Class-Action Lawsuits

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2020/02/securing_the_in.html

This law journal article discusses the role of class-action litigation to secure the Internet of Things.

Basically, the article postulates that (1) market realities will produce insecure IoT devices, and (2) political failures will leave that industry unregulated. Result: insecure IoT. It proposes proactive class action litigation against manufacturers of unsafe and unsecured IoT devices before those devices cause unnecessary injury or death. It’s a lot to read, but it’s an interesting take on how to secure this otherwise disastrously insecure world.

And it was inspired by my book, Click Here to Kill Everybody.

The Story of Tiversa

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2019/12/the_story_of_ti.html

The New Yorker has published the long and interesting story of the cybersecurity firm Tiversa.

Watching “60 Minutes,” Boback saw a remarkable new business angle. Here was a multibillion-dollar industry with a near-existential problem and no clear solution. He did not know it then, but, as he turned the opportunity over in his mind, he was setting in motion a sequence of events that would earn him millions of dollars, friendships with business élites, prime-time media attention, and respect in Congress. It would also place him at the center of one of the strangest stories in the brief history of cybersecurity; he would be mired in lawsuits, countersuits, and counter-countersuits, which would gather into a vortex of litigation so ominous that one friend compared it to the Bermuda Triangle. He would be accused of fraud, of extortion, and of manipulating the federal government into harming companies that did not do business with him. Congress would investigate him. So would the F.B.I.

AT&T Employees Took Bribes to Unlock Smartphones

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2019/08/att_employees_t.html

This wasn’t a small operation:

A Pakistani man bribed AT&T call-center employees to install malware and unauthorized hardware as part of a scheme to fraudulently unlock cell phones, according to the US Department of Justice. Muhammad Fahd, 34, was extradited from Hong Kong to the US on Friday and is being detained pending trial.

An indictment alleges that “Fahd recruited and paid AT&T insiders to use their computer credentials and access to disable AT&T’s proprietary locking software that prevented ineligible phones from being removed from AT&T’s network,” a DOJ announcement yesterday said. “The scheme resulted in millions of phones being removed from AT&T service and/or payment plans, costing the company millions of dollars. Fahd allegedly paid the insiders hundreds of thousands of dollars — paying one co-conspirator $428,500 over the five-year scheme.”

In all, AT&T insiders received more than $1 million in bribes from Fahd and his co-conspirators, who fraudulently unlocked more than 2 million cell phones, the government alleged. Three former AT&T customer service reps from a call center in Bothell, Washington, already pleaded guilty and agreed to pay the money back to AT&T.

How Privacy Laws Hurt Defendants

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2019/08/how_privacy_law.html

Rebecca Wexler has an interesting op-ed about an inadvertent harm that privacy laws can cause: while law enforcement can often access third-party data to aid in prosecution, the accused don’t have the same level of access to aid in their defense:

The proposed privacy laws would make this situation worse. Lawmakers may not have set out to make the criminal process even more unfair, but the unjust result is not surprising. When lawmakers propose privacy bills to protect sensitive information, law enforcement agencies lobby for exceptions so they can continue to access the information. Few lobby for the accused to have similar rights. Just as the privacy interests of poor, minority and heavily policed communities are often ignored in the lawmaking process, so too are the interests of criminal defendants, many from those same communities.

In criminal cases, both the prosecution and the accused have a right to subpoena evidence so that juries can hear both sides of the case. The new privacy bills need to ensure that law enforcement and defense investigators operate under the same rules when they subpoena digital data. If lawmakers believe otherwise, they should have to explain and justify that view.

For more detail, see her paper.

The Importance of Protecting Cybersecurity Whistleblowers

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2019/06/the_importance_3.html

Interesting essay arguing that we need better legislation to protect cybersecurity whistleblowers.

Congress should act to protect cybersecurity whistleblowers because information security has never been so important, or so challenging. In the wake of a barrage of shocking revelations about data breaches and companies’ mishandling of customer data, a bipartisan consensus has emerged in support of legislation to give consumers more control over their personal information, require companies to disclose how they collect and use consumer data, and impose penalties for data breaches and misuse of consumer data. The Federal Trade Commission (“FTC”) has been held out as the best agency to implement this new regulation. But for any such legislation to be effective, it must protect the courageous whistleblowers who risk their careers to expose data breaches and unauthorized use of consumers’ private data.

Whistleblowers strengthen regulatory regimes, and cybersecurity regulation would be no exception. Republican and Democratic leaders from the executive and legislative branches have extolled the virtues of whistleblowers. High-profile cases abound. Recently, Christopher Wylie exposed Cambridge Analytica’s misuse of Facebook user data to manipulate voters, including its apparent theft of data from 50 million Facebook users as part of a psychological profiling campaign. Though additional research is needed, the existing empirical data reinforces the consensus that whistleblowers help prevent, detect, and remedy misconduct. Therefore it is reasonable to conclude that protecting and incentivizing whistleblowers could help the government address the many complex challenges facing our nation’s information systems.

I Was Cited in a Court Decision

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2019/03/i_was_cited_in_.html

An article I co-wrote — my first law journal article — was cited by the Massachusetts Supreme Judicial Court — the state supreme court — in a case on compelled decryption.

The court cited it twice. Here’s the first citation, in footnote 1:

We understand the word “password” to be synonymous with other terms that cell phone users may be familiar with, such as Personal Identification Number or “passcode.” Each term refers to the personalized combination of letters or digits that, when manually entered by the user, “unlocks” a cell phone. For simplicity, we use “password” throughout. See generally, Kerr & Schneier, Encryption Workarounds, 106 Geo. L.J. 989, 990, 994, 998 (2018).

And here’s the second, in footnote 5:

We recognize that ordinary cell phone users are likely unfamiliar with the complexities of encryption technology. For instance, although entering a password “unlocks” a cell phone, the password itself is not the “encryption key” that decrypts the cell phone’s contents. See Kerr & Schneier, supra at 995. Rather, “entering the [password] decrypts the [encryption] key, enabling the key to be processed and unlocking the phone. This two-stage process is invisible to the casual user.” Id. Because the technical details of encryption technology do not play a role in our analysis, they are not worth belaboring. Accordingly, we treat the entry of a password as effectively decrypting the contents of a cell phone. For a more detailed discussion of encryption technology, see generally Kerr & Schneier, supra.
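The two-stage process the court describes is a standard key-wrapping design: the password derives a key-encryption key, which unwraps the actual data-encryption key. Here is a minimal sketch in Python using the cryptography package; the password, parameters, and data are all illustrative, and this is not how any particular phone implements it.

```python
# Minimal sketch of the two-stage scheme the footnote describes: the
# password never encrypts the data directly; it derives a key-encryption
# key (KEK) that unwraps the real data-encryption key (DEK).
# Requires the "cryptography" package; all values are illustrative.
import base64
import os
from cryptography.fernet import Fernet
from cryptography.hazmat.primitives.hashes import SHA256
from cryptography.hazmat.primitives.kdf.pbkdf2 import PBKDF2HMAC

def kek_from_password(password: bytes, salt: bytes) -> Fernet:
    # Stage 1: stretch the password into a key-encryption key.
    kdf = PBKDF2HMAC(algorithm=SHA256(), length=32, salt=salt, iterations=600_000)
    return Fernet(base64.urlsafe_b64encode(kdf.derive(password)))

salt = os.urandom(16)
dek = Fernet.generate_key()                              # data-encryption key
wrapped_dek = kek_from_password(b"1234", salt).encrypt(dek)
ciphertext = Fernet(dek).encrypt(b"phone contents")

# "Unlocking": the entered password decrypts the wrapped key, and the
# recovered key decrypts the contents -- invisible to the casual user.
recovered_dek = kek_from_password(b"1234", salt).decrypt(wrapped_dek)
assert Fernet(recovered_dek).decrypt(ciphertext) == b"phone contents"
```

One reason for the indirection: the password can be changed by rewrapping the key, without re-encrypting all of the stored contents.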

Reverse Location Search Warrants

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2019/02/reverse_locatio.html

The police are increasingly getting search warrants for information about all cell phones in a certain location at a certain time:

Police departments across the country have been knocking at Google’s door for at least the last two years with warrants to tap into the company’s extensive stores of cellphone location data. Known as “reverse location search warrants,” these legal mandates allow law enforcement to sweep up the coordinates and movements of every cellphone in a broad area. The police can then check to see if any of the phones came close to the crime scene. In doing so, however, the police can end up not only fishing for a suspect, but also gathering the location data of potentially hundreds (or thousands) of innocent people. There have only been anecdotal reports of reverse location searches, so it’s unclear how widespread the practice is, but privacy advocates worry that Google’s data will eventually allow more and more departments to conduct indiscriminate searches.

Of course, it’s not just Google that can provide this information.
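Mechanically, a reverse location search boils down to a spatio-temporal filter over a large dump of location records: keep the devices that were within some radius of the scene during the time window, and sweep up everything else as a side effect. A minimal sketch of that filter follows; the data layout and coordinates are hypothetical.

```python
# Minimal sketch of the spatio-temporal filter a reverse location search
# implies: from a dump of (device, lat, lon, timestamp) records, keep the
# devices that came within some radius of the scene during the window.
# Data layout and values are hypothetical; standard library only.
from dataclasses import dataclass
from math import asin, cos, radians, sin, sqrt

@dataclass
class Ping:
    device_id: str
    lat: float
    lon: float
    ts: int  # Unix seconds

def haversine_m(lat1, lon1, lat2, lon2):
    # Great-circle distance in meters.
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 6_371_000 * 2 * asin(sqrt(a))

def devices_near(pings, scene_lat, scene_lon, radius_m, t_start, t_end):
    return {
        p.device_id
        for p in pings
        if t_start <= p.ts <= t_end
        and haversine_m(p.lat, p.lon, scene_lat, scene_lon) <= radius_m
    }

pings = [Ping("A", 47.6205, -122.3493, 1_550_000_100),
         Ping("B", 47.6101, -122.2015, 1_550_000_200)]
print(devices_near(pings, 47.6204, -122.3491, 150, 1_550_000_000, 1_550_001_000))
# {'A'}; every other device swept up by the warrant is a bystander.
```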

I am also reminded of a Canadian surveillance program disclosed by Snowden.

I spend a lot of time talking about this sort of thing in Data and Goliath. Once you have everyone under surveillance all the time, many things are possible.

El Chapo’s Encryption Defeated by Turning His IT Consultant

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2019/01/el_chapos_encry.html

Impressive police work:

In a daring move that placed his life in danger, the I.T. consultant eventually gave the F.B.I. his system’s secret encryption keys in 2011 after he had moved the network’s servers from Canada to the Netherlands during what he told the cartel’s leaders was a routine upgrade.

A Dutch article says that it’s a BlackBerry system.

El Chapo had his IT person install “…spyware called FlexiSPY on the ‘special phones’ he had given to his wife, Emma Coronel Aispuro, as well as to two of his lovers, including one who was a former Mexican lawmaker.” That same software was used by the FBI when his IT person turned over the keys. Yet again we learn the lesson that a backdoor can be used against you.

And it doesn’t have to be with the IT person’s permission. A good intelligence agency can use the IT person’s authorizations without his knowledge or consent. This is why the NSA hunts sysadmins.

Slashdot thread. Hacker News thread. Boing Boing post.

SpiderOak’s Warrant Canary Died

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2018/08/spideroaks_warr.html

BoingBoing has the story.

I have never quite trusted the idea of a warrant canary. But here it seems to have worked. (Presumably, if SpiderOak wanted to replace the warrant canary with a transparency report, they would have written something explaining their decision. To have it simply disappear is what we would expect if SpiderOak were being forced to comply with a US government request for personal data.)

EDITED TO ADD (8/9): SpiderOak has posted an explanation claiming that the warrant canary did not die — it just changed.

That’s obviously false, because it did die. And a change is the functional equivalent — that’s how they work. So either they have received a National Security Letter and now have to pretend they did not, or they completely misunderstood what a warrant canary is and how it works. No one knows.
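That argument has a direct operational consequence for anyone watching a canary: treat any change, not just removal, as the trigger. A minimal watcher sketch, with a hypothetical URL and state file:

```python
# Minimal sketch of a canary watcher that treats *any* change as the
# canary dying, per the argument above. The URL and state file are
# hypothetical. Requires the "requests" package.
import hashlib
import pathlib
import requests

CANARY_URL = "https://example.com/warrant-canary"  # hypothetical
STATE = pathlib.Path("canary.sha256")              # last known-good hash

resp = requests.get(CANARY_URL, timeout=10)
if resp.status_code != 200:
    print("canary page gone: treat as dead")
else:
    digest = hashlib.sha256(resp.text.encode()).hexdigest()
    if not STATE.exists():
        STATE.write_text(digest)  # first run: record the baseline
        print("baseline recorded")
    elif STATE.read_text() != digest:
        print("canary changed: functionally equivalent to dying")
    else:
        print("canary unchanged")
```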

I have never fully trusted warrant canaries — this EFF post explains why — and this is an illustration.

Suing South Carolina Because Its Election Machines Are Insecure

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2018/07/suing_south_car.html

A group called Protect Democracy is suing South Carolina because its insecure voting machines are effectively denying people the right to vote.

Note: I am an advisor to Protect Democracy on its work related to election cybersecurity, and submitted a declaration in litigation it filed, challenging President Trump’s now-defunct “election integrity” commission.

E-Mail Leaves an Evidence Trail

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2018/02/e-mail_leaves_a.html

If you’re going to commit an illegal act, it’s best not to discuss it in e-mail. It’s also best to Google tech instructions rather than asking someone else to do it:

One new detail from the indictment, however, points to just how unsophisticated Manafort seems to have been. Here’s the relevant passage from the indictment. I’ve bolded the most important bits:

Manafort and Gates made numerous false and fraudulent representations to secure the loans. For example, Manafort provided the bank with doctored [profit and loss statements] for [Davis Manafort Inc.] for both 2015 and 2016, overstating its income by millions of dollars. The doctored 2015 DMI P&L submitted to Lender D was the same false statement previously submitted to Lender C, which overstated DMI’s income by more than $4 million. The doctored 2016 DMI P&L was inflated by Manafort by more than $3.5 million. To create the false 2016 P&L, on or about October 21, 2016, Manafort emailed Gates a .pdf version of the real 2016 DMI P&L, which showed a loss of more than $600,000. Gates converted that .pdf into a “Word” document so that it could be edited, which Gates sent back to Manafort. Manafort altered that “Word” document by adding more than $3.5 million in income. He then sent this falsified P&L to Gates and asked that the “Word” document be converted back to a .pdf, which Gates did and returned to Manafort. Manafort then sent the falsified 2016 DMI P&L .pdf to Lender D.

So here’s the essence of what went wrong for Manafort and Gates, according to Mueller’s investigation: Manafort allegedly wanted to falsify his company’s income, but he couldn’t figure out how to edit the PDF. He therefore had Gates turn it into a Microsoft Word document for him, which led the two to bounce the documents back-and-forth over email. As attorney and blogger Susan Simpson notes on Twitter, Manafort’s inability to complete a basic task on his own seems to have effectively “created an incriminating paper trail.”

If there’s a lesson here, it’s that the Internet constantly generates data about what people are doing on it, and that data is all potential evidence. The FBI is 100% wrong that they’re going dark; it’s really the golden age of surveillance, and the FBI’s panic is really just its own lack of technical sophistication.
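As an illustration of how much of that evidence a single message carries, here is a minimal sketch that dumps an email's routing and identity metadata using Python's standard library; the saved-message filename is hypothetical.

```python
# Minimal sketch of the metadata trail a single email carries. Parses a
# saved message (the .eml filename is hypothetical) with the standard
# library and prints the headers investigators lean on.
from email import policy
from email.parser import BytesParser

with open("message.eml", "rb") as f:
    msg = BytesParser(policy=policy.default).parse(f)

for header in ("From", "To", "Date", "Message-ID"):
    print(f"{header}: {msg[header]}")

# Each relay adds its own Received header, so the full route (and its
# timestamps) is baked into the message itself.
for hop in msg.get_all("Received", []):
    print("Received:", hop)
```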

Blame privacy activists for the Memo??

Post Syndicated from Robert Graham original http://blog.erratasec.com/2018/02/blame-privacy-activists-for-memo.html

Former FBI agent Asha Rangappa (@AshaRangappa_) has a smart post debunking the Nunes Memo, then takes it all back again with a New York Times op-ed blaming us privacy activists. She presents an obviously false narrative that the FBI and FISA courts are above suspicion.

I know from first-hand experience that the FBI is corrupt. In 2007, they threatened me, trying to get me to cancel a talk that revealed security vulnerabilities in a large corporation’s product. Such abuses occur because there is no transparency or oversight. FBI agents write down our conversations in their little notebooks instead of recording them, so that they can control the narrative of what happened, presenting their version of the conversation (leaving out the threats). In this day and age of recording devices, this is indefensible.

She writes, “I know firsthand that it’s difficult to get a FISA warrant.” Yes, getting a FISA warrant was difficult for her, an underling. The process is different when a leader tries to do the same thing.

I know this first hand, having worked casually with intelligence agencies as an outsider. I saw two processes in place: one for the flunkies, and one for those above the system. The flunkies constantly complained that there was too much process oppressing them and preventing them from getting their jobs done. The leaders understood the system and how to sidestep those processes.

That’s not to say the Nunes Memo has merit, but it does show that privacy advocates have a point in wanting more oversight and transparency in such surveillance of American citizens.

Blaming us privacy advocates isn’t the way to go. It’s not going to succeed in tarnishing us, but will push us more into Trump’s camp, causing us to reiterate that we believe the FBI and FISA are corrupt.