All posts by Bruce Schneier

ISIS Encryption Opsec

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2016/03/isis_encryption.html

Tidbits from the New York Times:

The final phase of Mr. Hame’s training took place at an Internet cafe in Raqqa, where an Islamic State computer specialist handed him a USB key. It contained CCleaner, a program used to erase a user’s online history on a given computer, as well as TrueCrypt, an encryption program that was widely available at the time and that experts say has not yet been cracked.

[…]

More than a year and a half earlier, the would-be Cannes bomber, Ibrahim Boudina, had tried to erase the previous three days of his search history, according to details in his court record, but the police were still able to recover it. They found that Mr. Boudina had been researching how to connect to the Internet via a secure tunnel and how to change his I.P. address.

Though he may have been aware of the risk of discovery, perhaps he was not worried enough.

Mr. Boudina had been sloppy enough to keep using his Facebook account, and his voluminous chat history allowed French officials to determine his allegiance to the Islamic State. Wiretaps of his friends and relatives, later detailed in French court records obtained by The Times and confirmed by security officials, further outlined his plot, which officials believe was going to target the annual carnival on the French Riviera.

Mr. Hame, in contrast, was given strict instructions on how to communicate. After he used TrueCrypt, he was to upload the encrypted message folder onto a Turkish commercial data storage site, from where it would be downloaded by his handler in Syria. He was told not to send it by email, most likely to avoid generating the metadata that records details like the point of origin and destination, even if the content of the missive is illegible. Mr. Hame described the website as “basically a dead inbox.”

The ISIS technician told Mr. Hame one more thing: As soon as he made it back to Europe, he needed to buy a second USB key, and transfer the encryption program to it. USB keys are encoded with serial numbers, so the process was not unlike a robber switching getaway cars.

“He told me to copy what was on the key and then throw it away,” Mr. Hame explained. “That’s what I did when I reached Prague.”

Mr. Abaaoud was also fixated on cellphone security. He jotted down the number of a Turkish phone that he said would be left in a building in Syria, but close enough to the border to catch the Turkish cell network, according to Mr. Hame’s account. Mr. Abaaoud apparently figured investigators would be more likely to track calls from Europe to Syrian phone numbers, and might overlook calls to a Turkish one.

Next to the number, Mr. Abaaoud scribbled “Dad.”

This seems like exactly the sort of opsec I would set up for an insurgent group.
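The metadata point in the quoted passage is worth making concrete: even when a message body is encrypted, email headers such as sender, recipient, and subject remain readable, which is presumably why the handler preferred a shared dead-drop site over email. A minimal sketch using Python's standard library (the addresses and ciphertext below are hypothetical stand-ins, not anything from the case):

```python
from email.message import EmailMessage

# Compose a message whose body is already encrypted ciphertext.
# The addresses and payload are hypothetical stand-ins.
msg = EmailMessage()
msg["From"] = "sender@example.org"
msg["To"] = "handler@example.net"
msg["Subject"] = "report"
msg.set_content("U2FsdGVkX19hYmNkZWZnaA==")  # opaque encrypted blob

# Even though the body is illegible, the routing metadata is not:
for field in ("From", "To", "Subject"):
    print(f"{field}: {msg[field]}")
```

A shared dead drop sidesteps exactly this: both parties log in to the same third-party site, so no message ever travels across the network with sender and recipient attached to it.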

EDITED TO ADD: Mistakes in the article. For example:

And now I’ve read one of the original French documents and confirmed my suspicion that the NYTimes article got details wrong.

The original French uses the word “boîte”, which matches the TrueCrypt term “container”. The original French didn’t use the words “fichier” (file), “dossier” (folder), or “répertoire” (directory). This makes so much more sense, and gives us more confidence we know what they were doing.

The original French uses the term “site de partage”, meaning a “sharing site”, which makes more sense than a “storage” site.

The document I saw says the slip of paper had login details for the file sharing site, not a TrueCrypt password. Thus, when the NYTimes article says “TrueCrypt login credentials”, we should correct it to “file sharing site login credentials”, not “TrueCrypt passphrase”.

MOST importantly, according to the subject, the login details didn’t even work. It appears he never actually used this method — he was just taught how to use it. He no longer remembers the site’s name, other than that it might have the word “share” in it. We see this a lot: ISIS talks a great deal about encryption, but the evidence of them actually using it is scant.

Lawful Hacking and Continuing Vulnerabilities

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2016/03/lawful_hacking_.html

The FBI’s legal battle with Apple is over, but the way it ended may not be good news for anyone.

Federal agents had been seeking to compel Apple to break the security of an iPhone 5c that had been used by one of the San Bernardino, Calif., terrorists. Apple had been fighting a court order to cooperate with the FBI, arguing that the authorities’ request was illegal and that creating a tool to break into the phone was itself harmful to the security of every iPhone user worldwide.

Last week, the FBI told the court it had learned of a possible way to break into the phone using a third party’s solution, without Apple’s help. On Monday, the agency dropped the case because the method worked. We don’t know who that third party is. We don’t know what the method is, or which iPhone models it applies to. Now it seems like we never will.

The FBI plans to classify this access method and to use it to break into other phones in other criminal investigations.

Compare this iPhone vulnerability with another, one that was made public on the same day the FBI said it might have found its own way into the San Bernardino phone. Researchers at Johns Hopkins University announced last week that they had found a significant vulnerability in the iMessage protocol. They disclosed the vulnerability to Apple in the fall, and last Monday, Apple released an updated version of its operating system that fixed the vulnerability. (That’s iOS 9.3; you should download and install it right now.) The Hopkins team didn’t publish its findings until Apple’s patch was available, so devices could be updated to protect them from attacks using the researchers’ discovery.

This is how vulnerability research is supposed to work.

Vulnerabilities are found, fixed, then published. The entire security community is able to learn from the research, and — more important — everyone is more secure as a result of the work.

The FBI is doing the exact opposite. It has been given whatever vulnerability it used to get into the San Bernardino phone in secret, and it is keeping it secret. All of our iPhones remain vulnerable to this exploit. This includes the iPhones used by elected officials and federal workers and the phones used by people who protect our nation’s critical infrastructure and carry out other law enforcement duties, including lots of FBI agents.

This is the trade-off we have to consider: Do we prioritize security over surveillance, or do we sacrifice security for surveillance?

The problem with computer vulnerabilities is that they’re general. There’s no such thing as a vulnerability that affects only one device. If it affects one copy of an application, operating system or piece of hardware, then it affects all identical copies. A vulnerability in Windows 10, for example, affects all of us who use Windows 10. And it can be used by anyone who knows it, be they the FBI, a gang of cyber criminals, the intelligence agency of another country — anyone.

And once a vulnerability is found, it can be used for attack — like the FBI is doing — or for defense, as in the Johns Hopkins example.

Over years of battling attackers and intruders, we’ve learned a lot about computer vulnerabilities. They’re plentiful: Vulnerabilities are found and fixed in major systems all the time. They’re regularly discovered independently, by outsiders rather than by the original manufacturers or programmers. And once they’re discovered, word gets out. Today’s top-secret National Security Agency attack techniques become tomorrow’s PhD theses and the next day’s hacker tools.

The attack/defense trade-off is not new to the US government. It even has a process for deciding what to do when a vulnerability is discovered: whether it should be disclosed to improve everyone’s security, or kept secret to be used for offense. The White House claims that it prioritizes defense, and that general vulnerabilities in widely used computer systems are patched.

Whatever method the FBI used to get into the San Bernardino shooter’s iPhone is one such vulnerability. The FBI did the right thing by using an existing vulnerability rather than forcing Apple to create a new one, but it should be disclosed to Apple and patched immediately.

This case has always been more about the PR battle and potential legal precedent than about the particular phone. And while the legal dispute is over, there are other cases involving other encrypted devices in other courts across the country. But while there will always be a few computers — corporate servers, individual laptops, or personal smartphones — that the FBI would like to break into, there are far more such devices that we need to be secure.

One of the most surprising things about this debate is the number of former national security officials who came out on Apple’s side. They understand that we are singularly vulnerable to cyberattack, and that our cyberdefense needs to be as strong as possible.

The FBI’s myopic focus on this one investigation is understandable, but in the long run, it’s damaging to our national security.

This essay previously appeared in the Washington Post, under a far too clickbaity headline.

EDITED TO ADD: To be fair, the FBI probably doesn’t know what the vulnerability is. And I wonder how easy it would be for Apple to figure it out. Given that the FBI has to exhaust all avenues of access before demanding help from Apple, we can learn which models are vulnerable by watching which lawsuits are abandoned now that the FBI knows about this method.

Matt Blaze makes excellent points about how the FBI should disclose the vulnerabilities it uses, in order to improve computer security. That was part of a New York Times “Room for Debate” on hackers helping the FBI.

Susan Landau’s excellent Congressional testimony on the topic.

Mass Surveillance Silences Minority Opinions

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2016/03/mass_surveillan_1.html

Research paper: Elizabeth Stoycheff, “Under Surveillance: Examining Facebook’s Spiral of Silence Effects in the Wake of NSA Internet Monitoring“:

Abstract: Since Edward Snowden exposed the National Security Agency’s use of controversial online surveillance programs in 2013, there has been widespread speculation about the potentially deleterious effects of online government monitoring. This study explores how perceptions and justification of surveillance practices may create a chilling effect on democratic discourse by stifling the expression of minority political views. Using a spiral of silence theoretical framework, knowing one is subject to surveillance and accepting such surveillance as necessary act as moderating agents in the relationship between one’s perceived climate of opinion and willingness to voice opinions online. Theoretical and normative implications are discussed.

No surprise, and something I wrote about in Data and Goliath:

Across the US, states are on the verge of reversing decades-old laws about homosexual relationships and marijuana use. If the old laws could have been perfectly enforced through surveillance, society would never have reached the point where the majority of citizens thought those things were okay. There has to be a period where they are still illegal yet increasingly tolerated, so that people can look around and say, “You know, that wasn’t so bad.” Yes, the process takes decades, but it’s a process that can’t happen without lawbreaking. Frank Zappa said something similar in 1971: “Without deviation from the norm, progress is not possible.”

The perfect enforcement that comes with ubiquitous government surveillance chills this process. We need imperfect security — systems that free people to try new things, much the way off-the-record brainstorming sessions loosen inhibitions and foster creativity. If we don’t have that, we can’t slowly move from a thing’s being illegal and not okay, to illegal and not sure, to illegal and probably okay, and finally to legal.

This is an important point. Freedoms we now take for granted were often at one time viewed as threatening or even criminal by the past power structure. Those changes might never have happened if the authorities had been able to achieve social control through surveillance.

This is one of the main reasons all of us should care about the emerging architecture of surveillance, even if we are not personally chilled by its existence. We suffer the effects because people around us will be less likely to proclaim new political or social ideas, or act out of the ordinary. If J. Edgar Hoover’s surveillance of Martin Luther King Jr. had been successful in silencing him, it would have affected far more people than King and his family.

Slashdot thread.

EDITED TO ADD (4/6): News article.

Power on the Internet

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2016/03/power_on_the_in.html

Interesting paper: Yochai Benkler, “Degrees of Freedom, Dimensions of Power,” Daedalus, Winter 2016:

Abstract: The original Internet design combined technical, organizational, and cultural characteristics that decentralized power along diverse dimensions. Decentralized institutional, technical, and market power maximized freedom to operate and innovate at the expense of control. Market developments have introduced new points of control. Mobile and cloud computing, the Internet of Things, fiber transition, big data, surveillance, and behavioral marketing introduce new control points and dimensions of power into the Internet as a social-cultural-economic platform. Unlike in the Internet’s first generation, companies and governments are well aware of the significance of design choices, and are jostling to acquire power over, and appropriate value from, networked activity. If we are to preserve the democratic and creative promise of the Internet, we must continuously diagnose control points as they emerge and devise mechanisms of recreating diversity of constraint and degrees of freedom in the network to work around these forms of reconcentrated power.