All posts by Bruce Schneier

Economist Detained for Doing Math on an Airplane

Post Syndicated from Bruce Schneier original

An economics professor was detained when he was spotted doing math on an airplane:

On Thursday evening, a 40-year-old man – with dark, curly hair, olive skin and an exotic foreign accent – boarded a plane. It was a regional jet making a short, uneventful hop from Philadelphia to nearby Syracuse.

Or so dozens of unsuspecting passengers thought.

The curly-haired man tried to keep to himself, intently if inscrutably scribbling on a notepad he’d brought aboard. His seatmate, a blond-haired, 30-something woman sporting flip-flops and a red tote bag, looked him over. He was wearing navy Diesel jeans and a red Lacoste sweater — a look he would later describe as “simple elegance” — but something about him didn’t seem right to her.

She decided to try out some small talk.

Is Syracuse home? she asked.

No, he replied curtly.

He similarly deflected further questions. He appeared laser-focused – perhaps too laser-focused – on the task at hand, those strange scribblings.

Rebuffed, the woman began reading her book. Or pretending to read, anyway. Shortly after boarding had finished, she flagged down a flight attendant and handed that crew member a note of her own.

This story ended better than some. Economics professor Guido Menzio (yes, he’s Italian) was taken off the plane, questioned, cleared, and allowed to board with the rest of the passengers two hours later.

This is a result of our stupid “see something, say something” culture. As I repeatedly say: “If you ask amateurs to act as front-line security personnel, you shouldn’t be surprised when you get amateur security.”

On the other hand, “Algebra, of course, does have Arabic origins plus math is used to make bombs.” Plus, this fine joke from 2003:

At Heathrow Airport today, an individual, later discovered to be a school teacher, was arrested trying to board a flight while in possession of a compass, a protractor, and a graphical calculator.

Authorities believe she is a member of the notorious al-Gebra movement. She is being charged with carrying weapons of math instruction.

AP story. Slashdot thread.

Seriously, though, I worry that this kind of thing will happen to me. I’m older, and I’m not very Semitic looking, but I am curt to my seatmates and intently focused on what I am doing — which sometimes involves looking at web pages about, and writing about, security and terrorism. I’m sure I’m vaguely suspicious.

EDITED TO ADD: Last month a student was removed from an airplane for speaking Arabic.

NIST Starts Planning for Post-Quantum Cryptography

Last year, the NSA announced its plans for transitioning to cryptography that is resistant to a quantum computer. Now, it’s NIST’s turn. Its just-released report talks about the importance of algorithm agility and quantum resistance. Sometime soon, it’s going to have a competition for quantum-resistant public-key algorithms:

Creating those newer, safer algorithms is the longer-term goal, Moody says. A key part of this effort will be an open collaboration with the public, which will be invited to devise and vet cryptographic methods that — to the best of experts’ knowledge — will be resistant to quantum attack. NIST plans to launch this collaboration formally sometime in the next few months, but in general, Moody says it will resemble past competitions such as the one for developing the SHA-3 hash algorithm, used in part for authenticating digital messages.

“It will be a long process involving public vetting of quantum-resistant algorithms,” Moody said. “And we’re not expecting to have just one winner. There are several systems in use that could be broken by a quantum computer — public-key encryption and digital signatures, to take two examples — and we will need different solutions for each of those systems.”

The report rightly states that we’re okay in the symmetric cryptography world; the key lengths are long enough.
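The arithmetic behind that claim is simple. Grover’s algorithm gives a quantum attacker at most a quadratic speedup on brute-force key search, so an n-bit symmetric key retains roughly n/2 bits of security, and doubling the key length restores the margin. A back-of-the-envelope sketch:

```python
def quantum_security_bits(key_bits: int) -> int:
    """Effective symmetric-key strength against Grover's algorithm.

    Grover search needs ~2^(n/2) quantum operations to recover an
    n-bit key, versus ~2^n operations for a classical brute force.
    """
    return key_bits // 2

for k in (128, 192, 256):
    print(f"AES-{k}: ~{quantum_security_bits(k)}-bit quantum security")
# AES-256 keeps ~128 bits of security against a quantum attacker,
# which is why today's longer symmetric key lengths are adequate.
```

Contrast this with RSA and elliptic-curve public-key systems, which Shor’s algorithm breaks outright rather than merely weakening; that is why the competition targets public-key algorithms.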

This is an excellent development. NIST has done a fine job with its previous cryptographic standards, giving us a couple of good, strong, well-reviewed, and patent-free algorithms. I have no doubt this process will be equally good. (If NIST is keeping a list, aside from post-quantum public-key algorithms, I would like to see competitions for a larger-block-size block cipher and a super-fast stream cipher as well.)

Two news articles.

White House Report on Big Data Discrimination

The White House has released a report on big-data discrimination. From the blog post:

Using case studies on credit lending, employment, higher education, and criminal justice, the report we are releasing today illustrates how big data techniques can be used to detect bias and prevent discrimination. It also demonstrates the risks involved, particularly how technologies can deliberately or inadvertently perpetuate, exacerbate, or mask discrimination.

The purpose of the report is not to offer remedies to the issues it raises, but rather to identify these issues and prompt conversation, research — and action — among technologists, academics, policy makers, and citizens alike.

The report includes a number of recommendations for advancing work in this nascent field of data and ethics. These include investing in research, broadening and diversifying technical leadership, cross-training, and expanded literacy on data discrimination, bolstering accountability, and creating standards for use within both the government and the private sector. It also calls on computer and data science programs and professionals to promote fairness and opportunity as part of an overall commitment to the responsible and ethical use of data.

Own a Pair of Clipper Chips

The AT&T TSD was an early 1990s telephone encryption device. It was digital. Voice quality was okay. And it was the device that contained the infamous Clipper Chip, the U.S. government’s first attempt to put a back door into everyone’s communications.

Marcus Ranum is selling a pair on eBay. He has the description wrong, though. The TSD-3600-E is the model with the Clipper Chip in it. The TSD-3600-F is the version with the insecure exportable algorithm.

Credential Stealing as an Attack Vector

Traditional computer security concerns itself with vulnerabilities. We employ antivirus software to detect malware that exploits vulnerabilities. We have automatic patching systems to fix vulnerabilities. We debate whether the FBI should be permitted to introduce vulnerabilities in our software so it can get access to systems with a warrant. This is all important, but what’s missing is a recognition that software vulnerabilities aren’t the most common attack vector: credential stealing is.

The most common way hackers of all stripes, from criminals to hacktivists to foreign governments, break into networks is by stealing and using a valid credential. Basically, they steal passwords, set up man-in-the-middle attacks to piggy-back on legitimate logins, or engage in cleverer attacks to masquerade as authorized users. It’s a more effective avenue of attack in many ways: it doesn’t involve finding a zero-day or unpatched vulnerability, there’s less chance of discovery, and it gives the attacker more flexibility in technique.

Rob Joyce, the head of the NSA’s Tailored Access Operations (TAO) group — basically the country’s chief hacker — gave a rare public talk at a conference in January. In essence, he said that zero-day vulnerabilities are overrated, and credential stealing is how he gets into networks: “A lot of people think that nation states are running their operations on zero days, but it’s not that common. For big corporate networks, persistence and focus will get you in without a zero day; there are so many more vectors that are easier, less risky, and more productive.”

This is true for us, and it’s also true for those attacking us. It’s how the Chinese hackers breached the Office of Personnel Management in 2015. The 2014 criminal attack against Target Corporation started when hackers stole the login credentials of the company’s HVAC vendor. Iranian hackers stole US login credentials. And the hacktivist who broke into the cyber-arms manufacturer Hacking Team and published pretty much every proprietary document from that company used stolen credentials.

As Joyce said, stealing a valid credential and using it to access a network is easier, less risky, and ultimately more productive than using an existing vulnerability, even a zero-day.

Our notions of defense need to adapt to this change. First, organizations need to beef up their authentication systems. There are lots of tricks that help here: two-factor authentication, one-time passwords, physical tokens, smartphone-based authentication, and so on. None of these is foolproof, but they all make credential stealing harder.
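One of those tricks can be made concrete. Time-based one-time passwords (RFC 6238) derive a short code from a shared secret and the current time, so a stolen static password alone is no longer enough to log in. A minimal sketch using only the Python standard library:

```python
import hashlib
import hmac
import struct
import time

def totp(secret: bytes, at=None, step=30, digits=6) -> str:
    """RFC 6238 time-based one-time password (HMAC-SHA1 variant).

    The code changes every `step` seconds, so a credential thief
    who captures one code cannot replay it later.
    """
    counter = int(time.time() if at is None else at) // step
    mac = hmac.new(secret, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = mac[-1] & 0x0F  # dynamic truncation, per RFC 4226
    code = (struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF) % 10**digits
    return str(code).zfill(digits)

# RFC test vector: the standard 20-byte ASCII secret at t=59s falls
# in time-step 1, which RFC 4226 maps to the 6-digit code 287082.
print(totp(b"12345678901234567890", at=59))  # -> 287082
```

None of this makes credential theft impossible; a live man-in-the-middle can still relay a fresh code. But it shrinks the attacker’s window from the lifetime of a password to tens of seconds.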

Second, organizations need to invest in breach detection and — most importantly — incident response. Credential-stealing attacks tend to bypass traditional IT security software. But attacks are complex and multi-step. Being able to detect them in process, and to respond quickly and effectively enough to kick attackers out and restore security, is essential to resilient network security today.

Vulnerabilities are still critical. Fixing vulnerabilities is still vital for security, and introducing new vulnerabilities into existing systems is still a disaster. But strong authentication and robust incident response are also critical. And an organization that skimps on these will find itself unable to keep its networks secure.

This essay originally appeared on Xconomy.

Vulnerabilities in Samsung’s SmartThings

Interesting research: Earlence Fernandes, Jaeyeon Jung, and Atul Prakash, “Security Analysis of Emerging Smart Home Applications“:

Abstract: Recently, several competing smart home programming frameworks that support third party app development have emerged. These frameworks provide tangible benefits to users, but can also expose users to significant security risks. This paper presents the first in-depth empirical security analysis of one such emerging smart home programming platform. We analyzed Samsung-owned SmartThings, which has the largest number of apps among currently available smart home platforms, and supports a broad range of devices including motion sensors, fire alarms, and door locks. SmartThings hosts the application runtime on a proprietary, closed-source cloud backend, making scrutiny challenging. We overcame the challenge with a static source code analysis of 499 SmartThings apps (called SmartApps) and 132 device handlers, and carefully crafted test cases that revealed many undocumented features of the platform. Our key findings are twofold. First, although SmartThings implements a privilege separation model, we discovered two intrinsic design flaws that lead to significant overprivilege in SmartApps. Our analysis reveals that over 55% of SmartApps in the store are overprivileged due to the capabilities being too coarse-grained. Moreover, once installed, a SmartApp is granted full access to a device even if it specifies needing only limited access to the device. Second, the SmartThings event subsystem, which devices use to communicate asynchronously with SmartApps via events, does not sufficiently protect events that carry sensitive information such as lock codes. We exploited framework design flaws to construct four proof-of-concept attacks that: (1) secretly planted door lock codes; (2) stole existing door lock codes; (3) disabled vacation mode of the home; and (4) induced a fake fire alarm. We conclude the paper with security lessons for the design of emerging smart home programming frameworks.

Research website. News article — copy and paste into a text editor to avoid the ad blocker blocker.

EDITED TO ADD: Another article.

I’m Writing a Book on Security

I’m writing a book on security in the highly connected Internet-of-Things world. Tentative title:

Click Here to Kill Everybody
Peril and Promise in a Hyper-Connected World

There are two underlying metaphors in the book. The first is what I have called the World-Sized Web, which is that combination of mobile, cloud, persistence, personalization, agents, cyber-physical systems, and the Internet of Things. The second is what I’m calling the “war of all against all,” which is the recognition that security policy is a series of “wars” between various interests, and that any policy decision in any one of the wars affects all the others. I am not wedded to either metaphor at this point.

This is the current table of contents, with three of the chapters broken out into sub-chapters:

  • Introduction
  • The World-Sized Web
  • The Coming Threats
    • Privacy Threats
    • Availability and Integrity Threats
    • Threats from Software-Controlled Systems
    • Threats from Interconnected Systems
    • Threats from Automatic Algorithms
    • Threats from Autonomous Systems
    • Other Threats of New Technologies
    • Catastrophic Risk
    • Cyberwar
  • The Current Wars
    • The Copyright Wars
    • The US/EU Data Privacy Wars
    • The War for Control of the Internet
    • The War of Secrecy
  • The Coming Wars
    • The War for Your Data
    • The War Against Your Computers
    • The War for Your Embedded Computers
    • The Militarization of the Internet
    • The Powerful vs. the Powerless
    • The Rights of the Individual vs. the Rights of Society
  • The State of Security
  • Near-Term Solutions
  • Security for an Empowered World
  • Conclusion

That will change, of course. If the past is any guide, everything will change.

Questions: Am I missing any threats? Am I missing any wars?

Current schedule is for me to finish writing this book by the end of September, and have it published at the end of April 2017. I hope to have pre-publication copies available for sale at the RSA Conference next year. As with my previous book, Norton is the publisher.

So if you notice me blogging less this summer, this is why.

Documenting the Chilling Effects of NSA Surveillance

In Data and Goliath, I talk about the self-censorship that comes along with broad surveillance. This interesting research documents this phenomenon in Wikipedia: “Chilling Effects: Online Surveillance and Wikipedia Use,” by Jon Penney, Berkeley Technology Law Journal, 2016.

Abstract: This article discusses the results of the first empirical study providing evidence of regulatory “chilling effects” of Wikipedia users associated with online government surveillance. The study explores how traffic to Wikipedia articles on topics that raise privacy concerns for Wikipedia users decreased after the widespread publicity about NSA/PRISM surveillance revelations in June 2013. Using an interdisciplinary research design, the study tests the hypothesis, based on chilling effects theory, that traffic to privacy-sensitive Wikipedia articles reduced after the mass surveillance revelations. The Article finds not only a statistically significant immediate decline in traffic for these Wikipedia articles after June 2013, but also a change in the overall secular trend in the view count traffic, suggesting not only immediate but also long-term chilling effects resulting from the NSA/PRISM online surveillance revelations. These, and other results from the case study, not only offer compelling evidence for chilling effects associated with online surveillance, but also offer important insights about how we should understand such chilling effects and their scope, including how they interact with other dramatic or significant events (like war and conflict) and their broader implications for privacy, U.S. constitutional litigation, and the health of democratic society. This study is among the first to demonstrate — using either Wikipedia data or web traffic data more generally — how government surveillance and similar actions impact online activities, including access to information and knowledge online.

Two news stories.

Amazon Unlimited Fraud

Amazon Unlimited is an all-you-can-read service. You pay one price and can read anything that’s in the program. Amazon pays authors out of a fixed pool, on the basis of how many people read their books. More interestingly, it pays by the page. An author makes more money if someone reads his book through to page 200 than if they give up at page 50, and even more if they make it through to the end. This makes sense; it doesn’t pay authors for books people download but don’t read, or read the first few pages of and then decide not to read the rest.

This payment structure requires surveillance, and the Kindle does watch people as they read. The problem is that the Kindle doesn’t know if the reader actually reads the book — only what page they’re on. So Kindle Unlimited records the furthest page the reader synched, and pays based on that.

This opens up the possibility for fraud. If an author can create a thousand-page book and trick the reader into reading page 1,000, he gets paid the maximum. Scam authors are doing this through a variety of tricks.

What’s interesting is that while Amazon is definitely concerned about this kind of fraud, it doesn’t affect its bottom line. The fixed payment pool doesn’t change; just who gets how much of it does.
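The fixed-pool arithmetic is worth spelling out. In a toy model (not Amazon’s actual formula), each author’s share is proportional to the page counts their readers reached, so a scammer who inflates page counts dilutes every honest author’s share while Amazon’s total outlay stays fixed:

```python
def payout(pool: float, furthest_page: dict) -> dict:
    """Split a fixed royalty pool in proportion to the furthest
    pages each author's readers synced (illustrative toy model)."""
    total = sum(furthest_page.values())
    return {author: pool * pages / total
            for author, pages in furthest_page.items()}

honest = payout(1000.0, {"alice": 200, "bob": 50})
# alice earns 800.0 of the 1000.0 pool.

# A scam book tricks a single reader into syncing page 1,000:
scammed = payout(1000.0, {"alice": 200, "bob": 50, "scam": 1000})
# The pool still pays out exactly 1000.0 in total, but alice's
# share collapses from 800.0 to 160.0.
```

This is why the fraud is a zero-sum fight among authors rather than a direct loss for Amazon: the scammer’s gain comes entirely out of other authors’ royalties.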

EDITED TO ADD: John Scalzi comments.

People Trust Robots, Even When They Don’t Inspire Trust

Interesting research:

In the study, sponsored in part by the Air Force Office of Scientific Research (AFOSR), the researchers recruited a group of 42 volunteers, most of them college students, and asked them to follow a brightly colored robot that had the words “Emergency Guide Robot” on its side. The robot led the study subjects to a conference room, where they were asked to complete a survey about robots and read an unrelated magazine article. The subjects were not told the true nature of the research project.

In some cases, the robot — which was controlled by a hidden researcher — led the volunteers into the wrong room and traveled around in a circle twice before entering the conference room. For several test subjects, the robot stopped moving, and an experimenter told the subjects that the robot had broken down. Once the subjects were in the conference room with the door closed, the hallway through which the participants had entered the building was filled with artificial smoke, which set off a smoke alarm.

When the test subjects opened the conference room door, they saw the smoke – and the robot, which was then brightly-lit with red LEDs and white “arms” that served as pointers. The robot directed the subjects to an exit in the back of the building instead of toward the doorway – marked with exit signs – that had been used to enter the building.

“We expected that if the robot had proven itself untrustworthy in guiding them to the conference room, that people wouldn’t follow it during the simulated emergency,” said Paul Robinette, a GTRI research engineer who conducted the study as part of his doctoral dissertation. “Instead, all of the volunteers followed the robot’s instructions, no matter how well it had performed previously. We absolutely didn’t expect this.”

The researchers surmise that in the scenario they studied, the robot may have become an “authority figure” that the test subjects were more likely to trust in the time pressure of an emergency. In simulation-based research done without a realistic emergency scenario, test subjects did not trust a robot that had previously made mistakes.

Our notions of trust depend on all sorts of cues that have nothing to do with actual trustworthiness. I would be interested in seeing where the robot fits in in the continuum of authority figures. Is it trusted more or less than a man in a hazmat suit? A woman in a business suit? An obviously panicky student? How do different looking robots fare?

News article. Research paper.

BlackBerry’s Global Encryption Key

Last week, there was a big news story about the BlackBerry encryption key. The news was that all BlackBerry devices share a global encryption key, and that the Canadian RCMP has a copy of it. Stupid design, certainly, but it’s not news. As the Register points out, this has been repeatedly reported on since 2010.

And note that this only holds for individual users. If your organization uses a BlackBerry Enterprise Server (BES), you have your own unique key.