Tag Archives: psychologyofsecurity

Security and Human Behavior (SHB) 2019

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2019/06/security_and_hu_8.html

Today is the second day of the twelfth Workshop on Security and Human Behavior, which I am hosting at Harvard University.

SHB is a small, annual, invitational workshop of people studying various aspects of the human side of security, organized each year by Alessandro Acquisti, Ross Anderson, and myself. The 50 or so people in the room include psychologists, economists, computer security researchers, sociologists, political scientists, criminologists, neuroscientists, designers, lawyers, philosophers, anthropologists, business school professors, and a smattering of others. It’s not just an interdisciplinary event; most of the people here are individually interdisciplinary.

The goal is to maximize discussion and interaction. We do that by putting everyone on panels, and limiting talks to 7-10 minutes. The rest of the time is left to open discussion. Four hour-and-a-half panels per day over two days equals eight panels; six people per panel means that 48 people get to speak. We also have lunches, dinners, and receptions — all designed so people from different disciplines talk to each other.

I invariably find this to be the most intellectually stimulating two days of my professional year. It influences my thinking in many different, and sometimes surprising, ways.

This year’s program is here. This page lists the participants and includes links to some of their work. As he does every year, Ross Anderson is liveblogging the talks — remotely, because he was denied a visa earlier this year.

Here are my posts on the first, second, third, fourth, fifth, sixth, seventh, eighth, ninth, tenth, and eleventh SHB workshops. Follow those links to find summaries, papers, and occasionally audio recordings of the various workshops. Ross also maintains a good webpage of psychology and security resources.

Programmers Who Don’t Understand Security Are Poor at Security

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2019/03/programmers_who.html

A university study confirmed the obvious: if you pay a random bunch of freelance programmers a small amount of money to write security software, they’re not going to do a very good job at it.

In an experiment that involved 43 programmers hired via the Freelancer.com platform, University of Bonn academics have discovered that developers tend to take the easy way out and write code that stores user passwords in an unsafe manner.

For their study, the German academics asked a group of 260 Java programmers to write a user registration system for a fake social network.

Of the 260 developers, only 43 took up the job, which involved using technologies such as Java, JSF, Hibernate, and PostgreSQL to create the user registration component.

Of the 43, academics paid half of the group with €100, and the other half with €200, to determine if higher pay made a difference in the implementation of password security features.

Further, they divided the developer group a second time, prompting half of the developers to store passwords in a secure manner, and leaving the other half to store passwords in their preferred method — hence forming four quarters of developers paid €100 and prompted to use a secure password storage method (P100), developers paid €200 and prompted to use a secure password storage method (P200), devs paid €100 but not prompted for password security (N100), and those paid €200 but not prompted for password security (N200).
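For reference, the kind of secure storage the “prompted” groups were asked for — salted, iterated hashing rather than plaintext or a bare MD5 digest — can be sketched in a few lines. The study’s stack was Java/JSF/Hibernate/PostgreSQL; this Python sketch just illustrates the principle, and its parameters (iteration count, salt size) are my own illustrative choices, not values from the study.

```python
import hashlib
import hmac
import os

# Minimal sketch of salted, iterated password hashing (PBKDF2-HMAC-SHA256).
# Iteration count and salt size are illustrative, not from the study.

ITERATIONS = 200_000

def hash_password(password: str) -> tuple[bytes, bytes]:
    salt = os.urandom(16)  # fresh random salt per user
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, ITERATIONS)
    return salt, digest

def verify_password(password: str, salt: bytes, digest: bytes) -> bool:
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, ITERATIONS)
    return hmac.compare_digest(candidate, digest)  # constant-time comparison
```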

I don’t know why anyone would expect this group of people to implement a good secure password system. Look at how they were hired. Look at the scope of the project. Look at what they were paid. I’m sure they grabbed the first thing they found on GitHub that did the job.

I’m not very impressed with the study or its conclusions.

Measuring the Rationality of Security Decisions

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2018/08/measuring_the_r.html

Interesting research: “Dancing Pigs or Externalities? Measuring the Rationality of Security Decisions”:

Abstract: Accurately modeling human decision-making in security is critical to thinking about when, why, and how to recommend that users adopt certain secure behaviors. In this work, we conduct behavioral economics experiments to model the rationality of end-user security decision-making in a realistic online experimental system simulating a bank account. We ask participants to make a financially impactful security choice, in the face of transparent risks of account compromise and benefits offered by an optional security behavior (two-factor authentication). We measure the cost and utility of adopting the security behavior via measurements of time spent executing the behavior and estimates of the participant’s wage. We find that more than 50% of our participants made rational (e.g., utility optimal) decisions, and we find that participants are more likely to behave rationally in the face of higher risk. Additionally, we find that users’ decisions can be modeled well as a function of past behavior (anchoring effects), knowledge of costs, and to a lesser extent, users’ awareness of risks and context (R² = 0.61). We also find evidence of endowment effects, as seen in other areas of economic and psychological decision-science literature, in our digital-security setting. Finally, using our data, we show theoretically that a “one-size-fits-all” emphasis on security can lead to market losses, but that adoption by a subset of users with higher risks or lower costs can lead to market gains.
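The abstract’s “utility optimal” criterion is a straightforward expected-value comparison: adopting two-factor authentication is rational when the expected loss it averts exceeds the time it costs, valued at the participant’s wage. A minimal sketch, with all numbers my own illustrative assumptions rather than values from the paper:

```python
# Sketch of the utility-optimality criterion described in the abstract:
# adopt 2FA iff expected loss averted > time cost of performing the behavior.
# All inputs below are illustrative assumptions, not data from the study.

def rational_to_adopt(balance: float, p_compromise: float,
                      seconds_per_login: float, logins: int,
                      hourly_wage: float) -> bool:
    expected_loss_averted = p_compromise * balance
    time_cost = (seconds_per_login * logins / 3600) * hourly_wage
    return expected_loss_averted > time_cost

# Higher risk makes adoption rational sooner, consistent with the finding
# that participants behaved more rationally in the face of higher risk.
rational_to_adopt(100.0, 0.5, 30, 20, 15.0)   # expected loss $50 vs. cost $2.50 -> True
rational_to_adopt(100.0, 0.01, 30, 20, 15.0)  # expected loss $1 vs. cost $2.50 -> False
```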

Conservation of Threat

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2018/06/conservation_of.html

Here’s some interesting research about how we perceive threats. Basically, as the environment becomes safer, we manufacture new threats. From an essay about the research:

To study how concepts change when they become less common, we brought volunteers into our laboratory and gave them a simple task — to look at a series of computer-generated faces and decide which ones seem “threatening.” The faces had been carefully designed by researchers to range from very intimidating to very harmless.

As we showed people fewer and fewer threatening faces over time, we found that they expanded their definition of “threatening” to include a wider range of faces. In other words, when they ran out of threatening faces to find, they started calling faces threatening that they used to call harmless. Rather than being a consistent category, what people considered “threats” depended on how many threats they had seen lately.

This has a lot of implications in security systems where humans have to make judgments about threat and risk: TSA agents, police noticing “suspicious” activities, “see something say something” campaigns, and so on.
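One way to see the mechanism: a judge who always flags the scariest-looking fraction of recent faces will keep “finding” threats no matter how safe the pool gets. This toy simulation is my own illustration of that relative-threshold idea, not the paper’s model; the numbers and the 20% cutoff are arbitrary assumptions.

```python
import random

# Toy illustration (not the paper's model): a judge who labels the most
# threatening-looking 20% of each recent batch of faces as "threatening"
# lowers the bar as the environment gets objectively safer.

random.seed(0)

def relative_judge(faces, fraction=0.2):
    """Return the threat score above which a face gets called threatening."""
    cutoff_index = int(len(faces) * (1 - fraction))
    return sorted(faces)[cutoff_index]

early = [random.uniform(0, 100) for _ in range(1000)]  # full range of faces
late = [random.uniform(0, 60) for _ in range(1000)]    # threatening faces now rare

print(relative_judge(early))  # roughly 80
print(relative_judge(late))   # roughly 48 -- faces once "harmless" now count as "threats"
```

The same drifting baseline is what makes a TSA agent or a “see something, say something” hotline keep producing alerts even as actual threats become rarer.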

The academic paper.

Security and Human Behavior (SHB 2018)

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2018/05/security_and_hu_7.html

I’m at Carnegie Mellon University, at the eleventh Workshop on Security and Human Behavior.

SHB is a small invitational gathering of people studying various aspects of the human side of security, organized each year by Alessandro Acquisti, Ross Anderson, and myself. The 50 or so people in the room include psychologists, economists, computer security researchers, sociologists, political scientists, neuroscientists, designers, lawyers, philosophers, anthropologists, business school professors, and a smattering of others. It’s not just an interdisciplinary event; most of the people here are individually interdisciplinary.

The goal is to maximize discussion and interaction. We do that by putting everyone on panels, and limiting talks to 7-10 minutes. The rest of the time is left to open discussion. Four hour-and-a-half panels per day over two days equals eight panels; six people per panel means that 48 people get to speak. We also have lunches, dinners, and receptions — all designed so people from different disciplines talk to each other.

I invariably find this to be the most intellectually stimulating conference of my year. It influences my thinking in many different, and sometimes surprising, ways.

This year’s program is here. This page lists the participants and includes links to some of their work. As he does every year, Ross Anderson is liveblogging the talks. (Ross also maintains a good webpage of psychology and security resources.)

Here are my posts on the first, second, third, fourth, fifth, sixth, seventh, eighth, ninth, and tenth SHB workshops. Follow those links to find summaries, papers, and occasionally audio recordings of the various workshops.

Next year, I’ll be hosting the event at Harvard.

The Science of Interrogation

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2017/10/the_science_of_.html

Fascinating article about two psychologists who are studying interrogation techniques.

Now, two British researchers are quietly revolutionising the study and practice of interrogation. Earlier this year, in a meeting room at the University of Liverpool, I watched a video of the Diola interview alongside Laurence Alison, the university’s chair of forensic psychology, and Emily Alison, a professional counsellor. My permission to view the tape was negotiated with the counter-terrorist police, who are understandably wary of allowing outsiders access to such material. Details of the interview have been changed to protect the identity of the officers involved, though the quotes are verbatim.

The Alisons, husband and wife, have done something no scholars of interrogation have been able to do before. Working in close cooperation with the police, who allowed them access to more than 1,000 hours of tapes, they have observed and analysed hundreds of real-world interviews with terrorists suspected of serious crimes. No researcher in the world has ever laid hands on such a haul of data before. Based on this research, they have constructed the world’s first empirically grounded and comprehensive model of interrogation tactics.

The Alisons’ findings are changing the way law enforcement and security agencies approach the delicate and vital task of gathering human intelligence. “I get very little, if any, pushback from practitioners when I present the Alisons’ work,” said Kleinman, who now teaches interrogation tactics to military and police officers. “Even those who don’t have a clue about the scientific method, it just resonates with them.” The Alisons have done more than strengthen the hand of advocates of non-coercive interviewing: they have provided an unprecedentedly authoritative account of what works and what does not, rooted in a profound understanding of human relations. That they have been able to do so is testament to a joint preoccupation with police interviews that stretches back more than 20 years.

Research on What Motivates ISIS — and Other — Fighters

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2017/09/research_on_wha.html

Interesting research from Nature Human Behaviour: “The devoted actor’s will to fight and the spiritual dimension of human conflict“:

Abstract: Frontline investigations with fighters against the Islamic State (ISIL or ISIS), combined with multiple online studies, address willingness to fight and die in intergroup conflict. The general focus is on non-utilitarian aspects of human conflict, which combatants themselves deem ‘sacred’ or ‘spiritual’, whether secular or religious. Here we investigate two key components of a theoretical framework we call ‘the devoted actor’ — sacred values and identity fusion with a group — to better understand people’s willingness to make costly sacrifices. We reveal three crucial factors: commitment to non-negotiable sacred values and the groups that the actors are wholly fused with; readiness to forsake kin for those values; and perceived spiritual strength of ingroup versus foes as more important than relative material strength. We directly relate expressed willingness for action to behaviour as a check on claims that decisions in extreme conflicts are driven by cost-benefit calculations, which may help to inform policy decisions for the common defense.

Millennials and Secret Leaking

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2017/06/millennials_and_1.html

I hesitate to blog this, because it’s an example of everything that’s wrong with pop psychology. Malcolm Harris writes about millennials, and has a theory of why millennials leak secrets. My guess is that you could write a similar essay about every named generation, every age group, and so on.

Security and Human Behavior (SHB 2017)

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2017/05/security_and_hu_6.html

I’m at Cambridge University, at the tenth Workshop on Security and Human Behavior.

SHB is a small invitational gathering of people studying various aspects of the human side of security, organized each year by Ross Anderson, Alessandro Acquisti, and myself. The 50 or so people in the room include psychologists, economists, computer security researchers, sociologists, political scientists, neuroscientists, designers, lawyers, philosophers, anthropologists, business school professors, and a smattering of others. It’s not just an interdisciplinary event; most of the people here are individually interdisciplinary.

The goal is maximum interaction and discussion. We do that by putting everyone on panels. There are eight six-person panels over the course of the two days. Everyone gets to talk for ten minutes about their work, and then there’s half an hour of questions and discussion. We also have lunches, dinners, and receptions — all designed so people from different disciplines talk to each other.

It’s the most intellectually stimulating conference of my year, and influences my thinking about security in many different ways.

This year’s schedule is here. This page lists the participants and includes links to some of their work. As he does every year, Ross Anderson is liveblogging the talks.

Here are my posts on the first, second, third, fourth, fifth, sixth, seventh, eighth, and ninth SHB workshops. Follow those links to find summaries, papers, and occasionally audio recordings of the various workshops.

I don’t think any of us imagined that this conference would be around this long.

How the Media Influences Our Fear of Terrorism

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2017/01/how_the_media_i.html

Good article that crunches the data and shows that the press’s coverage of terrorism is disproportional to its comparative risk.

This isn’t new. I’ve written about it before, and wrote about it more generally when I wrote about the psychology of risk, fear, and security. Basically, the issue is the availability heuristic. We tend to infer the probability of something by how easy it is to bring examples of the thing to mind. So if we can think of a lot of tiger attacks in our community, we infer that the risk is high. If we can’t think of many tiger attacks, we infer that the risk is low. But while this is a perfectly reasonable heuristic when living in small family groups in the East African highlands in 100,000 BC, it fails in the face of modern media. The media makes the rare seem more common by spending a lot of time talking about it. It’s not the media’s fault. By definition, news is “something that hardly ever happens.” But when the coverage of terrorist deaths exceeds the coverage of homicides, we have a tendency to mistakenly inflate the risk of the former while discounting the risk of the latter.

Our brains aren’t very good at probability and risk analysis. We tend to exaggerate spectacular, strange and rare events, and downplay ordinary, familiar and common ones. We think rare risks are more common than they are. We fear them more than probability indicates we should.

There is a lot of psychological research that tries to explain this, but one of the key findings is this: People tend to base risk analysis more on stories than on data. Stories engage us at a much more visceral level, especially stories that are vivid, exciting or personally involving.

If a friend tells you about getting mugged in a foreign country, that story is more likely to affect how safe you feel traveling to that country than reading a page of abstract crime statistics will.

Novelty plus dread plus a good story equals overreaction.

It’s not just murders. It’s flying vs. driving: the former is much safer, but also far more spectacular when it goes wrong.

Are We Becoming More Moral Faster Than We’re Becoming More Dangerous?

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2017/01/are_we_becoming.html

In The Better Angels of Our Nature, Steven Pinker convincingly makes the point that by pretty much every measure you can think of, violence has declined on our planet over the long term. More generally, “the world continues to improve in just about every way.” He’s right, but there are two important caveats.

One, he is talking about the long term. The trend lines are uniformly positive across the centuries and mostly positive across the decades, but go up and down year to year. While this is an important development for our species, most of us care about changes year to year — and we can’t make any predictions about whether this year will be better or worse than last year in any individual measurement.

The second caveat is both more subtle and more important. In 2013, I wrote about how technology empowers attackers. By this measure, the world is getting more dangerous:

Because the damage attackers can cause becomes greater as technology becomes more powerful. Guns become more harmful, explosions become bigger, malware becomes more pernicious… and so on. A single attacker, or small group of attackers, can cause more destruction than ever before.

This is exactly why the whole post-9/11 weapons-of-mass-destruction debate was so overwrought: Terrorists are scary, terrorists flying airplanes into buildings are even scarier, and the thought of a terrorist with a nuclear bomb is absolutely terrifying.

Pinker’s trends are based both on increased societal morality and better technology, and both are based on averages: the average person with the average technology. My increased attack capability trend is based on those two trends as well, but on the outliers: the most extreme person with the most extreme technology. Pinker’s trends are noisy, but over the long term they’re strongly linear. Mine seem to be exponential.

When Pinker expresses optimism that the overall trends he identifies will continue into the future, he’s making a bet. He’s betting that his trend lines and my trend lines won’t cross. That is, that our society’s gradual improvement in overall morality will continue to outpace the potentially exponentially increasing ability of the extreme few to destroy everything. I am less optimistic:

But the problem isn’t that these security measures won’t work — even as they shred our freedoms and liberties — it’s that no security is perfect.

Because sooner or later, the technology will exist for a hobbyist to explode a nuclear weapon, print a lethal virus from a bio-printer, or turn our electronic infrastructure into a vehicle for large-scale murder. We’ll have the technology eventually to annihilate ourselves in great numbers, and sometime after, that technology will become cheap enough to be easy.

As it gets easier for one member of a group to destroy the entire group, and the group size gets larger, the odds of someone in the group doing it approach certainty. Our global interconnectedness means that our group size encompasses everyone on the planet, and since government hasn’t kept up, we have to worry about the weakest-controlled member of the weakest-controlled country. Is this a fundamental limitation of technological advancement, one that could end civilization? First our fears grip us so strongly that, thinking about the short term, we willingly embrace a police state in a desperate attempt to keep us safe; then, someone goes off and destroys us anyway?

Clearly we’re not at the point yet where any of these disaster scenarios have come to pass, and Pinker rightly expresses skepticism when he says that historical doomsday scenarios have so far never come to pass. But that’s the thing about exponential curves; it’s hard to predict the future from the past. So either I have discovered a fundamental problem with any intelligent individualistic species and have therefore explained the Fermi Paradox, or there is some other factor in play that will ensure that the two trend lines won’t cross.
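The “odds approach certainty” claim is just compounded probability: if each of N capable individuals independently has some tiny per-period chance p of attempting destruction, the chance that at least one does is 1 − (1 − p)^N, which climbs toward 1 as N grows. A quick sketch, with p and N as purely illustrative values rather than estimates:

```python
# Compounding a tiny independent per-person probability across a growing group:
# the chance of at least one occurrence is 1 - (1 - p)**n.
# The values of p and n below are illustrative, not estimates of anything real.

def chance_of_at_least_one(p: float, n: int) -> float:
    return 1 - (1 - p) ** n

print(chance_of_at_least_one(1e-6, 1_000))       # ~0.001
print(chance_of_at_least_one(1e-6, 1_000_000))   # ~0.63
print(chance_of_at_least_one(1e-6, 10_000_000))  # ~0.99995
```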

Confusing Security Risks with Moral Judgments

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2016/08/confusing_secur.html

Interesting research that shows we exaggerate the risks of something when we find it morally objectionable.

From an article about and interview with the researchers:

To get at this question experimentally, Thomas and her collaborators created a series of vignettes in which a parent left a child unattended for some period of time, and participants indicated the risk of harm to the child during that period. For example, in one vignette, a 10-month-old was left alone for 15 minutes, asleep in the car in a cool, underground parking garage. In another vignette, an 8-year-old was left for an hour at a Starbucks, one block away from her parent’s location.

To experimentally manipulate participants’ moral attitude toward the parent, the experimenters varied the reason the child was left unattended across a set of six experiments with over 1,300 online participants. In some cases, the child was left alone unintentionally (for example, in one case, a mother is hit by a car and knocked unconscious after buckling her child into her car seat, thereby leaving the child unattended in the car seat). In other cases, the child was left unattended so the parent could go to work, do some volunteering, relax or meet a lover.

Not surprisingly, the parent’s reason for leaving a child unattended affected participants’ judgments of whether the parent had done something immoral: Ratings were over 3 on a 10-point scale even when the child was left unattended unintentionally, but they skyrocketed to nearly 8 when the parent left to meet a lover. Ratings for the other cases fell in between.

The more surprising result was that perceptions of risk followed precisely the same pattern. Although the details of the cases were otherwise the same — that is, the age of the child, the duration and location of the unattended period, and so on — participants thought children were in significantly greater danger when the parent left to meet a lover than when the child was left alone unintentionally. The ratings for the other cases, once again, fell in between. In other words, participants’ factual judgments of how much danger the child was in while the parent was away varied according to the extent of their moral outrage concerning the parent’s reason for leaving.

Scott Atran on Why People Become Terrorists

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2016/08/scott_atran_on_.html

Scott Atran has done some really interesting research on why ordinary people become terrorists.

Academics who study warfare and terrorism typically don’t conduct research just kilometers from the front lines of battle. But taking the laboratory to the fight is crucial for figuring out what impels people to make the ultimate sacrifice to, for example, impose Islamic law on others, says Atran, who is affiliated with the National Center for Scientific Research in Paris.

Atran’s war zone research over the last few years, and interviews during the last decade with members of various groups engaged in militant jihad (or holy war in the name of Islamic law), give him a gritty perspective on this issue. He rejects popular assumptions that people frequently join up, fight and die for terrorist groups due to mental problems, poverty, brainwashing or savvy recruitment efforts by jihadist organizations.

Instead, he argues, young people adrift in a globalized world find their own way to ISIS, looking to don a social identity that gives their lives significance. Groups of dissatisfied young adult friends around the world — often with little knowledge of Islam but yearning for lives of profound meaning and glory — typically choose to become volunteers in the Islamic State army in Syria and Iraq, Atran contends. Many of these individuals connect via the internet and social media to form a global community of alienated youth seeking heroic sacrifice, he proposes.

Preliminary experimental evidence suggests that not only global terrorism, but also festering state and ethnic conflicts, revolutions and even human rights movements — think of the U.S. civil rights movement in the 1960s — depend on what Atran refers to as devoted actors. These individuals, he argues, will sacrifice themselves, their families and anyone or anything else when a volatile mix of conditions is in play. First, devoted actors adopt values they regard as sacred and nonnegotiable, to be defended at all costs. Then, when they join a like-minded group of nonkin that feels like a family — a band of brothers — a collective sense of invincibility and special destiny overwhelms feelings of individuality. As members of a tightly bound group that perceives its sacred values under attack, devoted actors will kill and die for each other.

Paper.

EDITED TO ADD (8/13): Related paper, also by Atran.

Situational Awareness and Crime Prevention

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2016/06/situational_awa.html

Ronald V. Clarke argues for more situational crime prevention. Turns out if you make crime harder, it goes down. And this has profound policy implications.

Whatever the benefits for Criminology, the real benefits of a greater focus on crime than criminality would be for crime policy. The fundamental attribution error is the main impediment to formulating a broader set of policies to control crime. Nearly everyone believes that the best way to control crime is to prevent people from developing into criminals in the first place or, failing that, to use the criminal justice system to deter or rehabilitate them. This has led directly to overuse of the system at vast human and economic cost.

Hardly anyone recognizes — whether politicians, public intellectuals, government policy makers, police or social workers — that focusing on the offender is dealing with only half the problem. We need also to deal with the many and varied ways in which society inadvertently creates the opportunities for crime that motivated offenders exploit by (i) manufacturing crime-prone goods, (ii) practicing poor management in many spheres of everyday life, (iii) permitting poor layout and design of places, (iv) neglecting the security of the vast numbers of electronic systems that regulate our everyday lives and, (v) enacting laws with unintended benefits for crime.

Situational prevention has accumulated dozens of successes in chipping away at some of the problems created by these conditions, which attests to the principles formulated so many years ago in Home Office research. Much more surprising, however, is that the same thing has been happening in every sector of modern life without any assistance from governments or academics. I am referring to the security measures that hundreds, perhaps thousands, of private and public organizations have been taking in the past 2-3 decades to protect themselves from crime.