Tag Archives: securitypolicies

Russia is Banning Telegram

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2018/04/russia_is_banni.html

Russia has banned the secure messaging app Telegram. It’s making an absolute mess of the ban — blocking 16 million IP addresses, many belonging to the Amazon and Google clouds — and it’s not even clear that it’s working. But, more importantly, I’m not convinced Telegram is secure in the first place.

Such a weird story. If you want secure messaging, use Signal. If you’re concerned that having Signal on your phone will itself arouse suspicion, use WhatsApp.

E-Mailing Private HTTPS Keys

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2018/03/e-mailing_priva.html

I don’t know what to make of this story:

The email was sent on Tuesday by the CEO of Trustico, a UK-based reseller of TLS certificates issued by the browser-trusted certificate authorities Comodo and, until recently, Symantec. It was sent to Jeremy Rowley, an executive vice president at DigiCert, a certificate authority that acquired Symantec’s certificate issuance business after Symantec was caught flouting binding industry rules, prompting Google to distrust Symantec certificates in its Chrome browser. In communications earlier this month, Trustico notified DigiCert that 50,000 Symantec-issued certificates Trustico had resold should be mass revoked because of security concerns.

When Rowley asked for proof the certificates were compromised, the Trustico CEO emailed the private keys of 23,000 certificates, according to an account posted to a Mozilla security policy forum. The report produced a collective gasp among many security practitioners who said it demonstrated a shockingly cavalier treatment of the digital certificates that form one of the most basic foundations of website security.

Generally speaking, private keys for TLS certificates should never be archived by resellers, and, even in the rare cases where such storage is permissible, they should be tightly safeguarded. A CEO being able to attach the keys for 23,000 certificates to an email raises troubling concerns that those types of best practices weren’t followed.

I am croggled by the multiple layers of insecurity here.
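
As background on why this is so jarring: the whole point of a certificate signing request is that the private key never has to leave the machine that generated it; only the CSR is sent to the CA or reseller. A minimal sketch of that workflow, assuming Python's third-party cryptography package and a placeholder domain:

```python
# Sketch: generate a TLS key pair and CSR locally so the private key
# never needs to be shared with a reseller or CA.
# Assumes the third-party "cryptography" package; "example.com" is a placeholder.
from cryptography import x509
from cryptography.hazmat.primitives import hashes, serialization
from cryptography.hazmat.primitives.asymmetric import rsa
from cryptography.x509.oid import NameOID

# Private key: generated and stored only on the requester's machine.
key = rsa.generate_private_key(public_exponent=65537, key_size=2048)

# Certificate signing request: contains only the public key and identity.
csr = (
    x509.CertificateSigningRequestBuilder()
    .subject_name(x509.Name([x509.NameAttribute(NameOID.COMMON_NAME, "example.com")]))
    .sign(key, hashes.SHA256())
)

# The CSR is what gets sent to the CA or reseller...
print(csr.public_bytes(serialization.Encoding.PEM).decode())

# ...while the private key stays local, encrypted at rest.
with open("example.com.key", "wb") as f:
    f.write(key.private_bytes(
        encoding=serialization.Encoding.PEM,
        format=serialization.PrivateFormat.PKCS8,
        encryption_algorithm=serialization.BestAvailableEncryption(b"change-this-passphrase"),
    ))
```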

BoingBoing post.

The TSA’s Selective Laptop Ban

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2017/03/the_tsas_select.html

Last Monday, the TSA announced a peculiar new security measure to take effect within 96 hours. Passengers flying into the US on foreign airlines from eight Muslim countries would be prohibited from carrying aboard any electronics larger than a smartphone. Those devices would have to be checked and put into the cargo hold. And now the UK is following suit.

It’s difficult to make sense of this as a security measure, particularly at a time when many people question the veracity of government orders, but other explanations are either unsatisfying or damning.

So let’s look at the security aspects of this first. Laptop computers aren’t inherently dangerous, but they’re convenient carrying boxes. This is why, in the past, TSA officials have demanded passengers turn their laptops on: to confirm that they’re actually laptops and not laptop cases emptied of their electronics and then filled with explosives.

Forcing a would-be bomber to put larger laptops in the plane’s hold is a reasonable defense against this threat, because it increases the complexity of the plot. Both the shoe-bomber Richard Reid and the underwear bomber Umar Farouk Abdulmutallab carried crude bombs aboard their planes with the plan to set them off manually once aloft. Setting off a bomb in checked baggage is more work, which is why we don’t see more midair explosions like Pan Am Flight 103 over Lockerbie, Scotland, in 1988.

Security measures that restrict what passengers can carry onto planes are not unprecedented either. Airport security regularly responds to both actual attacks and intelligence regarding future attacks. After the liquid bombers were captured in 2006, the British banned all carry-on luggage except passports and wallets. I remember talking with a friend who traveled home from London with his daughters in those early weeks of the ban. They reported that airport security officials confiscated every tube of lip balm they tried to hide.

Similarly, the US started checking shoes after Reid, installed full-body scanners after Abdulmutallab, and restricted liquids in 2006. But all of those measures were global, and most lessened in severity as the threat diminished.

This current restriction implies some specific intelligence of a laptop-based plot and a temporary ban to address it. However, if that’s the case, why only certain non-US carriers? And why only certain airports? Terrorists are smart enough to put a laptop bomb in checked baggage from the Middle East to Europe and then carry it on from Europe to the US.

Why not require passengers to turn their laptops on as they go through security? That would be a more effective security measure than forcing them to check them in their luggage. And lastly, why is there a delay between the ban being announced and it taking effect?

Even more confusing, the New York Times reported that “officials called the directive an attempt to address gaps in foreign airport security, and said it was not based on any specific or credible threat of an imminent attack.” The Department of Homeland Security FAQ page makes this general statement, “Yes, intelligence is one aspect of every security-related decision,” but doesn’t provide a specific security threat. And yet a report from the UK states the ban “follows the receipt of specific intelligence reports.”

Of course, the details are all classified, which leaves all of us security experts scratching our heads. On the face of it, the ban makes little sense.

One analysis painted this as a protectionist measure targeted at the heavily subsidized Middle Eastern airlines by hitting them where it hurts the most: high-paying business class travelers who need their laptops with them on planes to get work done. That reasoning makes more sense than any security-related explanation, but doesn’t explain why the British extended the ban to UK carriers as well. Or why this measure won’t backfire when those Middle Eastern countries turn around and ban laptops on American carriers in retaliation. And one aviation official told CNN that an intelligence official informed him it was not a “political move.”

In the end, national security measures based on secret information require us to trust the government. That trust is at historic low levels right now, so people both in the US and other countries are rightly skeptical of the unsatisfying official explanations. The new laptop ban highlights this mistrust.

This essay previously appeared on CNN.com.

EDITED TO ADD: Here are two essays that look at the possible political motivations, and fallout, of this ban. And the EFF rightly points out that letting a laptop out of your hands and sight is itself a security risk — for the passenger.

Photocopier Security

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2017/01/photocopier_sec.html

A modern photocopier is basically a computer with a scanner and printer attached. This computer has a hard drive, and scans of images are regularly stored on that drive. This means that when a photocopier is thrown away, that hard drive is filled with pages that the machine copied over its lifetime. As you might expect, some of those pages will contain sensitive information.

This 2011 report was written by the Inspector General of the National Archives and Records Administration (NARA). It found that the organization did nothing to safeguard its photocopiers.

Our audit found that opportunities exist to strengthen controls to ensure photocopier hard drives are protected from potential exposure. Specifically, we found the following weaknesses.

  • NARA lacks appropriate controls to ensure all photocopiers across the agency are accounted for and that any hard drives residing on these machines are tracked and properly sanitized or destroyed prior to disposal.

  • There are no policies documenting security measures to be taken for photocopiers utilized for general use nor are there procedures to ensure photocopier hard drives are sanitized or destroyed prior to disposal or at the end of the lease term.

  • Photocopier lease agreements and contracts do not include a “keep disk” or similar clause as required by NARA’s IT Security Methodology for Media Protection Policy version 5.1.

I don’t mean to single this organization out. Pretty much no one thinks about this security threat.
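
The remediation itself is conceptually simple: before a copier is returned or discarded, its drive should be sanitized or destroyed. Purely as a conceptual sketch (real sanitization should follow NIST SP 800-88-style guidance or the vendor's tooling; the path below is a placeholder):

```python
# Conceptual sketch only: overwrite a copier's disk image with random data
# before disposal. In practice, use vendor sanitization tools or physical
# destruction; the path below is a placeholder.
import os

def overwrite_with_random(path, chunk_size=1024 * 1024):
    """Overwrite every byte of the file or image at `path` with random data."""
    total = os.path.getsize(path)   # for a raw block device, query its size instead
    with open(path, "r+b") as f:
        written = 0
        while written < total:
            chunk = os.urandom(min(chunk_size, total - written))
            f.write(chunk)
            written += len(chunk)
        f.flush()
        os.fsync(f.fileno())

# Example (placeholder path): overwrite_with_random("/tmp/copier-disk.img")
```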

Guessing Credit Card Security Details

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2016/12/guessing_credit.html

Researchers have found that they can guess various credit-card-number security details by spreading their guesses around multiple websites so as not to trigger any alarms.

From a news article:

Mohammed Ali, a PhD student at the university’s School of Computing Science, said: “This sort of attack exploits two weaknesses that on their own are not too severe but when used together, present a serious risk to the whole payment system.

“Firstly, the current online payment system does not detect multiple invalid payment requests from different websites.

“This allows unlimited guesses on each card data field, using up to the allowed number of attempts — typically 10 or 20 guesses — on each website.

“Secondly, different websites ask for different variations in the card data fields to validate an online purchase. This means it’s quite easy to build up the information and piece it together like a jigsaw.

“The unlimited guesses, when combined with the variations in the payment data fields make it frighteningly easy for attackers to generate all the card details one field at a time.

“Each generated card field can be used in succession to generate the next field and so on. If the hits are spread across enough websites then a positive response to each question can be received within two seconds — just like any online payment.

“So even starting with no details at all other than the first six digits — which tell you the bank and card type and so are the same for every card from a single provider — a hacker can obtain the three essential pieces of information to make an online purchase within as little as six seconds.”

That’s card number, expiration date, and CVV code.
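
Some rough arithmetic shows why this is so fast. Assuming the 10 to 20 guesses each site tolerates, roughly 60 plausible expiry dates (cards are typically valid for up to five years), and 1,000 possible three-digit CVVs, only a modest number of merchant sites is needed to cover each field; these figures are illustrative assumptions, not numbers taken from the paper:

```python
# Back-of-the-envelope arithmetic for the distributed guessing attack.
# Assumed figures: ~60 plausible expiry dates (5 years x 12 months),
# 1,000 possible 3-digit CVVs, and 10 tolerated guesses per site.
import math

guesses_per_site = 10          # lower bound quoted in the article
expiry_space = 5 * 12          # months x years a card is typically valid
cvv_space = 10 ** 3            # three-digit CVV

sites_for_expiry = math.ceil(expiry_space / guesses_per_site)
sites_for_cvv = math.ceil(cvv_space / guesses_per_site)

print(f"Sites needed to cover every expiry date: {sites_for_expiry}")   # 6
print(f"Sites needed to cover every CVV:         {sites_for_cvv}")      # 100
# With hundreds of merchants accepting card-not-present payments and the
# queries issued in parallel, each field falls in seconds.
```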

From the paper:

Abstract: This article provides an extensive study of the current practice of online payment using credit and debit cards, and the intrinsic security challenges caused by the differences in how payment sites operate. We investigated the Alexa top-400 online merchants’ payment sites, and realised that the current landscape facilitates a distributed guessing attack. This attack subverts the payment functionality from its intended purpose of validating card details, into helping the attackers to generate all security data fields required to make online transactions. We will show that this attack would not be practical if all payment sites performed the same security checks. As part of our responsible disclosure measure, we notified a selection of payment sites about our findings, and we report on their responses. We will discuss potential solutions to the problem and the practical difficulty to implement these, given the varying technical and business concerns of the involved parties.

BoingBoing post:

The researchers believe this method has already been used in the wild, as part of a spectacular hack against Tesco bank last month.

MasterCard is immune to this hack because they detect the guesses, even though they’re distributed across multiple websites. Visa is not.
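
The article doesn't say how MasterCard detects the attack, but the general shape of a network-level defense is easy to sketch: count failed verification attempts per card across every merchant rather than per merchant. A hypothetical illustration, with an invented threshold and time window:

```python
# Hypothetical sketch of a network-level velocity check: failed verification
# attempts are counted per card across *all* merchants, so spreading guesses
# over many websites no longer hides them. Threshold and window are invented.
import time
from collections import defaultdict, deque

WINDOW_SECONDS = 3600   # look-back window (assumption)
MAX_FAILURES = 10       # network-wide failures tolerated per card (assumption)

_failures = defaultdict(deque)   # card number -> timestamps of failed attempts

def record_failure(card_number, now=None):
    """Record a failed verification attempt; return True if the card should be blocked."""
    now = time.time() if now is None else now
    attempts = _failures[card_number]
    attempts.append(now)
    # Discard attempts that have aged out of the look-back window.
    while attempts and now - attempts[0] > WINDOW_SECONDS:
        attempts.popleft()
    return len(attempts) > MAX_FAILURES

# Guesses arriving from fifteen different merchants still hit the same counter:
blocked = False
for merchant_id in range(15):
    blocked = record_failure("4111111111111111")
print("blocked:", blocked)   # True once the network-wide threshold is exceeded
```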

How Different Stakeholders Frame Security

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2016/10/how_different_s.html

Josephine Wolff examines different Internet governance stakeholders and how they frame security debates.

Her conclusion:

The tensions that arise around issues of security among different groups of internet governance stakeholders speak to the many tangled notions of what online security is and whom it is meant to protect that are espoused by the participants in multistakeholder governance forums. What makes these debates significant and unique in the context of internet governance is not that the different stakeholders often disagree (indeed, that is a common occurrence), but rather that they disagree while all using the same vocabulary of security to support their respective stances. Government stakeholders advocate for limitations on WHOIS privacy/proxy services in order to aid law enforcement and protect their citizens from crime and fraud. Civil society stakeholders advocate against those limitations in order to aid activists and minorities and protect those online users from harassment. Both sides would claim that their position promotes a more secure internet and a more secure society — and in a sense, both would be right, except that each promotes a differently secure internet and society, protecting different classes of people and behaviour from different threats.

While vague notions of security may be sufficiently universally accepted as to appear in official documents and treaties, the specific details of individual decisions — such as the implementation of dotless domains, changes to the WHOIS database privacy policy, and proposals to grant government greater authority over how their internet traffic is routed — require stakeholders to disentangle the many different ideas embedded in that language. For the idea of security to truly foster cooperation and collaboration as a boundary object in internet governance circles, the participating stakeholders will have to more concretely agree on what their vision of a secure internet is and how it will balance the different ideas of security espoused by different groups. Alternatively, internet governance stakeholders may find it more useful to limit their discussions on security, as a whole, and try to force their discussions to focus on more specific threats and issues within that space as a means of preventing themselves from succumbing to a façade of agreement without grappling with the sources of disagreement that linger just below the surface.

The intersection of multistakeholder internet governance and definitional issues of security is striking because of the way that the multistakeholder model both reinforces and takes advantage of the ambiguity surrounding the idea of security explored in the security studies literature. That ambiguity is a crucial component of maintaining a functional multistakeholder model of governance because it lends itself well to high-level agreements and discussions, contributing to the sense of consensus building across stakeholders. At the same time, gathering those different stakeholders together to decide specific issues related to the internet and its infrastructure brings to a fore the vast variety of definitions of security they employ and forces them to engage in security-versus-security fights, with each trying to promote their own particular notion of security. Security has long been a contested concept, but rarely do these contestations play out as directly and dramatically as in the multistakeholder arena of internet governance, where all parties are able to face off on what really constitutes security in a digital world.

We certainly saw this in the “going dark” debate: e.g. the FBI vs. Apple and their iPhone security.

The Hacking of Yahoo

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2016/09/the_hacking_of_.html

Last week, Yahoo! announced that it was hacked pretty massively in 2014. Over half a billion usernames and passwords were affected, making this the largest data breach of all time.

Yahoo! claimed it was a government that did it:

A recent investigation by Yahoo! Inc. has confirmed that a copy of certain user account information was stolen from the company’s network in late 2014 by what it believes is a state-sponsored actor.

I did a bunch of press interviews after the hack, and repeatedly said that “state-sponsored actor” is often code for “please don’t blame us for our shoddy security because it was a really sophisticated attacker and we can’t be expected to defend ourselves against that.”

Well, it turns out that Yahoo! had shoddy security and it was a bunch of criminals that hacked them. The first story is from the New York Times, and outlines the many ways Yahoo! ignored security issues.

But when it came time to commit meaningful dollars to improve Yahoo’s security infrastructure, Ms. Mayer repeatedly clashed with Mr. Stamos, according to the current and former employees. She denied Yahoo’s security team financial resources and put off proactive security defenses, including intrusion-detection mechanisms for Yahoo’s production systems.

The second story is from the Wall Street Journal:

InfoArmor said the hackers, whom it calls “Group E,” have sold the entire Yahoo database at least three times, including one sale to a state-sponsored actor. But the hackers are engaged in a moneymaking enterprise and have “a significant criminal track record,” selling data to other criminals for spam or to affiliate marketers who aren’t acting on behalf of any government, said Andrew Komarov, chief intelligence officer with InfoArmor Inc.

That is not the profile of a state-sponsored hacker, Mr. Komarov said. “We don’t see any reason to say that it’s state sponsored,” he said. “Their clients are state sponsored, but not the actual hackers.”

Research on the Timing of Security Warnings

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2016/08/research_on_the_2.html

fMRI experiments show that we are more likely to ignore security warnings when they interrupt other tasks.

A new study from BYU, in collaboration with Google Chrome engineers, finds the status quo of warning messages appearing haphazardly — while people are typing, watching a video, uploading files, etc. — results in up to 90 percent of users disregarding them.

Researchers found these times are less effective because of “dual task interference,” a neural limitation where even simple tasks can’t be simultaneously performed without significant performance loss. Or, in human terms, multitasking.

“We found that the brain can’t handle multitasking very well,” said study coauthor and BYU information systems professor Anthony Vance. “Software developers categorically present these messages without any regard to what the user is doing. They interrupt us constantly and our research shows there’s a high penalty that comes by presenting these messages at random times.”

[…]

For part of the study, researchers had participants complete computer tasks while an fMRI scanner measured their brain activity. The experiment showed neural activity was substantially reduced when security messages interrupted a task, as compared to when a user responded to the security message itself.

The BYU researchers used the functional MRI data as they collaborated with a team of Google Chrome security engineers to identify better times to display security messages during the browsing experience.
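
The practical upshot is a scheduling change rather than anything neural: hold non-urgent warnings until a natural break instead of interrupting mid-task. A hypothetical sketch of that idea (this is not Chrome's actual implementation, and the idle threshold is invented):

```python
# Hypothetical sketch of "defer until a natural break": queue non-urgent
# security warnings and surface them only after a stretch of user inactivity,
# instead of interrupting mid-task. The idle threshold is an assumption.
import time
from collections import deque

class DeferredWarnings:
    def __init__(self, idle_threshold=2.0):
        self.idle_threshold = idle_threshold   # seconds of inactivity before showing
        self.pending = deque()
        self.last_activity = time.monotonic()

    def note_user_activity(self):
        """Call whenever the user types, scrolls, uploads, watches video, etc."""
        self.last_activity = time.monotonic()

    def warn(self, message):
        """Queue a warning instead of showing it immediately."""
        self.pending.append(message)

    def maybe_show(self):
        """Call periodically; flush queued warnings only during idle moments."""
        if time.monotonic() - self.last_activity < self.idle_threshold:
            return
        while self.pending:
            print("SECURITY WARNING:", self.pending.popleft())
```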

Research paper. News article.

Frequent Password Changes Is a Bad Security Idea

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2016/08/frequent_passwo.html

I’ve been saying for years that forcing frequent password changes is bad security advice, because it encourages poor passwords. Lorrie Cranor, now the FTC’s chief technologist, agrees:

By studying the data, the researchers identified common techniques account holders used when they were required to change passwords. A password like “tarheels#1”, for instance (excluding the quotation marks) frequently became “tArheels#1” after the first change, “taRheels#1” on the second change and so on. Or it might be changed to “tarheels#11” on the first change and “tarheels#111” on the second. Another common technique was to substitute a digit to make it “tarheels#2”, “tarheels#3”, and so on.

“The UNC researchers said if people have to change their passwords every 90 days, they tend to use a pattern and they do what we call a transformation,” Cranor explained. “They take their old passwords, they change it in some small way, and they come up with a new password.”

The researchers used the transformations they uncovered to develop algorithms that were able to predict changes with great accuracy. Then they simulated real-world cracking to see how well they performed. In online attacks, in which attackers try to make as many guesses as possible before the targeted network locks them out, the algorithm cracked 17 percent of the accounts in fewer than five attempts. In offline attacks performed on the recovered hashes using superfast computers, 41 percent of the changed passwords were cracked within three seconds.

That data refers to this study.
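
The attack described above is easy to picture in code: given a user's previous password, enumerate the small edits people typically make when forced to change it. The rules below are illustrative, based on the examples quoted above, and are not the researchers' actual algorithm:

```python
# Sketch of transformation-based guessing: generate the small edits people
# commonly make to an old password when forced to change it. The rules are
# illustrative, drawn from the examples quoted above.
def transformations(old):
    guesses = set()
    # Toggle the case of each letter in turn ("tarheels#1" -> "tArheels#1").
    for i, ch in enumerate(old):
        if ch.isalpha():
            guesses.add(old[:i] + ch.swapcase() + old[i + 1:])
    if old and old[-1].isdigit():
        # Repeat the trailing digit ("tarheels#1" -> "tarheels#11").
        guesses.add(old + old[-1])
        # Increment the trailing digit ("tarheels#1" -> "tarheels#2").
        guesses.add(old[:-1] + str((int(old[-1]) + 1) % 10))
    return guesses

print(sorted(transformations("tarheels#1")))
```

A cracker seeded with a leaked old password only has to try a handful of such candidates, which is why the predicted changes fall so quickly.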

My advice for choosing a secure password is here.

Security Effectiveness of the Israeli West Bank Barrier

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2016/07/security_effect.html

Interesting analysis:

Abstract: Objectives — Informed by situational crime prevention (SCP) this study evaluates the effectiveness of the “West Bank Barrier” that the Israeli government began to construct in 2002 in order to prevent suicide bombing attacks.

Methods — Drawing on crime wave models of past SCP research, the study uses a time series of terrorist attacks and fatalities and their location in respect to the Barrier, which was constructed in different sections over different periods of time, between 1999 and 2011.

Results — The Barrier together with associated security activities was effective in preventing suicide bombings and other attacks and fatalities with little if any apparent displacement. Changes in terrorist behavior likely resulted from the construction of the Barrier, not from other external factors or events.

Conclusions — In some locations, terrorists adapted to changed circumstances by committing more opportunistic attacks that require less planning. Fatalities and attacks were also reduced on the Palestinian side of the Barrier, producing an expected “diffusion of benefits” though the amount of reduction was considerably more than in past SCP studies. The defensive roles of the Barrier and offensive opportunities it presents, are identified as possible explanations. The study highlights the importance of SCP in crime and counter-terrorism policy.

Unfortunately, the whole paper is behind a paywall.

Note: This is not a political analysis of the net positive and negative effects of the wall, just a security analysis. Of course any full analysis needs to take the geopolitics into account. The comment section is not the place for this broader discussion.

Security Behavior of Pro-ISIS Groups on Social Media

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2016/06/security_behavi.html

Interesting:

Since the team had tracked these groups daily, researchers could observe the tactics that pro-ISIS groups use to evade authorities. They found that 15 percent of groups changed their names during the study period, and 7 percent flipped their visibility from public to members only. Another 4 percent underwent what the researchers called reincarnation. That means the group disappeared completely but popped up later under a new name and earned more than 60 percent of its original followers back.

The researchers compared these behaviors in the pro-ISIS groups to the behaviors of other social groups made up of protestors or social activists (the entire project began in 2013 with a focus on predicting periods of social unrest). The pro-ISIS groups employed more of these strategies, presumably because the groups were under more pressure to evolve as authorities sought to shut them down.

Research paper.

Detecting Explosives

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2016/05/detecting_explo.html

Really interesting article on the difficulties involved with explosive detection at airport security checkpoints.

Abstract: The mid-air bombing of a Somali passenger jet in February was a wake-up call for security agencies and those working in the field of explosive detection. It was also a reminder that terrorist groups from Yemen to Syria to East Africa continue to explore innovative ways to get bombs onto passenger jets by trying to beat detection systems or recruit insiders. The layered state-of-the-art detection systems that are now in place at most airports in the developed world make it very hard for terrorists to sneak bombs onto planes, but the international aviation sector remains vulnerable because many airports in the developing world either have not deployed these technologies or have not provided rigorous training for operators. Technologies and security measures will need to improve to stay one step ahead of innovative terrorists. Given the pattern of recent Islamic State attacks, there is a strong argument for extending state-of-the-art explosive detection systems beyond the aviation sector to locations such as sports arenas and music venues.

I disagree with his conclusions — the last sentence above — but the technical information on explosives detection technology is really interesting.

Security Risks of Shortened URLs

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2016/04/security_risks_11.html

Shortened URLs, produced by services like bit.ly and goo.gl, can be brute-forced. And searching random shortened URLs yields all sorts of secret documents. Plus, many of them can be edited, and can be infected with malware.
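
Some rough numbers show why brute-forcing is feasible. A five- or six-character token drawn from letters and digits gives a keyspace in the billions, which sounds large but is well within reach of distributed scanning; the token length and request rate below are assumptions for illustration:

```python
# Rough arithmetic on why short URL tokens are brute-forceable. The token
# length and scan rate are assumptions for illustration, not measurements
# from the paper.
token_alphabet = 62          # a-z, A-Z, 0-9
token_length = 6             # typical short-URL token length (assumption)
keyspace = token_alphabet ** token_length
print(f"6-character keyspace: {keyspace:,}")          # ~56.8 billion

requests_per_second = 1000   # a modest distributed scanning rate (assumption)
years = keyspace / requests_per_second / (3600 * 24 * 365)
print(f"Years to enumerate at {requests_per_second}/s: {years:.1f}")   # ~1.8

# Even sampling a tiny random fraction of the space turns up live, unlisted
# documents, which is what makes random scanning pay off.
```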

Academic paper. Blog post with lots of detail.

IRS Security

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2016/04/irs_security.html

Monday is Tax Day. Many of us are thinking about our taxes. Are they too high or too low? What’s our money being spent on? Do we have a government worth paying for? I’m not here to answer any of those questions — I’m here to give you something else to think about. In addition to sending the IRS your money, you’re also sending them your data.

It’s a lot of highly personal financial data, so it’s sensitive and important information.

Is that data secure?

The short answer is “no.” Every year, the GAO — Government Accountability Office — reviews IRS security and issues a report. The title of this year’s report kind of says it all: “IRS Needs to Further Improve Controls over Financial and Taxpayer Data.” The details are ugly: failures in identification and authentication of network users, failures to encrypt data, failures in audit and monitoring, and failures to patch vulnerabilities and update software.

To be fair, the GAO can sometimes be pedantic in its evaluations. And the 43 recommendations for the IRS to improve security aren’t being made public, so as not to advertise our vulnerabilities to the bad guys. But this is all pretty basic stuff, and it’s embarrassing.

More importantly, this lack of security is dangerous. We know that cybercriminals are using our financial information to commit fraud. Specifically, they’re using our personal tax information to file for tax refunds in our names and fraudulently collect the money.

We know that foreign governments are targeting U.S. government networks for personal information on U.S. citizens: Remember the OPM data theft that was made public last year in which a federal personnel database with records on 21.5 million people was stolen?

There have been some stories of hacks against IRS databases in the past. I think that the IRS has been hacked even more than is publicly reported, either because the government is keeping the attacks secret or because it doesn’t even realize it’s been attacked.

So what happens next?

If the past is any guide, not a lot. The GAO has been warning about problems with IRS security since it started writing these reports in 2007. In each report, the GAO has issued recommendations for the IRS to improve security. After each report, the IRS did a few of those things, but ignored most of the recommendations. In this year’s report, for example, the GAO complained that the IRS ignored 47 of its 70 recommendations from 2015. In its 2015 report, it complained that the IRS only mitigated 14 of the 69 weaknesses it identified in 2013. The 2012 report didn’t paint IRS security in any better light.

If I had to guess, I’d say the IRS’s security is this bad for the exact same reason that so much corporate network security is so bad: lack of budget. It’s not uncommon for companies to skimp on their security budget. The budget at the IRS has been cut 17% since 2010; I am certain IT security was not exempt from those cuts.

So we’re stuck. We have no choice but to give the IRS our data. The IRS isn’t doing a good job securing our data. Congress isn’t giving the IRS enough budget to do a good job securing our data. Last Tuesday, the Senate Finance Committee urged the IRS to improve its security. We all need to urge Congress to give it the money to do so.

Nothing is absolutely hacker-proof, but there are a lot of security improvements the IRS can make. If we have to give the IRS all our information — and we do — we deserve to have it taken care of properly.

This essay previously appeared on CNN.com.

Smart Essay on the Limitations of Anti-Terrorism Security

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2016/04/smart_essay_on_.html

This is good:

Threats constantly change, yet our political discourse suggests that our vulnerabilities are simply for lack of resources, commitment or competence. Sometimes, that is true. But mostly we are vulnerable because we choose to be; because we’ve accepted, at least implicitly, that some risk is tolerable. A state that could stop every suicide bomber wouldn’t be a free or, let’s face it, fun one.

We will simply never get to maximum defensive posture. Regardless of political affiliation, Americans wouldn’t tolerate the delay or intrusion of an urban mass-transit system that required bag checks and pat-downs. After the 2013 Boston Marathon bombing, many wondered how to make the race safe the next year. A heavier police presence helps, but the only truly safe way to host a marathon is to not have one at all. The risks we tolerate, then, are not necessarily bad bargains simply because an enemy can exploit them.

No matter what promises are made on the campaign trail, terrorism will never be vanquished. There is no ideology, no surveillance, no wall that will definitely stop some 24-year-old from becoming radicalized on the Web, gaining access to guns and shooting a soft target. When we don’t admit this to ourselves, we often swing between the extremes of putting our heads in the sand or losing them entirely.

I am reminded of my own 2006 “Refuse to be Terrorized” essay.
