Tag Archives: securitypolicies

Securing the International IoT Supply Chain

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2020/07/securing_the_in_1.html

Together with Nate Kim (former student) and Trey Herr (Atlantic Council Cyber Statecraft Initiative), I have written a paper on IoT supply chain security. The basic problem we try to solve is: how do you enforce IoT security regulations when most of the stuff is made in other countries? And our solution is: enforce the regulations on the domestic company that’s selling the stuff to consumers. There’s a lot of detail between here and there, though, and it’s all in the paper.

We also wrote a Lawfare post:

…we propose to leverage these supply chains as part of the solution. Selling to U.S. consumers generally requires that IoT manufacturers sell through a U.S. subsidiary or, more commonly, a domestic distributor like Best Buy or Amazon. The Federal Trade Commission can apply regulatory pressure to this distributor to sell only products that meet the requirements of a security framework developed by U.S. cybersecurity agencies. That would put pressure on manufacturers to make sure their products are compliant with the standards set out in this security framework, including pressuring their component vendors and original device manufacturers to make sure they supply parts that meet the recognized security framework.

News article.

Zoom’s Commitment to User Security Depends on Whether you Pay It or Not

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2020/06/zooms_commitmen.html

Zoom was doing so well…. And now we have this:

Corporate clients will get access to Zoom’s end-to-end encryption service now being developed, but Yuan said free users won’t enjoy that level of privacy, which makes it impossible for third parties to decipher communications.

“Free users for sure we don’t want to give that because we also want to work together with FBI, with local law enforcement in case some people use Zoom for a bad purpose,” Yuan said on the call.

This is just dumb. Imagine the scene in the terrorist/drug kingpin/money launderer hideout: “I’m sorry, boss. We could have had strong encryption to secure our bad intentions from the FBI, but we can’t afford the $20.” This decision will only affect protesters and dissidents and human rights workers and journalists.

Here’s advisor Alex Stamos doing damage control:

Nico, it’s incorrect to say that free calls won’t be encrypted and this turns out to be a really difficult balancing act between different kinds of harms. More details here:

Some facts on Zoom’s current plans for E2E encryption, which are complicated by the product requirements for an enterprise conferencing product and some legitimate safety issues. The E2E design is available here: https://github.com/zoom/zoom-e2e-whitepaper/blob/master/zoom_e2e.pdf

I read that document, and it doesn’t explain why end-to-end encryption is only available to paying customers. And note that Stamos said “encrypted” and not “end-to-end encrypted.” He knows the difference.

Anyway, people were rightly incensed by his remarks. And yesterday, Yuan tried to clarify:

Yuan sought to assuage users’ concerns Wednesday in his weekly webinar, saying the company was striving to “do the right thing” for vulnerable groups, including children and hate-crime victims, whose abuse is sometimes broadcast through Zoom’s platform.

“We plan to provide end-to-end encryption to users for whom we can verify identity, thereby limiting harm to vulnerable groups,” he said. “I wanted to clarify that Zoom does not monitor meeting content. We do not have backdoors where participants, including Zoom employees or law enforcement, can enter meetings without being visible to others. None of this will change.”

Notice that he specifically did not say that he was offering end-to-end encryption to users of the free platform. Only to users “for whom we can verify identity,” which I’m guessing means users who give him a credit card number.

The Twitter feed was similarly sloppy and evasive:

We are seeing some misunderstandings on Twitter today around our encryption. We want to provide these facts.

Zoom does not provide information to law enforcement except in circumstances such as child sexual abuse.

Zoom does not proactively monitor meeting content.

Zoom does not have backdoors where Zoom or others can enter meetings without being visible to participants.

AES 256 GCM encryption is turned on for all Zoom users — free and paid.

Those facts have nothing to do with any “misunderstanding.” That was about end-to-end encryption, which the statement very specifically left out of that last sentence. The corporate communications have been anything but clear and consistent.
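The distinction the statement trades on is simple: with the AES-256-GCM scheme described, Zoom’s servers hold (or can obtain) the meeting key, so traffic is encrypted in transit but readable by Zoom. Here is a minimal Python sketch of the difference, using the pyca/cryptography library; the key handling shown is a hypothetical illustration, not Zoom’s actual implementation:

    import os
    from cryptography.hazmat.primitives.ciphers.aead import AESGCM

    # Transport-style encryption: the meeting key is generated and held
    # server-side (hypothetical), so traffic is protected on the wire...
    server_held_key = AESGCM.generate_key(bit_length=256)

    # A client encrypts an audio frame with that key.
    nonce = os.urandom(12)
    ciphertext = AESGCM(server_held_key).encrypt(nonce, b"meeting audio frame", None)

    # ...but anyone holding the key, including the relay server or whoever it
    # cooperates with, can decrypt. That is "encrypted," but it is not
    # "end-to-end encrypted," where only participants' devices ever hold the key.
    assert AESGCM(server_held_key).decrypt(nonce, ciphertext, None) == b"meeting audio frame"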

Come on, Zoom. You were doing so well. Of course you should offer premium features to paying customers, but please don’t include security and privacy in those premium features. They should be available to everyone.

And, hey, this is kind of a dumb time to side with the police over protesters.

I have emailed the CEO, and will report back if I hear anything. But for now, assume that the free version of Zoom will not support end-to-end encryption.

EDITED TO ADD (6/4): Another article.

EDITED TO ADD (6/4): I understand that this is complicated, both technically and politically. (Note, though, Jitsi is doing it.) And, yes, lots of people confused end-to-end encryption with link encryption. (My readers tend to be more sophisticated than that.) My worry is that the “we’ll offer end-to-end encryption only to paying customers we can verify, even though there’s plenty of evidence that ‘bad purpose’ people will just get paid accounts” story plays into the dangerous narrative that encryption itself is dangerous when widely available. And I disagree with the notion that the possibility of child exploitation is a valid reason to deny security to large groups of people.

Matthew Green on this issue. An excerpt:

Once the precedent is set that E2E encryption is too “dangerous” to hand to the masses, the genie is out of the bottle. And once corporate America accepts that private communications are too politically risky to deploy, it’s going to be hard to put it back.

From Signal:

Want to help us work on end-to-end encrypted group video calling functionality that will be free for everyone? Zoom on over to our careers page….

Security of Health Information

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2020/03/security_of_hea.html

The world is racing to contain the new COVID-19 virus that is spreading around the globe with alarming speed. Right now, pandemic disease experts at the World Health Organization (WHO), the US Centers for Disease Control and Prevention (CDC), and other public-health agencies are gathering information to learn how and where the virus is spreading. To do so, they are using a variety of digital communications and surveillance systems. Like much of the medical infrastructure, these systems are highly vulnerable to hacking and interference.

That vulnerability should be deeply concerning. Governments and intelligence agencies have long had an interest in manipulating health information, both in their own countries and abroad. They might do so to prevent mass panic, avert damage to their economies, or avoid public discontent (if officials made grave mistakes in containing an outbreak, for example). Outside their borders, states might use disinformation to undermine their adversaries or disrupt an alliance between other nations. A sudden epidemic — when countries struggle to manage not just the outbreak but its social, economic, and political fallout — is especially tempting for interference.

In the case of COVID-19, such interference is already well underway. That fact should not come as a surprise. States hostile to the West have a long track record of manipulating information about health issues to sow distrust. In the 1980s, for example, the Soviet Union spread the false story that the US Department of Defense bioengineered HIV in order to kill African Americans. This propaganda was effective: some 20 years after the original Soviet disinformation campaign, a 2005 survey found that 48 percent of African Americans believed HIV was concocted in a laboratory, and 15 percent thought it was a tool of genocide aimed at their communities.

More recently, in 2018, Russia undertook an extensive disinformation campaign to amplify the anti-vaccination movement using social media platforms like Twitter and Facebook. Researchers have confirmed that Russian trolls and bots tweeted anti-vaccination messages at up to 22 times the rate of average users. Exposure to these messages, other researchers found, significantly decreased vaccine uptake, endangering individual lives and public health.

Last week, US officials accused Russia of spreading disinformation about COVID-19 in yet another coordinated campaign. Beginning around the middle of January, thousands of Twitter, Facebook, and Instagram accounts — many of which had previously been tied to Russia — had been seen posting nearly identical messages in English, German, French, and other languages, blaming the United States for the outbreak. Some of the messages claimed that the virus is part of a US effort to wage economic war on China, others that it is a biological weapon engineered by the CIA.

As much as this disinformation can sow discord and undermine public trust, the far greater vulnerability lies in the United States’ poorly protected emergency-response infrastructure, including the health surveillance systems used to monitor and track the epidemic. By hacking these systems and corrupting medical data, states with formidable cybercapabilities can change and manipulate data right at the source.

Here is how it would work, and why we should be so concerned. Numerous health surveillance systems are monitoring the spread of COVID-19 cases, including the CDC’s influenza surveillance network. Almost all testing is done at a local or regional level, with public-health agencies like the CDC only compiling and analyzing the data. Only rarely is an actual biological sample sent to a high-level government lab. Many of the clinics and labs providing results to the CDC no longer file reports as in the past, but have several layers of software to store and transmit the data.

Potential vulnerabilities in these systems are legion: hackers exploiting bugs in the software, unauthorized access to a lab’s servers by some other route, or interference with the digital communications between the labs and the CDC. That the software involved in disease tracking sometimes has access to electronic medical records is particularly concerning, because those records are often integrated into a clinic or hospital’s network of digital devices. One such device connected to a single hospital’s network could, in theory, be used to hack into the CDC’s entire COVID-19 database.

In practice, hacking deep into a hospital’s systems can be shockingly easy. As part of a cybersecurity study, Israeli researchers at Ben-Gurion University were able to hack into a hospital’s network via the public Wi-Fi system. Once inside, they could move through most of the hospital’s databases and diagnostic systems. Gaining control of the hospital’s unencrypted image database, the researchers inserted malware that altered healthy patients’ CT scans to show nonexistent tumors. Radiologists reading these images could only distinguish real from altered CTs 60 percent of the time­ — and only after being alerted that some of the CTs had been manipulated.

Another study directly relevant to public-health emergencies showed that a critical US biosecurity initiative, the Department of Homeland Security’s BioWatch program, had been left vulnerable to cyberattackers for over a decade. This program monitors more than 30 US jurisdictions and allows health officials to rapidly detect a bioweapons attack. Hacking this program could cover up an attack, or fool authorities into believing one has occurred.

Fortunately, no case of healthcare sabotage by intelligence agencies or hackers has come to light (the closest has been a series of ransomware attacks extorting money from hospitals, causing significant data breaches and interruptions in medical services). But other critical infrastructure has often been a target. The Russians have repeatedly hacked Ukraine’s national power grid, and have been probing US power plants and grid infrastructure as well. The United States and Israel hacked the Iranian nuclear program, while Iran has targeted Saudi Arabia’s oil infrastructure. There is no reason to believe that public-health infrastructure is in any way off limits.

Despite these precedents and proven risks, a detailed assessment of the vulnerability of US health surveillance systems to infiltration and manipulation has yet to be made. With COVID-19 on the verge of becoming a pandemic, the United States is at risk of not having trustworthy data, which in turn could cripple our country’s ability to respond.

Under normal conditions, there is plenty of time for health officials to notice unusual patterns in the data and track down wrong information — if necessary, using the old-fashioned method of giving the lab a call. But during an epidemic, when there are tens of thousands of cases to track and analyze, it would be easy for exhausted disease experts and public-health officials to be misled by corrupted data. The resulting confusion could lead to misdirected resources, give false reassurance that case numbers are falling, or waste precious time as decision makers try to validate inconsistent data.

In the face of a possible global pandemic, US and international public-health leaders must lose no time assessing and strengthening the security of the country’s digital health systems. They also have an important role to play in the broader debate over cybersecurity. Making America’s health infrastructure safe requires a fundamental reorientation of cybersecurity away from offense and toward defense. The position of many governments, including the United States’, that Internet infrastructure must be kept vulnerable so they can better spy on others, is no longer tenable. A digital arms race, in which more countries acquire ever more sophisticated cyberattack capabilities, only increases US vulnerability in critical areas such as pandemic control. By highlighting the importance of protecting digital health infrastructure, public-health leaders can and should call for a well-defended and peaceful Internet as a foundation for a healthy and secure world.

This essay was co-authored with Margaret Bourdeaux; a slightly different version appeared in Foreign Policy.

EDITED TO ADD: On last week’s squid post, there was a big conversation regarding COVID-19. Many of the comments straddled the line between what are and aren’t the core topics. Yesterday I deleted a bunch for being off-topic. Then I reconsidered and republished some of what I deleted.

Going forward, comments about COVID-19 will be restricted to the security and risk implications of the virus. This includes cybersecurity, security, risk management, surveillance, and containment measures. Comments that stray off those topics will be removed. By clarifying this, I hope to keep the conversation on-topic while also allowing discussion of the security implications of current events.

Thank you for your patience and forbearance on this.

Access Now Is Looking for a Chief Security Officer

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2018/10/access_now_is_l.html

The international digital human rights organization Access Now (I am on the board) is looking to hire a Chief Security Officer.

I believe that, somewhere, there is a highly qualified security person who has had enough of corporate life and wants instead to make a difference in the world. If that’s you, please consider applying.

The US National Cyber Strategy

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2018/10/the_us_national.html

Last month, the White House released the “National Cyber Strategy of the United States of America.” I generally don’t have much to say about these sorts of documents. They’re filled with broad generalities. Who can argue with:

Defend the homeland by protecting networks, systems, functions, and data;

Promote American prosperity by nurturing a secure, thriving digital economy and fostering strong domestic innovation;

Preserve peace and security by strengthening the ability of the United States, in concert with allies and partners, to deter and, if necessary, punish those who use cyber tools for malicious purposes; and

Expand American influence abroad to extend the key tenets of an open, interoperable, reliable, and secure Internet.

The devil is in the details, of course. And the strategy includes no details.

In a New York Times op-ed, Josephine Wolff argues that this new strategy, together with the more-detailed Department of Defense cyber strategy and the classified National Security Presidential Memorandum 13, represent a dangerous shift of US cybersecurity posture from defensive to offensive:

…the National Cyber Strategy represents an abrupt and reckless shift in how the United States government engages with adversaries online. Instead of continuing to focus on strengthening defensive technologies and minimizing the impact of security breaches, the Trump administration plans to ramp up offensive cyberoperations. The new goal: deter adversaries through pre-emptive cyberattacks and make other nations fear our retaliatory powers.

[…]

The Trump administration’s shift to an offensive approach is designed to escalate cyber conflicts, and that escalation could be dangerous. Not only will it detract resources and attention from the more pressing issues of defense and risk management, but it will also encourage the government to act recklessly in directing cyberattacks at targets before they can be certain of who those targets are and what they are doing.

[…]

There is no evidence that pre-emptive cyberattacks will serve as effective deterrents to our adversaries in cyberspace. In fact, every time a country has initiated an unprompted cyberattack, it has invariably led to more conflict and has encouraged retaliatory breaches rather than deterring them. Nearly every major publicly known online intrusion that Russia or North Korea has perpetrated against the United States has had significant and unpleasant consequences.

Wolff is right; this is reckless. In Click Here to Kill Everybody, I argue for a “defense dominant” strategy: that while offense is essential for defense, when the two are in conflict, it should take a back seat to defense. It’s more complicated than that, of course, and I devote a whole chapter to its implications. But as computers and the Internet become more critical to our lives and society, keeping them secure becomes more important than using them to attack others.

Five-Eyes Intelligence Services Choose Surveillance Over Security

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2018/09/five-eyes_intel.html

The Five Eyes — the intelligence consortium of the rich English-speaking countries (the US, Canada, the UK, Australia, and New Zealand) — have issued a “Statement of Principles on Access to Evidence and Encryption” where they claim their needs for surveillance outweigh everyone’s needs for security and privacy.

…the increasing use and sophistication of certain encryption designs present challenges for nations in combatting serious crimes and threats to national and global security. Many of the same means of encryption that are being used to protect personal, commercial and government information are also being used by criminals, including child sex offenders, terrorists and organized crime groups to frustrate investigations and avoid detection and prosecution.

Privacy laws must prevent arbitrary or unlawful interference, but privacy is not absolute. It is an established principle that appropriate government authorities should be able to seek access to otherwise private information when a court or independent authority has authorized such access based on established legal standards. The same principles have long permitted government authorities to search homes, vehicles, and personal effects with valid legal authority.

The increasing gap between the ability of law enforcement to lawfully access data and their ability to acquire and use the content of that data is a pressing international concern that requires urgent, sustained attention and informed discussion on the complexity of the issues and interests at stake. Otherwise, court decisions about legitimate access to data are increasingly rendered meaningless, threatening to undermine the systems of justice established in our democratic nations.

To put it bluntly, this is reckless and shortsighted. I’ve repeatedly written about why this can’t be done technically, and why trying results in insecurity. But there’s a greater principle at stake: we need to decide, as nations and as a society, to put defense first. We need a “defense dominant” strategy for securing the Internet and everything attached to it.

This is important. Our national security depends on the security of our technologies. Demanding that technology companies add backdoors to computers and communications systems puts us all at risk. We need to understand that these systems are critical not only to our society but, now that they can affect the world in a direct physical manner, to our lives and property as well.

This is what I just wrote, in Click Here to Kill Everybody:

There is simply no way to secure US networks while at the same time leaving foreign networks open to eavesdropping and attack. There’s no way to secure our phones and computers from criminals and terrorists without also securing the phones and computers of those criminals and terrorists. On the generalized worldwide network that is the Internet, anything we do to secure its hardware and software secures it everywhere in the world. And everything we do to keep it insecure similarly affects the entire world.

This leaves us with a choice: either we secure our stuff, and as a side effect also secure their stuff; or we keep their stuff vulnerable, and as a side effect keep our own stuff vulnerable. It’s actually not a hard choice. An analogy might bring this point home. Imagine that every house could be opened with a master key, and this was known to the criminals. Fixing those locks would also mean that criminals’ safe houses would be more secure, but it’s pretty clear that this downside would be worth the trade-off of protecting everyone’s house. With the Internet+ increasing the risks from insecurity dramatically, the choice is even more obvious. We must secure the information systems used by our elected officials, our critical infrastructure providers, and our businesses.

Yes, increasing our security will make it harder for us to eavesdrop on, and attack, our enemies in cyberspace. (It won’t make it impossible for law enforcement to solve crimes; I’ll get to that later in this chapter.) Regardless, it’s worth it. If we are ever going to secure the Internet+, we need to prioritize defense over offense in all of its aspects. We’ve got more to lose through our Internet+ vulnerabilities than our adversaries do, and more to gain through Internet+ security. We need to recognize that the security benefits of a secure Internet+ greatly outweigh the security benefits of a vulnerable one.

We need to have this debate at the level of national security. Putting spy agencies in charge of this trade-off is wrong, and will result in bad decisions.

Cory Doctorow has a good reaction.

Slashdot post.

Russia is Banning Telegram

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2018/04/russia_is_banni.html

Russia has banned the secure messaging app Telegram. It’s making an absolute mess of the ban — blocking 16 million IP addresses, many belonging to the Amazon and Google clouds — and it’s not even clear that it’s working. But, more importantly, I’m not convinced Telegram is secure in the first place.

Such a weird story. If you want secure messaging, use Signal. If you’re concerned that having Signal on your phone will itself arouse suspicion, use WhatsApp.

E-Mailing Private HTTPS Keys

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2018/03/e-mailing_priva.html

I don’t know what to make of this story:

The email was sent on Tuesday by the CEO of Trustico, a UK-based reseller of TLS certificates issued by the browser-trusted certificate authorities Comodo and, until recently, Symantec. It was sent to Jeremy Rowley, an executive vice president at DigiCert, a certificate authority that acquired Symantec’s certificate issuance business after Symantec was caught flouting binding industry rules, prompting Google to distrust Symantec certificates in its Chrome browser. In communications earlier this month, Trustico notified DigiCert that 50,000 Symantec-issued certificates Trustico had resold should be mass revoked because of security concerns.

When Rowley asked for proof the certificates were compromised, the Trustico CEO emailed the private keys of 23,000 certificates, according to an account posted to a Mozilla security policy forum. The report produced a collective gasp among many security practitioners who said it demonstrated a shockingly cavalier treatment of the digital certificates that form one of the most basic foundations of website security.

Generally speaking, private keys for TLS certificates should never be archived by resellers, and, even in the rare cases where such storage is permissible, they should be tightly safeguarded. A CEO being able to attach the keys for 23,000 certificates to an email raises troubling concerns that those types of best practices weren’t followed.

I am croggled by the multiple layers of insecurity here.
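Part of what makes this so croggling is that the normal issuance flow never gives a reseller the private key at all: the site operator generates the key pair locally and sends the CA (or reseller) only a certificate signing request, which contains the public key. A sketch of that flow in Python with the pyca/cryptography library (the domain and file names are hypothetical):

    from cryptography import x509
    from cryptography.hazmat.primitives import hashes, serialization
    from cryptography.hazmat.primitives.asymmetric import rsa
    from cryptography.x509.oid import NameOID

    # Generate the key pair locally; the private key never leaves this machine.
    key = rsa.generate_private_key(public_exponent=65537, key_size=2048)

    # Build a certificate signing request (CSR) -- the only artifact sent to
    # the CA or reseller. It carries the public key, not the private one.
    csr = (
        x509.CertificateSigningRequestBuilder()
        .subject_name(x509.Name([
            x509.NameAttribute(NameOID.COMMON_NAME, u"example.com"),  # hypothetical domain
        ]))
        .sign(key, hashes.SHA256())
    )

    with open("example.csr", "wb") as f:  # hypothetical filename
        f.write(csr.public_bytes(serialization.Encoding.PEM))

    # The private key stays local, encrypted at rest.
    with open("example.key", "wb") as f:
        f.write(key.private_bytes(
            serialization.Encoding.PEM,
            serialization.PrivateFormat.TraditionalOpenSSL,
            serialization.BestAvailableEncryption(b"a strong passphrase"),
        ))

Done this way, there are simply no private keys for a reseller to archive, let alone e-mail.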

BoingBoing post.

The TSA’s Selective Laptop Ban

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2017/03/the_tsas_select.html

Last Monday, the TSA announced a peculiar new security measure to take effect within 96 hours. Passengers flying into the US on foreign airlines from eight Muslim countries would be prohibited from carrying aboard any electronics larger than a smartphone. They would have to be checked and put into the cargo hold. And now the UK is following suit.

It’s difficult to make sense of this as a security measure, particularly at a time when many people question the veracity of government orders, but other explanations are either unsatisfying or damning.

So let’s look at the security aspects of this first. Laptop computers aren’t inherently dangerous, but they’re convenient carrying boxes. This is why, in the past, TSA officials have demanded passengers turn their laptops on: to confirm that they’re actually laptops and not laptop cases emptied of their electronics and then filled with explosives.

Forcing a would-be bomber to put larger laptops in the plane’s hold is a reasonable defense against this threat, because it increases the complexity of the plot. Both the shoe-bomber Richard Reid and the underwear bomber Umar Farouk Abdulmutallab carried crude bombs aboard their planes with the plan to set them off manually once aloft. Setting off a bomb in checked baggage is more work, which is why we don’t see more midair explosions like Pan Am Flight 103 over Lockerbie, Scotland, in 1988.

Security measures that restrict what passengers can carry onto planes are not unprecedented either. Airport security regularly responds to both actual attacks and intelligence regarding future attacks. After the liquid bombers were captured in 2006, the British banned all carry-on luggage except passports and wallets. I remember talking with a friend who traveled home from London with his daughters in those early weeks of the ban. They reported that airport security officials confiscated every tube of lip balm they tried to hide.

Similarly, the US started checking shoes after Reid, installed full-body scanners after Abdulmutallab and restricted liquids in 2006. But all of those measures were global, and most lessened in severity as the threat diminished.

This current restriction implies some specific intelligence of a laptop-based plot and a temporary ban to address it. However, if that’s the case, why only certain non-US carriers? And why only certain airports? Terrorists are smart enough to put a laptop bomb in checked baggage from the Middle East to Europe and then carry it on from Europe to the US.

Why not require passengers to turn their laptops on as they go through security? That would be a more effective security measure than forcing them to check them in their luggage. And lastly, why is there a delay between the ban being announced and it taking effect?

Even more confusing, the New York Times reported that “officials called the directive an attempt to address gaps in foreign airport security, and said it was not based on any specific or credible threat of an imminent attack.” The Department of Homeland Security FAQ page makes this general statement, “Yes, intelligence is one aspect of every security-related decision,” but doesn’t provide a specific security threat. And yet a report from the UK states the ban “follows the receipt of specific intelligence reports.”

Of course, the details are all classified, which leaves all of us security experts scratching our heads. On the face of it, the ban makes little sense.

One analysis painted this as a protectionist measure targeted at the heavily subsidized Middle Eastern airlines by hitting them where it hurts the most: high-paying business class travelers who need their laptops with them on planes to get work done. That reasoning makes more sense than any security-related explanation, but doesn’t explain why the British extended the ban to UK carriers as well. Or why this measure won’t backfire when those Middle Eastern countries turn around and ban laptops on American carriers in retaliation. And one aviation official told CNN that an intelligence official informed him it was not a “political move.”

In the end, national security measures based on secret information require us to trust the government. That trust is at historic low levels right now, so people both in the US and other countries are rightly skeptical of the official unsatisfying explanations. The new laptop ban highlights this mistrust.

This essay previously appeared on CNN.com.

EDITED TO ADD: Here are two essays that look at the possible political motivations, and fallout, of this ban. And the EFF rightly points out that letting a laptop out of your hands and sight is itself a security risk — for the passenger.

Photocopier Security

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2017/01/photocopier_sec.html

A modern photocopier is basically a computer with a scanner and printer attached. This computer has a hard drive, and scans of images are regularly stored on that drive. This means that when a photocopier is thrown away, that hard drive is filled with pages that the machine copied over its lifetime. As you might expect, some of those pages will contain sensitive information.

This 2011 report was written by the Inspector General of the National Archives and Records Administration (NARA). It found that the organization did nothing to safeguard its photocopiers.

Our audit found that opportunities exist to strengthen controls to ensure photocopier hard drives are protected from potential exposure. Specifically, we found the following weaknesses.

  • NARA lacks appropriate controls to ensure all photocopiers across the agency are accounted for and that any hard drives residing on these machines are tracked and properly sanitized or destroyed prior to disposal.
  • There are no policies documenting security measures to be taken for photocopiers utilized for general use, nor are there procedures to ensure photocopier hard drives are sanitized or destroyed prior to disposal or at the end of the lease term.
  • Photocopier lease agreements and contracts do not include a “keep disk” or similar clause as required by NARA’s IT Security Methodology for Media Protection Policy version 5.1.

I don’t mean to single this organization out. Pretty much no one thinks about this security threat.
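For organizations that do think about it, the fix is procedural: track the drives and sanitize them before the machine leaves the building. A toy Python sketch of a single overwrite pass over a copier’s disk image (the path is hypothetical; real disposal should follow a standard such as NIST SP 800-88, and overwriting is not reliable for flash media):

    import os

    def overwrite(path, passes=1, chunk=1024 * 1024):
        """Overwrite a file in place with random bytes, chunk by chunk."""
        size = os.path.getsize(path)
        with open(path, "r+b") as f:
            for _ in range(passes):
                f.seek(0)
                remaining = size
                while remaining:
                    n = min(chunk, remaining)
                    f.write(os.urandom(n))
                    remaining -= n
            f.flush()
            os.fsync(f.fileno())  # force the writes out to the device

    overwrite("/tmp/copier_disk.img")  # hypothetical disk image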

Guessing Credit Card Security Details

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2016/12/guessing_credit.html

Researchers have found that they can guess various credit-card-number security details by spreading their guesses around multiple websites so as not to trigger any alarms.

From a news article:

Mohammed Ali, a PhD student at the university’s School of Computing Science, said: “This sort of attack exploits two weaknesses that on their own are not too severe but when used together, present a serious risk to the whole payment system.

“Firstly, the current online payment system does not detect multiple invalid payment requests from different websites.

“This allows unlimited guesses on each card data field, using up to the allowed number of attempts — typically 10 or 20 guesses — on each website.

“Secondly, different websites ask for different variations in the card data fields to validate an online purchase. This means it’s quite easy to build up the information and piece it together like a jigsaw.

“The unlimited guesses, when combined with the variations in the payment data fields make it frighteningly easy for attackers to generate all the card details one field at a time.

“Each generated card field can be used in succession to generate the next field and so on. If the hits are spread across enough websites then a positive response to each question can be received within two seconds — just like any online payment.

“So even starting with no details at all other than the first six digits — which tell you the bank and card type and so are the same for every card from a single provider — a hacker can obtain the three essential pieces of information to make an online purchase within as little as six seconds.”

That’s card number, expiration date, and CVV code.
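To make the mechanics concrete, here is a toy simulation of the distributed-guessing idea; it is not the researchers’ code, and the site model and numbers are hypothetical. Each merchant allows only a few attempts per card, so the attacker rotates guesses for one field, here the expiry date, across many merchants:

    import itertools

    # Toy model of one merchant site that validates an expiry date and locks
    # a card out after a small number of attempts (hypothetical behavior).
    class MockSite:
        MAX_ATTEMPTS = 10

        def __init__(self, true_expiry):
            self.true_expiry = true_expiry
            self.attempts = 0

        def check(self, expiry):
            if self.attempts >= self.MAX_ATTEMPTS:
                return False  # locked out; further guesses here are wasted
            self.attempts += 1
            return expiry == self.true_expiry

    # Cards are valid for at most about five years, so there are only ~60
    # candidate (month, year) pairs to try.
    candidates = [(m, y) for y in range(2017, 2022) for m in range(1, 13)]

    true_expiry = (11, 2019)  # hypothetical
    sites = [MockSite(true_expiry) for _ in range(40)]  # many independent merchants

    # Rotate guesses across sites so no single site sees enough failures to alarm.
    for guess, site in zip(candidates, itertools.cycle(sites)):
        if site.check(guess):
            print("expiry found:", guess)
            break

The same rotation then works for the CVV’s 1,000 possibilities, which is why the paper notes the attack would not be practical if all payment sites performed the same security checks.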

From the paper:

Abstract: This article provides an extensive study of the current practice of online payment using credit and debit cards, and the intrinsic security challenges caused by the differences in how payment sites operate. We investigated the Alexa top-400 online merchants’ payment sites, and realised that the current landscape facilitates a distributed guessing attack. This attack subverts the payment functionality from its intended purpose of validating card details, into helping the attackers to generate all security data fields required to make online transactions. We will show that this attack would not be practical if all payment sites performed the same security checks. As part of our responsible disclosure measure, we notified a selection of payment sites about our findings, and we report on their responses. We will discuss potential solutions to the problem and the practical difficulty to implement these, given the varying technical and business concerns of the involved parties.

BoingBoing post:

The researchers believe this method has already been used in the wild, as part of a spectacular hack against Tesco bank last month.

MasterCard is immune to this hack because they detect the guesses, even though they’re distributed across multiple websites. Visa is not.

How Different Stakeholders Frame Security

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2016/10/how_different_s.html

Josephine Wolff examines different Internet governance stakeholders and how they frame security debates.

Her conclusion:

The tensions that arise around issues of security among different groups of internet governance stakeholders speak to the many tangled notions of what online security is and whom it is meant to protect that are espoused by the participants in multistakeholder governance forums. What makes these debates significant and unique in the context of internet governance is not that the different stakeholders often disagree (indeed, that is a common occurrence), but rather that they disagree while all using the same vocabulary of security to support their respective stances. Government stakeholders advocate for limitations on WHOIS privacy/proxy services in order to aid law enforcement and protect their citizens from crime and fraud. Civil society stakeholders advocate against those limitations in order to aid activists and minorities and protect those online users from harassment. Both sides would claim that their position promotes a more secure internet and a more secure society — and in a sense, both would be right, except that each promotes a differently secure internet and society, protecting different classes of people and behaviour from different threats.

While vague notions of security may be sufficiently universally accepted as to appear in official documents and treaties, the specific details of individual decisions — such as the implementation of dotless domains, changes to the WHOIS database privacy policy, and proposals to grant government greater authority over how their internet traffic is routed — require stakeholders to disentangle the many different ideas embedded in that language. For the idea of security to truly foster cooperation and collaboration as a boundary object in internet governance circles, the participating stakeholders will have to more concretely agree on what their vision of a secure internet is and how it will balance the different ideas of security espoused by different groups. Alternatively, internet governance stakeholders may find it more useful to limit their discussions on security, as a whole, and try to force their discussions to focus on more specific threats and issues within that space as a means of preventing themselves from succumbing to a façade of agreement without grappling with the sources of disagreement that linger just below the surface.

The intersection of multistakeholder internet governance and definitional issues of security is striking because of the way that the multistakeholder model both reinforces and takes advantage of the ambiguity surrounding the idea of security explored in the security studies literature. That ambiguity is a crucial component of maintaining a functional multistakeholder model of governance because it lends itself well to high-level agreements and discussions, contributing to the sense of consensus building across stakeholders. At the same time, gathering those different stakeholders together to decide specific issues related to the internet and its infrastructure brings to the fore the vast variety of definitions of security they employ and forces them to engage in security-versus-security fights, with each trying to promote their own particular notion of security. Security has long been a contested concept, but rarely do these contestations play out as directly and dramatically as in the multistakeholder arena of internet governance, where all parties are able to face off on what really constitutes security in a digital world.

We certainly saw this in the “going dark” debate: e.g. the FBI vs. Apple and their iPhone security.

The Hacking of Yahoo

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2016/09/the_hacking_of_.html

Last week, Yahoo! announced that it was hacked pretty massively in 2014. Over half a billion usernames and passwords were affected, making this the largest data breach of all time.

Yahoo! claimed it was a government that did it:

A recent investigation by Yahoo! Inc. has confirmed that a copy of certain user account information was stolen from the company’s network in late 2014 by what it believes is a state-sponsored actor.

I did a bunch of press interviews after the hack, and repeatedly said that “state-sponsored actor” is often code for “please don’t blame us for our shoddy security because it was a really sophisticated attacker and we can’t be expected to defend ourselves against that.”

Well, it turns out that Yahoo! had shoddy security and it was a bunch of criminals that hacked them. The first story is from the New York Times, and outlines the many ways Yahoo! ignored security issues.

But when it came time to commit meaningful dollars to improve Yahoo’s security infrastructure, Ms. Mayer repeatedly clashed with Mr. Stamos, according to the current and former employees. She denied Yahoo’s security team financial resources and put off proactive security defenses, including intrusion-detection mechanisms for Yahoo’s production systems.

The second story is from the Wall Street Journal:

InfoArmor said the hackers, whom it calls “Group E,” have sold the entire Yahoo database at least three times, including one sale to a state-sponsored actor. But the hackers are engaged in a moneymaking enterprise and have “a significant criminal track record,” selling data to other criminals for spam or to affiliate marketers who aren’t acting on behalf of any government, said Andrew Komarov, chief intelligence officer with InfoArmor Inc.

That is not the profile of a state-sponsored hacker, Mr. Komarov said. “We don’t see any reason to say that it’s state sponsored,” he said. “Their clients are state sponsored, but not the actual hackers.”

Research on the Timing of Security Warnings

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2016/08/research_on_the_2.html

fMRI experiments show that we are more likely to ignore security warnings when they interrupt other tasks.

A new study from BYU, in collaboration with Google Chrome engineers, finds the status quo of warning messages appearing haphazardly — while people are typing, watching a video, uploading files, etc. — results in up to 90 percent of users disregarding them.

Researchers found these times are less effective because of “dual task interference,” a neural limitation where even simple tasks can’t be simultaneously performed without significant performance loss. Or, in human terms, multitasking.

“We found that the brain can’t handle multitasking very well,” said study coauthor and BYU information systems professor Anthony Vance. “Software developers categorically present these messages without any regard to what the user is doing. They interrupt us constantly and our research shows there’s a high penalty that comes by presenting these messages at random times.”

[…]

For part of the study, researchers had participants complete computer tasks while an fMRI scanner measured their brain activity. The experiment showed neural activity was substantially reduced when security messages interrupted a task, as compared to when a user responded to the security message itself.

The BYU researchers used the functional MRI data as they collaborated with a team of Google Chrome security engineers to identify better times to display security messages during the browsing experience.

Research paper. News article.

Frequent Password Changes Are a Bad Security Idea

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2016/08/frequent_passwo.html

I’ve been saying for years that requiring frequent password changes is bad security advice, and that it encourages poor passwords. Lorrie Cranor, now the FTC’s chief technologist, agrees:

By studying the data, the researchers identified common techniques account holders used when they were required to change passwords. A password like “tarheels#1”, for instance (excluding the quotation marks) frequently became “tArheels#1” after the first change, “taRheels#1” on the second change and so on. Or it might be changed to “tarheels#11” on the first change and “tarheels#111” on the second. Another common technique was to substitute a digit to make it “tarheels#2”, “tarheels#3”, and so on.

“The UNC researchers said if people have to change their passwords every 90 days, they tend to use a pattern and they do what we call a transformation,” Cranor explained. “They take their old passwords, they change it in some small way, and they come up with a new password.”

The researchers used the transformations they uncovered to develop algorithms that were able to predict changes with great accuracy. Then they simulated real-world cracking to see how well they performed. In online attacks, in which attackers try to make as many guesses as possible before the targeted network locks them out, the algorithm cracked 17 percent of the accounts in fewer than five attempts. In offline attacks performed on the recovered hashes using superfast computers, 41 percent of the changed passwords were cracked within three seconds.

That data refers to this study.
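The transformations are simple enough to enumerate in a few lines. Here is a toy Python sketch (not the UNC researchers’ actual algorithm) that generates candidate new passwords from an expired one:

    # Toy sketch of the "transformation" idea: enumerate the small edits
    # users commonly make to an old password when forced to change it.
    def transformations(old):
        guesses = set()
        # Toggle one letter's case: "tarheels#1" -> "tArheels#1", "taRheels#1", ...
        for i, ch in enumerate(old):
            if ch.isalpha():
                guesses.add(old[:i] + ch.swapcase() + old[i + 1:])
        # Bump or duplicate a trailing digit: "tarheels#1" -> "tarheels#2", "tarheels#11"
        if old and old[-1].isdigit():
            d = int(old[-1])
            guesses.add(old[:-1] + str((d + 1) % 10))
            guesses.add(old + old[-1])
        return guesses

    # An attacker who knows the old password tries this small candidate set
    # first, which is why a handful of online guesses goes such a long way.
    print(sorted(transformations("tarheels#1")))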

My advice for choosing a secure password is here.