Tag Archives: cyberweapons

Zero-Click iPhone Exploits

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2021/09/zero-click-iphone-exploits.html

Citizen Lab is reporting on two zero-click iMessage exploits, in spyware sold by the cyberweapons arms manufacturer NSO Group to the Bahraini government.

These are particularly scary exploits, since they don’t require the victim to do anything, like click on a link or open a file. The victim receives a text message, and then they are hacked.

More on this here.

Paragon: Yet Another Cyberweapons Arms Manufacturer

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2021/08/paragon-yet-another-cyberweapons-arms-manufacturer.html

Forbes has the story:

Paragon’s product will also likely get spyware critics and surveillance experts alike rubbernecking: It claims to give police the power to remotely break into encrypted instant messaging communications, whether that’s WhatsApp, Signal, Facebook Messenger or Gmail, the industry sources said. One other spyware industry executive said it also promises to get longer-lasting access to a device, even when it’s rebooted.

[…]

Two industry sources said they believed Paragon was trying to set itself apart further by promising to get access to the instant messaging applications on a device, rather than taking complete control of everything on a phone. One of the sources said they understood that Paragon’s spyware exploits the protocols of end-to-end encrypted apps, meaning it would hack into messages via vulnerabilities in the core ways in which the software operates.

Read that last sentence again: Paragon uses unpatched zero-day exploits in the software to hack messaging apps.

NSO Group Hacked

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2021/07/nso-group-hacked.html

NSO Group, the Israeli cyberweapons arms manufacturer behind the Pegasus spyware — used by authoritarian regimes around the world to spy on dissidents, journalists, human rights workers, and others — was hacked. Or, at least, an enormous trove of documents was leaked to journalists.

There’s a lot to read out there. Amnesty International has a report. Citizen Lab conducted an independent analysis. The Guardian has extensive coverage. More coverage.

Most interesting is a list of over 50,000 phone numbers that were being spied on by NSO Group’s software. Why does NSO Group have that list? The obvious answer is that NSO Group provides spyware-as-a-service, and centralizes operations somehow. Nicholas Weaver postulates that “part of the reason that NSO keeps a master list of targeting…is they hand it off to Israeli intelligence.”

This isn’t the first time NSO Group has been in the news. Citizen Lab has been researching and reporting on its actions since 2016. It’s been linked to the Saudi murder of Jamal Khashoggi. It is extensively used by Mexico to spy on — among others — supporters of that country’s soda tax.

NSO Group seems to be a completely deplorable company, so it’s hard to have any sympathy for it. As I previously wrote about another hack of another cyberweapons arms manufacturer: “It’s one thing to have dissatisfied customers. It’s another to have dissatisfied customers with death squads.” I’d like to say that I don’t know how the company will survive this, but — sadly — I think it will.

Finally: here’s a tool that you can use to test if your iPhone or Android is infected with Pegasus. (Note: it’s not easy to use.)

Candiru: Another Cyberweapons Arms Manufacturer

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2021/07/candiru-another-cyberweapons-arms-manufacturer.html

Citizen Lab has identified yet another Israeli company that sells spyware to governments around the world: Candiru.

From the report:

Summary:

  • Candiru is a secretive Israel-based company that sells spyware exclusively to governments. Reportedly, their spyware can infect and monitor iPhones, Androids, Macs, PCs, and cloud accounts.
  • Using Internet scanning we identified more than 750 websites linked to Candiru’s spyware infrastructure. We found many domains masquerading as advocacy organizations such as Amnesty International, the Black Lives Matter movement, as well as media companies, and other civil-society themed entities.
  • We identified a politically active victim in Western Europe and recovered a copy of Candiru’s Windows spyware.
  • Working with Microsoft Threat Intelligence Center (MSTIC) we analyzed the spyware, resulting in the discovery of CVE-2021-31979 and CVE-2021-33771 by Microsoft, two privilege escalation vulnerabilities exploited by Candiru. Microsoft patched both vulnerabilities on July 13th, 2021.
  • As part of their investigation, Microsoft observed at least 100 victims in Palestine, Israel, Iran, Lebanon, Yemen, Spain, United Kingdom, Turkey, Armenia, and Singapore. Victims include human rights defenders, dissidents, journalists, activists, and politicians.
  • We provide a brief technical overview of the Candiru spyware’s persistence mechanism and some details about the spyware’s functionality.
  • Candiru has made efforts to obscure its ownership structure, staffing, and investment partners. Nevertheless, we have been able to shed some light on those areas in this report.
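
One way to picture the “masquerading domains” finding above is a crude filter that flags registrations embedding the name of an organization they have nothing to do with. The sketch below is purely illustrative: the keyword table and candidate domains are made up, and this is not Citizen Lab’s actual methodology (their scanning reportedly relied on server fingerprints).

```python
# Purely illustrative sketch: flag candidate domains that embed the name of a
# well-known advocacy or media organization they have no relation to. The
# keyword table and candidate domains are hypothetical, not from the report.

SUSPECT_KEYWORDS = {
    "amnesty": "Amnesty International",
    "blacklivesmatter": "Black Lives Matter",
}

candidate_domains = [                 # hypothetical scan output
    "amnestyreports-updates.com",
    "blacklivesmatter-petitions.net",
    "weather-widgets.example.org",
]

def flag_masquerading(domains, keywords):
    """Return (domain, impersonated organization) pairs for suspicious names."""
    hits = []
    for domain in domains:
        squashed = domain.lower().replace("-", "").replace(".", "")
        for key, org in keywords.items():
            if key in squashed:
                hits.append((domain, org))
    return hits

for domain, org in flag_masquerading(candidate_domains, SUSPECT_KEYWORDS):
    print(f"{domain} looks like it is impersonating {org}")
```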

We’re not going to be able to secure the Internet until we deal with the companies that engage in the international cyber-arms trade.

China Taking Control of Zero-Day Exploits

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2021/07/china-taking-control-of-zero-day-exploits.html

China is making sure that all newly discovered zero-day exploits are disclosed to the government.

Under the new rules, anyone in China who finds a vulnerability must tell the government, which will decide what repairs to make. No information can be given to “overseas organizations or individuals” other than the product’s manufacturer.

No one may “collect, sell or publish information on network product security vulnerabilities,” say the rules issued by the Cyberspace Administration of China and the police and industry ministries.

This just blocks the cyber-arms trade. It doesn’t prevent researchers from telling the products’ companies, even if they are outside of China.

AI-Piloted Fighter Jets

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2021/06/ai-piloted-fighter-jets.html

News from Georgetown’s Center for Security and Emerging Technology:

China Claims Its AI Can Beat Human Pilots in Battle: Chinese state media reported that an AI system had successfully defeated human pilots during simulated dogfights. According to the Global Times report, the system had shot down several PLA pilots during a handful of virtual exercises in recent years. Observers outside China noted that while reports coming out of state-controlled media outlets should be taken with a grain of salt, the capabilities described in the report are not outside the realm of possibility. Last year, for example, an AI agent defeated a U.S. Air Force F-16 pilot five times out of five as part of DARPA’s AlphaDogfight Trial (which we covered at the time). While the Global Times report indicated plans to incorporate AI into future fighter planes, it is not clear how far away the system is from real-world testing. At the moment, the system appears to be used only for training human pilots. DARPA, for its part, is aiming to test dogfights with AI-piloted subscale jets later this year and with full-scale jets in 2023 and 2024.

Mollitiam Industries is the Newest Cyberweapons Arms Manufacturer

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2021/06/mollitiam-industries-is-the-newest-cyberweapons-arms-manufacturer.html

Wired is reporting on a company called Mollitiam Industries:

Marketing materials left exposed online by a third-party claim Mollitiam’s interception products, dubbed “Invisible Man” and “Night Crawler,” are capable of remotely accessing a target’s files, location, and covertly turning on a device’s camera and microphone. Its spyware is also said to be equipped with a keylogger, which means every keystroke made on an infected device — including passwords, search queries and messages sent via encrypted messaging apps — can be tracked and monitored.

To evade detection, the malware makes use of the company’s so-called “invisible low stealth technology” and its Android product is advertised as having “low data and battery consumption” to prevent people from suspecting their phone or tablet has been infected. Mollitiam is also currently marketing a tool that it claims enables “mass surveillance of digital profiles and identities” across social media and the dark web.

Mexican Drug Cartels with High-Tech Spyware

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2020/12/mexican-drug-cartels-with-high-tech-spyware.html

Sophisticated spyware, sold by surveillance tech companies to Mexican government agencies, is ending up in the hands of drug cartels:

As many as 25 private companies — including the Israeli company NSO Group and the Italian firm Hacking Team — have sold surveillance software to Mexican federal and state police forces, but there is little or no regulation of the sector — and no way to control where the spyware ends up, said the officials.

Lots of details in the article. The cyberweapons arms business is immoral in many ways. This is just one of them.

WhatsApp Sues NSO Group

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2019/10/whatsapp_sues_n.html

WhatsApp is suing the Israeli cyberweapons arms manufacturer NSO Group in California court:

WhatsApp’s lawsuit, filed in a California court on Tuesday, has demanded a permanent injunction blocking NSO from attempting to access WhatsApp computer systems and those of its parent company, Facebook.

It has also asked the court to rule that NSO violated US federal law and California state law against computer fraud, breached their contracts with WhatsApp and “wrongfully trespassed” on Facebook’s property.

This could be interesting.

EDITED TO ADD: Citizen Lab has a research paper on the technology involved in this case. WhatsApp has an op-ed on their actions. And this is a good news article on how the attack worked.

EDITED TO ADD: Facebook is deleting the accounts of NSO Group employees.

Dark Caracal: Global Espionage Malware from Lebanon

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2018/01/dark_caracal_gl.html

The EFF and Lookout are reporting on a new piece of spyware operating out of Lebanon. It primarily targets mobile devices compromised by fake versions of secure messaging clients like Signal and WhatsApp.

From the Lookout announcement:

Dark Caracal has operated a series of multi-platform campaigns starting from at least January 2012, according to our research. The campaigns span across 21+ countries and thousands of victims. Types of data stolen include documents, call records, audio recordings, secure messaging client content, contact information, text messages, photos, and account data. We believe this actor is operating their campaigns from a building belonging to the Lebanese General Security Directorate (GDGS) in Beirut.

It looks like a complex infrastructure that’s been well-developed, and continually upgraded and maintained. It appears that a cyberweapons arms manufacturer is selling this tool to different countries. From the full report:

Dark Caracal is using the same infrastructure as was previously seen in the Operation Manul campaign, which targeted journalists, lawyers, and dissidents critical of the government of Kazakhstan.

There’s a lot in the full report. It’s worth reading.

Three news articles.

CIA’s Pandemic Toolkit

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2017/06/cias_pandemic_t.html

WikiLeaks is still dumping CIA cyberweapons on the Internet. Its latest dump is something called “Pandemic”:

The Pandemic leak does not explain what the CIA’s initial infection vector is, but does describe it as a persistent implant.

“As the name suggests, a single computer on a local network with shared drives that is infected with the ‘Pandemic’ implant will act like a ‘Patient Zero’ in the spread of a disease,” WikiLeaks said in its summary description. “‘Pandemic’ targets remote users by replacing application code on-the-fly with a Trojaned version if the program is retrieved from the infected machine.”

The key to evading detection is its ability to modify or replace requested files in transit, hiding its activity by never touching the original file. The new attack then executes only on the machine requesting the file.

Version 1.1 of Pandemic, according to the CIA’s documentation, can target and replace up to 20 different files with a maximum size of 800MB for a single replacement file.

“It will infect remote computers if the user executes programs stored on the pandemic file server,” WikiLeaks said. “Although not explicitly stated in the documents, it seems technically feasible that remote computers that provide file shares themselves become new pandemic file servers on the local network to reach new targets.”

The CIA describes Pandemic as a tool that runs as kernel shellcode that installs a file system filter driver. The driver is used to replace a file with a payload when a user on the local network accesses the file over SMB.
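
Because the implant described above substitutes file content in transit and never touches the original on disk, one crude consistency check (my sketch, not something from the leaked documents) is to hash the same file from the server’s own disk and from a client that reads it over the share, then compare the digests. The paths in the comments are hypothetical.

```python
# Hedged sketch, not from the leaked documents: the described implant swaps
# file contents in transit while leaving the on-disk original untouched, so a
# copy read over the share can differ from the copy the server sees locally.
# Hashing the file from both vantage points and comparing the printed digests
# is one crude way to notice that. The example paths below are hypothetical.
import hashlib
import sys

def sha256_of(path: str) -> str:
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

if __name__ == "__main__":
    # On the file server:   python hashcheck.py C:\shared\tools\setup.exe
    # On a client machine:  python hashcheck.py \\fileserver\shared\tools\setup.exe
    # If the two printed digests differ, the content changed in transit.
    print(sha256_of(sys.argv[1]))
```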

WikiLeaks page. News article.

Who Are the Shadow Brokers?

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2017/05/who_are_the_sha.html

In 2013, a mysterious group of hackers that calls itself the Shadow Brokers stole a few disks full of NSA secrets. Since last summer, they’ve been dumping these secrets on the Internet. They have publicly embarrassed the NSA and damaged its intelligence-gathering capabilities, while at the same time putting sophisticated cyberweapons in the hands of anyone who wants them. They have exposed major vulnerabilities in Cisco routers, Microsoft Windows, and Linux mail servers, forcing those companies and their customers to scramble. And they gave the authors of the WannaCry ransomware the exploit they needed to infect hundreds of thousands of computers worldwide this month.

After the WannaCry outbreak, the Shadow Brokers threatened to release more NSA secrets every month, giving cybercriminals and other governments worldwide even more exploits and hacking tools.

Who are these guys? And how did they steal this information? The short answer is: we don’t know. But we can make some educated guesses based on the material they’ve published.

The Shadow Brokers suddenly appeared last August, when they published a series of hacking tools and computer exploits — vulnerabilities in common software — from the NSA. The material was from autumn 2013, and seems to have been collected from an external NSA staging server, a machine that is owned, leased, or otherwise controlled by the US, but with no connection to the agency. NSA hackers find obscure corners of the Internet to hide the tools they need as they go about their work, and it seems the Shadow Brokers successfully hacked one of those caches.

In total, the group has published four sets of NSA material: a set of exploits and hacking tools against routers, the devices that direct data throughout computer networks; a similar collection against mail servers; another collection against Microsoft Windows; and a working directory of an NSA analyst breaking into the SWIFT banking network. Looking at the time stamps on the files and other material, they all come from around 2013. The Windows attack tools, published last month, might be a year or so older, based on which versions of Windows the tools support.

The releases are so different that they’re almost certainly from multiple sources at the NSA. The SWIFT files seem to come from an internal NSA computer, albeit one connected to the Internet. The Microsoft files seem different, too; they don’t have the same identifying information that the router and mail server files do. The Shadow Brokers have released all the material unredacted, without the care journalists took with the Snowden documents or even the care WikiLeaks has taken with the CIA secrets it’s publishing. They also posted anonymous messages in bad English but with American cultural references.

Given all of this, I don’t think the agent responsible is a whistleblower. While possible, it seems like a whistleblower wouldn’t sit on attack tools for three years before publishing. They would act more like Edward Snowden or Chelsea Manning, collecting for a time and then publishing immediately — and publishing documents that discuss what the US is doing to whom. That’s not what we’re seeing here; it’s simply a bunch of exploit code, which doesn’t have the political or ethical implications that a whistleblower would want to highlight. The SWIFT documents are records of an NSA operation, and the other posted files demonstrate that the NSA is hoarding vulnerabilities for attack rather than helping fix them and improve all of our security.

I also don’t think that it’s random hackers who stumbled on these tools and are just trying to harm the NSA or the US. Again, the three-year wait makes no sense. These documents and tools are cyber-Kryptonite; anyone who is secretly hoarding them is in danger from half the intelligence agencies in the world. Additionally, the publication schedule doesn’t make sense for the leakers to be cybercriminals. Criminals would use the hacking tools for themselves, incorporating the exploits into worms and viruses, and generally profiting from the theft.

That leaves a nation state. Whoever got this information years before and is leaking it now has to be both capable of hacking the NSA and willing to publish it all. Countries like Israel and France are capable, but would never publish, because they wouldn’t want to incur the wrath of the US. Countries like North Korea or Iran probably aren’t capable. (Additionally, North Korea is suspected of being behind WannaCry, which was written after the Shadow Brokers released that vulnerability to the public.) As I’ve written previously, the obvious list of countries who fit my two criteria is small: Russia, China, and — I’m out of ideas. And China is currently trying to make nice with the US.

It was generally believed last August, when the first documents were released and before it became politically controversial to say so, that the Russians were behind the leak, and that it was a warning message to President Barack Obama not to retaliate for the Democratic National Committee hacks. Edward Snowden guessed Russia, too. But the problem with the Russia theory is, why? These leaked tools are much more valuable if kept secret. Russia could use the knowledge to detect NSA hacking in its own country and to attack other countries. By publishing the tools, the Shadow Brokers are signaling that they don’t care if the US knows the tools were stolen.

Sure, there’s a chance the attackers knew that the US knew that the attackers knew — and round and round we go. But the “we don’t give a damn” nature of the releases points to an attacker who isn’t thinking strategically: a lone hacker or hacking group, which clashes with the nation-state theory.

This is all speculation on my part, based on discussion with others who don’t have access to the classified forensic and intelligence analysis. Inside the NSA, they have a lot more information. Many of the files published include operational notes and identifying information. NSA researchers know exactly which servers were compromised, and through that know what other information the attackers would have access to. As with the Snowden documents, though, they only know what the attackers could have taken and not what they did take. But they did alert Microsoft about the Windows vulnerability the Shadow Brokers released months in advance. Did they have eavesdropping capability inside whoever stole the files, as they claimed to when the Russians attacked the State Department? We have no idea.

So, how did the Shadow Brokers do it? Did someone inside the NSA accidentally mount the wrong server on some external network? That’s possible, but it seems very unlikely that the organization would make that kind of rookie mistake. Did someone hack the NSA itself? Could there be a mole inside the NSA?

If it is a mole, my guess is that the person was arrested before the Shadow Brokers released anything. No country would burn a mole working for it by publishing what that person delivered while he or she was still in danger. Intelligence agencies know that if they betray a source this severely, they’ll never get another one.

That points to two possibilities. The first is that the files came from Hal Martin. He’s the NSA contractor who was arrested in August for hoarding agency secrets in his house for two years. He can’t be the publisher, because the Shadow Brokers are in business even though he is in prison. But maybe the leaker got the documents from his stash, either because Martin gave the documents to them or because he himself was hacked. The dates line up, so it’s theoretically possible. There’s nothing in the public indictment against Martin that speaks to his selling secrets to a foreign power, but that’s just the sort of thing that would be left out. It’s not needed for a conviction.

If the source of the documents is Hal Martin, then we can speculate that a random hacker did in fact stumble on it — no need for nation-state cyberattack skills.

The other option is a mysterious second NSA leaker of cyberattack tools. Could this be the person who stole the NSA documents and passed them on to someone else? The only time I have ever heard about this was from a Washington Post story about Martin:

There was a second, previously undisclosed breach of cybertools, discovered in the summer of 2015, which was also carried out by a TAO employee [a worker in the Office of Tailored Access Operations], one official said. That individual also has been arrested, but his case has not been made public. The individual is not thought to have shared the material with another country, the official said.

Of course, “not thought to have” is not the same as not having done so.

It is interesting that there have been no public arrests of anyone in connection with these hacks. If the NSA knows where the files came from, it knows who had access to them — and it’s long since questioned everyone involved and should know if someone deliberately or accidentally lost control of them. I know that many people, both inside the government and out, think there is some sort of domestic involvement; things may be more complicated than I realize.

It’s also not over. Last week, the Shadow Brokers were back, with a rambling and taunting message announcing a “Data Dump of the Month” service. They’re offering to sell unreleased NSA attack tools — something they also tried last August — with the threat to publish them if no one pays. The group has made good on their previous boasts: In the coming months, we might see new exploits against web browsers, networking equipment, smartphones, and operating systems — Windows in particular. Even scarier, they’re threatening to release raw NSA intercepts: data from the SWIFT network and banks, and “compromised data from Russian, Chinese, Iranian, or North Korean nukes and missile programs.”

Whoever the Shadow Brokers are, however they stole these disks full of NSA secrets, and for whatever reason they’re releasing them, it’s going to be a long summer inside of Fort Meade — as it will be for the rest of us.

This essay previously appeared in the Atlantic, and is an update of this essay from Lawfare.

Surveillance and our Insecure Infrastructure

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2017/04/surveillance_an_2.html

Since Edward Snowden revealed to the world the extent of the NSA’s global surveillance network, there has been a vigorous debate in the technological community about what its limits should be.

Less discussed is how many of these same surveillance techniques are used by other — smaller, poorer, and more totalitarian — countries to spy on political opponents, dissidents, and human rights defenders; researchers in Toronto have documented some of the many abuses by countries like Ethiopia, the UAE, Iran, Syria, Kazakhstan, Sudan, Ecuador, Malaysia, and China.

That these countries can use network surveillance technologies to violate human rights is a shame on the world, and there’s a lot of blame to go around.

We can point to the governments that are using surveillance against their own citizens.

We can certainly blame the cyberweapons arms manufacturers that are selling those systems, and the countries — mostly European — that allow those arms manufacturers to sell those systems.

There’s a lot more the global Internet community could do to limit the availability of sophisticated Internet and telephony surveillance equipment to totalitarian governments. But I want to focus on another contributing cause to this problem: the fundamental insecurity of our digital systems that makes this a problem in the first place.

IMSI catchers are fake mobile phone towers. They allow someone to impersonate a cell network and collect information about phones in the vicinity of the device, and they’re used to create lists of people who were at a particular event or near a particular location.

Fundamentally, the technology works because the phone in your pocket automatically trusts any cell tower to which it connects. There’s no security in the connection protocols between the phones and the towers.
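
A toy model of that one-way trust helps make the point (this is a deliberate simplification of 2G-era behavior, not a protocol implementation): the tower can challenge the phone, but nothing in the exchange requires the tower to prove it belongs to a real carrier, so a fake tower is accepted and gets to harvest identities.

```python
# Toy model only: a deliberate simplification of 2G-era cell behavior, not a
# protocol implementation. The network authenticates the handset, but the
# handset never authenticates the network, so any "tower" is accepted.

class FakeTower:
    """Stands in for an IMSI catcher: looks like a base station, records IDs."""
    def __init__(self):
        self.captured = []

    def send_challenge(self) -> str:
        return "random-challenge"          # looks just like a real network

    def receive_response(self, imsi: str, response: str) -> None:
        self.captured.append(imsi)         # the identity is harvested here

class Phone:
    def __init__(self, imsi: str):
        self.imsi = imsi

    def attach(self, tower) -> bool:
        # The phone answers whatever challenge the tower sends. Nothing in
        # this exchange asks the tower to prove it is a legitimate carrier.
        challenge = tower.send_challenge()
        tower.receive_response(self.imsi, response=f"signed({challenge})")
        return True                        # attach succeeds either way

catcher = FakeTower()
Phone(imsi="001010123456789").attach(catcher)
print(catcher.captured)                    # ['001010123456789']
```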

IP intercept systems are used to eavesdrop on what people do on the Internet. Unlike the surveillance that happens at the sites you visit, by companies like Facebook and Google, this surveillance happens at the point where your computer connects to the Internet. Here, someone can eavesdrop on everything you do.

This system also exploits existing vulnerabilities in the underlying Internet communications protocols. Most of the traffic between your computer and the Internet is unencrypted, and what is encrypted is often vulnerable to man-in-the-middle attacks because of insecurities in both the Internet protocols and the encryption protocols that protect it.
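
Certificate pinning is one partial countermeasure to that man-in-the-middle problem: in addition to normal certificate validation, the client refuses any server whose certificate does not match a known fingerprint. Below is a minimal sketch; the host name and the expected fingerprint are placeholders, not real values.

```python
# Minimal certificate-pinning sketch as one mitigation for man-in-the-middle
# attacks on encrypted connections. HOST and EXPECTED_SHA256 are placeholders.
import hashlib
import socket
import ssl

HOST = "example.com"              # placeholder server
EXPECTED_SHA256 = "0000...0000"   # placeholder: known-good certificate digest

ctx = ssl.create_default_context()    # normal CA-chain validation still applies

with socket.create_connection((HOST, 443)) as raw_sock:
    with ctx.wrap_socket(raw_sock, server_hostname=HOST) as tls:
        der_cert = tls.getpeercert(binary_form=True)   # the server's certificate
        fingerprint = hashlib.sha256(der_cert).hexdigest()
        if fingerprint != EXPECTED_SHA256:
            raise ssl.SSLError("server certificate does not match the pin")
        # ...only proceed with the application protocol if the pin matched
```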

There are many other examples. What they all have in common is that they are vulnerabilities in our underlying digital communications systems that allow someone — whether it’s a country’s secret police, a rival national intelligence organization, or a criminal group — to break or bypass what security there is and spy on the users of these systems.

These insecurities exist for two reasons. First, they were designed in an era where computer hardware was expensive and inaccessibility was a reasonable proxy for security. When the mobile phone network was designed, faking a cell tower was an incredibly difficult technical exercise, and it was reasonable to assume that only legitimate cell providers would go to the effort of creating such towers.

At the same time, computers were less powerful and software was much slower, so adding security into the system seemed like a waste of resources. Fast forward to today: computers are cheap and software is fast, and what was impossible only a few decades ago is now easy.

The second reason is that governments use these surveillance capabilities for their own purposes. The FBI has used IMSI-catchers for years to investigate crimes. The NSA uses IP interception systems to collect foreign intelligence. Both of these agencies, as well as their counterparts in other countries, have put pressure on the standards bodies that create these systems to not implement strong security.

Of course, technology isn’t static. With time, things become cheaper and easier. What was once a secret NSA interception program or a secret FBI investigative tool becomes usable by less-capable governments and cybercriminals.

Man-in-the-middle attacks against Internet connections are a common criminal tool to steal credentials from users and hack their accounts.

IMSI-catchers are used by criminals, too. Right now, you can go onto Alibaba.com and buy your own IMSI catcher for under $2,000.

Despite their uses by democratic governments for legitimate purposes, our security would be much better served by fixing these vulnerabilities in our infrastructures.

These systems are used not only by dissidents in totalitarian countries but also by legislators, corporate executives, critical infrastructure providers, and many others in the US and elsewhere.

That we allow people to remain insecure and vulnerable is both wrongheaded and dangerous.

Earlier this month, two American legislators — Senator Ron Wyden and Rep. Ted Lieu — sent a letter to the chairman of the Federal Communications Commission, demanding that he do something about the country’s insecure telecommunications infrastructure.

They pointed out that not only are insecurities rampant in the underlying protocols and systems of the telecommunications infrastructure, but also that the FCC knows about these vulnerabilities and isn’t doing anything to force the telcos to fix them.

Wyden and Lieu make the point that fixing these vulnerabilities is a matter of US national security, but it’s also a matter of international human rights. All modern communications technologies are global, and anything the US does to improve its own security will also improve security worldwide.

Yes, it means that the FBI and the NSA will have a harder job spying, but it also means that the world will be a safer and more secure place.

This essay previously appeared on AlJazeera.com.

Some notes on the RAND 0day report

Post Syndicated from Robert Graham original http://blog.erratasec.com/2017/03/some-notes-on-rand-0day-report.html

The RAND Corporation has a research report on the 0day market [ * ]. It’s pretty good. They talked to all the right people. It should be considered the seminal work on the issue. They’ve got the pricing about right ($1 million for a full-chain iPhone exploit, but closer to $100k for others). They’ve got the stats about right (a roughly 5% chance that somebody else will independently discover a given exploit).
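
To make the rediscovery figure concrete, here is back-of-the-envelope arithmetic showing how a roughly 5% per-exploit rate scales across a collection, under two assumptions of mine: that the rate is annual and that rediscoveries are independent. The arsenal sizes are made-up round numbers, not figures from the report.

```python
# Back-of-the-envelope arithmetic only. Assumptions (mine, not RAND's exact
# framing): the ~5% figure is an annual, per-exploit rediscovery rate, and
# rediscoveries are independent. Arsenal sizes are hypothetical.
p = 0.05   # chance a single exploit is independently found within a year

for arsenal_size in (1, 10, 50, 100):
    at_least_one = 1 - (1 - p) ** arsenal_size
    expected = p * arsenal_size
    print(f"{arsenal_size:>3} exploits: "
          f"P(at least one rediscovered) = {at_least_one:.0%}, "
          f"expected rediscoveries = {expected:.1f}")
```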

Yet they’ve got some problems, namely phrasing the debate the way activists want rather than presenting a neutral view of it.

The report frequently uses the word “stockpile”. This is a biased term used by activists. According to the dictionary, it means:

a large accumulated stock of goods or materials, especially one held in reserve for use at a time of shortage or other emergency.

Activists paint the picture that the government (NSA, CIA, DoD, FBI) buys 0day to hold in reserve in case they later need them. If that’s the case, then it seems reasonable that it’s better to disclose/patch the vuln than let it grow moldy in a cyberwarehouse somewhere.

But that’s not how things work. The government buys vulns it has immediate use for (primarily). Almost all vulns it buys are used within 6 months. Most vulns in its “stockpile” have been used in the previous year. These cyberweapons are not in a warehouse, but in active use on the front lines.

This is top secret, of course, so people assume it’s not happening. They hear about no cyber operations (except Stuxnet), so they assume such operations aren’t occurring. Thus, they build up the stockpiling assumption rather than the active use assumption.

If the RAND wanted to create an even more useful survey, they should figure out how many thousands of times per day our government (NSA, CIA, DoD, FBI) exploits 0days. They should characterize who they target (e.g. terrorists, child pornographers), success rate, and how many people they’ve killed based on 0days. It’s this data, not patching, that is at the root of the policy debate.

That 0days are actively used determines pricing. If the government doesn’t have immediate need for a vuln, it won’t pay much for it, if anything at all. Conversely, if the government has urgent need for a vuln, it’ll pay a lot.

Let’s say you have a remote vuln for Samsung TVs. You go to the NSA and offer it to them. They tell you they aren’t interested, because they see no near term need for it. Then a year later, spies reveal ISIS has stolen a truckload of Samsung TVs, put them in all the meeting rooms, and hooked them to the Internet for video conferencing. The NSA then comes back to you and offers $500k for the vuln.

Likewise, the number of sellers affects the price. If you know they desperately need the Samsung TV 0day, but they are only offering $100k, then it likely means that there’s another seller also offering such a vuln.

That’s why iPhone vulns are worth $1 million for a full chain exploit, from browser to persistence. They use them a lot; they’re a major part of ongoing cyber operations. Each time Apple upgrades iOS, the change breaks part of the existing chain, and the government is keen on getting a new exploit to fix it. They’ll pay a lot to the first vuln seller who can give them a new exploit.
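
A schematic way to see why a full chain is so valuable and so fragile: every stage has to keep working, so one OS change that breaks a single stage takes the whole chain down until that stage is replaced. The stage names below are generic placeholders, not a description of any real exploit.

```python
# Schematic illustration only: generic stage names, not any real exploit.
# A full chain is operational only while every stage still works.
chain = {
    "browser exploit": True,
    "sandbox escape": True,
    "kernel privilege escalation": True,
    "persistence": True,
}

def chain_works(stages: dict) -> bool:
    return all(stages.values())

print(chain_works(chain))        # True: every stage works, chain is usable

chain["sandbox escape"] = False  # an OS update breaks one stage
print(chain_works(chain))        # False: the whole chain is dead until replaced
```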

Thus, there are three prices the government is willing to pay for an 0day (the value it provides to the government):

  • the price for an 0day they will actively use right now (high)
  • the price for an 0day they’ll stockpile for possible use in the future (low)
  • the price for an 0day they’ll disclose to the vendor to patch (very low)

That these are different prices is important to the policy debate. When activists claim the government should disclose the 0day they acquire, they are ignoring the price the 0day was acquired for. Since the government actively uses the 0day, they are acquired for a high price, with their “use” value far higher than their “patch” value. It’s an absurd argument to make that the government should then immediately discard that money, to pay “use value” prices for “patch” results.
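
To put made-up numbers on that argument: if an exploit is worth far more to the buyer in active use than as a disclosure, a mandatory disclose-and-patch rule turns every purchase into a loss, and the predictable response is to stop buying. The figures below are hypothetical, not from this post or the RAND report.

```python
# Hypothetical figures only, not from the post or the RAND report.
use_value   = 1_000_000   # what the buyer pays for an exploit it will actively use
patch_value = 50_000      # what the same exploit is worth to the buyer as a patch

net_if_forced_to_disclose = patch_value - use_value
print(net_if_forced_to_disclose)   # -950000 per exploit: the rational move is to stop buying
```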

If the policy becomes that the NSA/CIA should disclose/patch the 0day they buy, it doesn’t mean business as usual for acquiring vulns. It instead means they’ll stop buying 0day.

In other words, “patching 0day” is not an outcome on either side of the debate. Either the government buys 0day to use, or it stops buying 0day. In neither case does patching happen.

The real argument is whether the government (NSA, CIA, DoD, FBI) should be acquiring, weaponizing, and using 0day in the first place. The disclose-everything position demands that we unilaterally disarm our military, intelligence, and law enforcement, preventing them from using 0days against our adversaries while our adversaries continue to use 0days against us.

That’s the gaping hole in both the RAND paper and most news reporting of this controversy. They characterize the debate the way activists want, as if the only question is the value of patching. They avoid talking about unilateral cyberdisarmament, even though that’s the consequence of the policy they are advocating. They avoid comparing the value of 0days to our country for active use (high) with their value to our country for patching (very low).

Conclusion

It’s nice that the RAND paper studied the value of patching and confirmed it’s low, that only around 5% of our cyber-arsenal is likely to be found by others. But it’d be nice if they also looked at the point of view of those actively using 0days on a daily basis, rather than phrasing the debate the way activists want.

Adm. Rogers Talks about Buying Cyberweapons

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2017/02/adm_rogers_talk.html

At a talk last week, the head of US Cyber Command and the NSA, Adm. Mike Rogers, talked about the US buying cyberweapons from arms manufacturers.

“In the application of kinetic functionality — weapons — we go to the private sector and say, ‘Build this thing we call a [Joint Direct Attack Munition], a [Tomahawk land-attack missile].’ Fill in the blank,” he said.

“On the offensive side, to date, we have done almost all of our weapons development internally. And part of me goes — five to ten years from now is that a long-term sustainable model? Does that enable you to access fully the capabilities resident in the private sector? I’m still trying to work my way through that, intellectually.”

Businesses already flog exploits, security vulnerability details, spyware, and similar stuff to US intelligence agencies, and Rogers is clearly considering stepping that trade up a notch.

Already, Third World countries are buying from cyberweapons arms manufacturers. My guess is that he’s right and the US will be doing that in the future, too.

Is WhatsApp Hacked?

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2016/10/is_whatsapp_hac.html

Forbes is reporting that the Israeli cyberweapons arms manufacturer Wintego has a man-in-the-middle exploit against WhatsApp.

It’s a weird story. I’m not sure how they do it, but something doesn’t sound right.

Another possibility is that CatchApp is malware thrust onto a device over Wi-Fi that specifically targets WhatsApp. But it’s almost certain the product cannot crack the latest standard of WhatsApp cryptography, said Matthew Green, a cryptography expert and assistant professor at the Johns Hopkins Information Security Institute. Green, who has been impressed by the quality of the Signal code, added: “They would have to defeat both the encryption to and from the server and the end-to-end Signal encryption. That does not seem feasible at all, even with a Wi-Fi access point.

“I would bet mundanely the password stuff is just plain phishing. You go to some site, it asks for your Google account, you type it in without looking closely at the address bar.

“But the WhatsApp stuff manifestly should not be vulnerable like that. Interesting.”

Neither WhatsApp nor the crypto whizz behind Signal, Moxie Marlinspike, were happy to comment unless more specific details were revealed about the tool’s capability. Either Wintego is embellishing what its real capability is, or it has a set of exploits that the rest of the world doesn’t yet know about.

Leaked Product Demo from RCS Labs

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2016/09/leaked_product_.html

We have a leak from yet another cyberweapons arms manufacturer: the Italian company RCS Labs. Vice Motherboard reports on a surveillance video demo:

The video shows an RCS Lab employee performing a live demo of the company’s spyware to an unidentified man, including a tutorial on how to use the spyware’s control software to perform a man-in-the-middle attack and infect the computer of a target who wanted to visit a specific website.

RCS Lab’s spyware, called Mito3, allows agents to easily set up these kind of attacks just by applying a rule in the software settings. An agent can choose whatever site he or she wants to use as a vector, click on a dropdown menu and select “inject HTML” to force the malicious popup to appear, according to the video.

Mito3 allows customers to listen in on the target, intercept voice calls, text messages, video calls, social media activities, and chats, apparently both on computer and mobile platforms. It also allows police to track the target and geo-locate it thanks to the GPS. It even offers automatic transcription of the recordings, according to a confidential brochure obtained by Motherboard.

Slashdot thread.

NSO Group

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2016/08/nso_group.html

We’re starting to see some information on the Israeli cyberweapons arms manufacturer that sold the iPhone zero-day exploit to the United Arab Emirates so they could spy on human rights defenders.

EDITED TO ADD (9/1): There is criticism in the comments about me calling NSO Group an Israeli company. I was just repeating the news articles, but further research indicates that it is Israeli-founded and Israeli-based, but 100% owned by an American private equity firm.