Tag Archives: attribution

Nation-State Espionage Campaigns against Middle East Defense Contractors

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2020/06/nation-state_es.html

Report on espionage attacks using LinkedIn as a vector for malware, with details and screenshots. They talk about “several hints suggesting a possible link” to the Lazarus group (aka North Korea), but that’s by no means definite.

As part of the initial compromise phase, the Operation In(ter)ception attackers had created fake LinkedIn accounts posing as HR representatives of well-known companies in the aerospace and defense industries. In our investigation, we’ve seen profiles impersonating Collins Aerospace (formerly Rockwell Collins) and General Dynamics, both major US corporations in the field.

Detailed report.

Identifying and Arresting Ransomware Criminals

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2019/11/identifying_and.html

The Wall Street Journal has a story about how two people were identified as the perpetrators of a ransomware scheme. They were found because — as generally happens — they made mistakes covering their tracks. They were investigated because they had the bad luck of locking up Washington, DC’s video surveillance cameras a week before the 2017 inauguration.

EDITED TO ADD (11/13): Link without a paywall.

New Reductor Nation-State Malware Compromises TLS

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2019/10/new_reductor_na.html

Kaspersky has a detailed blog post about a new piece of sophisticated malware that it’s calling Reductor. The malware compromises TLS traffic by substituting a hacked TLS engine on the infected computer on the fly, “marking” infected TLS handshakes by compromising the underlying random-number generator, and adding new digital certificates. The result is that the attacker can identify, intercept, and decrypt TLS traffic from the infected computer.
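
Mechanically, that “marking” amounts to replacing the random bytes in each TLS ClientHello with values the attacker can later recognize on the wire. Here is a minimal sketch of the idea in Python; the HMAC construction, key, and field sizes are my illustrative assumptions, not details recovered from the Reductor samples:

    # Hypothetical sketch: a patched PRNG emits a recognizable "client random".
    import hmac, hashlib, os

    MARK_KEY = b"attacker-shared-secret"   # assumed known to passive sensors

    def marked_client_random(victim_id: bytes) -> bytes:
        """32 bytes that look random but carry a recoverable victim tag."""
        tag = hmac.new(MARK_KEY, victim_id, hashlib.sha256).digest()[:8]
        return tag + os.urandom(24)        # 8-byte tag + 24 random filler bytes

    def is_marked(client_random: bytes, victim_id: bytes) -> bool:
        """What an attacker's network sensor would compute per suspect host."""
        expected = hmac.new(MARK_KEY, victim_id, hashlib.sha256).digest()[:8]
        return hmac.compare_digest(client_random[:8], expected)

A passive observer who knows the key can then pick an infected host’s handshakes out of bulk traffic without decrypting anything, which matches Kaspersky’s description of marking rather than man-in-the-middling.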

The Kaspersky Attribution Engine shows strong code similarities between this family and the COMPfun Trojan. Moreover, further research showed that the original COMpfun Trojan most probably is used as a downloader in one of the distribution schemes. Based on these similarities, we’re quite sure the new malware was developed by the COMPfun authors.

The COMpfun malware was initially documented by G-DATA in 2014. Although G-DATA didn’t identify which actor was using this malware, Kaspersky tentatively linked it to the Turla APT, based on the victimology. Our telemetry indicates that the current campaign using Reductor started at the end of April 2019 and remained active at the time of writing (August 2019). We identified targets in Russia and Belarus.

[…]

Turla has in the past shown many innovative ways to accomplish its goals, such as using hijacked satellite infrastructure. This time, if we’re right that Turla is the actor behind this new wave of attacks, then with Reductor it has implemented a very interesting way to mark a host’s encrypted TLS traffic by patching the browser without parsing network packets. The victimology for this new campaign aligns with previous Turla interests.

We didn’t observe any MitM functionality in the analyzed malware samples. However, Reductor is able to install digital certificates and mark the targets’ TLS traffic. It uses infected installers for initial infection through HTTP downloads from warez websites. The fact the original files on these sites are not infected also points to evidence of subsequent traffic manipulation.

The attribution chain from Reductor to COMPfun to Turla is thin. Speculation is that the attacker behind all of this is Russia.

WhatsApp Vulnerability Fixed

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2019/05/whatsapp_vulner_1.html

WhatsApp fixed a devastating vulnerability that allowed someone to remotely hack a phone by initiating a WhatsApp voice call. The recipient didn’t even have to answer the call.

The Israeli cyber-arms manufacturer NSO Group is believed to be behind the exploit, but of course there is no definitive proof.

If you use WhatsApp, update your app immediately.

More on the Triton Malware

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2019/04/more_on_the_tri.html

FireEye is releasing much more information about the Triton malware that attacks critical infrastructure. It has been discovered in more places.

This is also a good — but older — article on Triton. We don’t know who wrote the malware. Initial speculation was Iran; more recent speculation is Russia. Both are still just speculation.

FireEye report. BoingBoing post.

Marriott Hack Reported as Chinese State-Sponsored

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2018/12/marriott_hack_r.html

The New York Times and Reuters are reporting that China was behind the recent hack of Marriott Hotels. Note that this is still unconfirmed, but it’s interesting if true.

Reuters:

Private investigators looking into the breach have found hacking tools, techniques and procedures previously used in attacks attributed to Chinese hackers, said three sources who were not authorized to discuss the company’s private probe into the attack.

That suggests that Chinese hackers may have been behind a campaign designed to collect information for use in Beijing’s espionage efforts and not for financial gain, two of the sources said.

While China has emerged as the lead suspect in the case, the sources cautioned it was possible somebody else was behind the hack because other parties had access to the same hacking tools, some of which have previously been posted online.

Identifying the culprit is further complicated by the fact that investigators suspect multiple hacking groups may have simultaneously been inside Starwood’s computer networks since 2014, said one of the sources.

I used to have opinions about whether these attributions are true or not. These days, I tend to wait and see.

Chinese Supply Chain Hardware Attack

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2018/10/chinese_supply_.html

Bloomberg is reporting on a Chinese espionage operation involving the insertion of a tiny chip into computer products made in China.

I’ve written about (alternate link) this threat more generally. Supply-chain security is an insurmountably hard problem. Our IT industry is inexorably international, and anyone involved in the process can subvert the security of the end product. No one wants to even think about a US-only anything; prices would multiply many times over.

We cannot trust anyone, yet we have no choice but to trust everyone. No one is ready for the costs that solving this would entail.

EDITED TO ADD: Apple, Amazon, and others are denying that this attack is real. Stay tuned for more information.

EDITED TO ADD (9/6): TheGrugq comments. Bottom line is that we still don’t know. I think that precisely exemplifies the greater problem.

EDITED TO ADD (10/7): Both the US Department of Homeland Security and the UK National Cyber Security Centre claim to believe the tech companies. Bloomberg is standing by its story. Nicholas Weaver writes that the story is plausible.

Russians Hacked the Olympics

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2018/03/russians_hacked.html

Two weeks ago, I blogged about the myriad hacking threats against the Olympics. Last week, the Washington Post reported that Russia hacked the Olympics network and tried to cast the blame on North Korea.

Of course, the evidence is classified, so there’s no way to verify this claim. And while the article speculates that the hacks were a retaliation for Russia being banned due to doping, that doesn’t ring true to me. If they tried to blame North Korea, it’s more likely that they’re trying to disrupt something between North Korea, South Korea, and the US. But I don’t know.

Internet Security Threats at the Olympics

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2018/02/internet_securi.html

There are a lot:

The cybersecurity company McAfee recently uncovered a cyber operation, dubbed Operation GoldDragon, attacking South Korean organizations related to the Winter Olympics. McAfee believes the attack came from a nation state that speaks Korean, although it has no definitive proof that this is a North Korean operation. The victim organizations include ice hockey teams, ski suppliers, ski resorts, tourist organizations in Pyeongchang, and departments organizing the Pyeongchang Olympics.

Meanwhile, a Russia-linked cyber attack has already stolen and leaked documents from other Olympic organizations. The so-called Fancy Bear group, or APT28, began its operations in late 2017 — according to Trend Micro and ThreatConnect, two private cybersecurity firms — eventually publishing documents in 2018 outlining the political tensions between IOC officials and World Anti-Doping Agency (WADA) officials who are policing Olympic athletes. It also released documents specifying exceptions to anti-doping regulations granted to specific athletes (for instance, one athlete was given an exception because of his asthma medication). The most recent Fancy Bear leak exposed details about a Canadian pole vaulter’s positive results for cocaine. This group has targeted WADA in the past, specifically during the 2016 Rio de Janeiro Olympics. Assuming the attribution is right, the action appears to be Russian retaliation for the punitive steps against Russia.

A senior analyst at McAfee warned that the Olympics may experience more cyber attacks before closing ceremonies. A researcher at ThreatConnect asserted that organizations like Fancy Bear have no reason to stop operations just because they’ve already stolen and released documents. Even the United States Department of Homeland Security has issued a notice to those traveling to South Korea to remind them to protect themselves against cyber risks.

One presumes the Olympics network is sufficiently protected against the more pedestrian DDoS attacks and the like, but who knows?

EDITED TO ADD: There was already one attack.

The problematic Wannacry North Korea attribution

Post Syndicated from Robert Graham original http://blog.erratasec.com/2018/01/the-problematic-wannacry-north-korea.html

Last month, the US government officially “attributed” the Wannacry ransomware worm to North Korea. This attribution has three flaws, which are a good lesson for attribution in general.

It was an accident

The most important fact about Wannacry is that it was an accident. We’ve had 30 years of experience with Internet worms teaching us that worms are always accidents. While launching worms may be intentional, their effects cannot be predicted. While they appear to have targets, like Slammer against South Korea, or Witty against the Pentagon, further analysis shows this was just a random effect that was impossible to predict ahead of time. Only in hindsight are these effects explainable.
We should hold those causing accidents accountable, too, but it’s a different accountability. The U.S. has caused more civilian deaths in its War on Terror than the terrorists caused in triggering that war. But we hold these to be morally different: the terrorists targeted the innocent, whereas the U.S. takes great pains to avoid civilian casualties.
Since we are talking about blaming those responsible for accidents, we also must include the NSA in that mix. The NSA created, then allowed the release of, weaponized exploits. That’s like accidentally dropping a load of unexploded bombs near a village. When those bombs are then used, those who lost the weapons are held guilty along with those using them. Yes, we should blame the hacker who added ETERNALBLUE to their ransomware, but we should also blame the NSA for losing control of ETERNALBLUE.

A country and its assets are different

Was it North Korea, or hackers affiliated with North Korea? These aren’t the same.

It’s hard for North Korea to have hackers of its own. It doesn’t have citizens who grow up with computers to pick from. Moreover, an internal hacking corps would create tainted citizens exposed to dangerous outside ideas. Update: Some people have pointed out that Kim Il-sung University in the capital does have some contact with the outside world, with academics granted limited Internet access, so I guess some tainting is allowed. Still, what we know of North Korea’s hacking efforts largely comes from hackers it employs outside North Korea. It was the Lazarus Group, outside North Korea, that did Wannacry.
Instead, North Korea develops external hacking “assets”, supporting several external hacking groups in China, Japan, and South Korea. This is similar to how intelligence agencies develop human “assets” in foreign countries. While these assets do things for their handlers, they also have normal day jobs, and do many things that are wholly independent and even sometimes against their handler’s interests.
For example, this Muckrock FOIA dump shows how “CIA assets” independently worked for Castro and assassinated a Panamanian president. That they also worked for the CIA does not make the CIA responsible for the Panamanian assassination.
That CIA/intelligence assets work this way is well-known and uncontroversial. The fact that countries use hacker assets like this is the controversial part. These hackers do act independently, yet we refuse to consider this when we want to “attribute” attacks.

Attribution is political

We have far better attribution for the nPetya attacks. It was less accidental (they clearly desired to disrupt Ukraine), and the hackers were much closer to the Russian government (Russian citizens). Yet the Trump administration isn’t fighting Russia; it is fighting North Korea. So it doesn’t officially attribute nPetya to Russia, but does attribute Wannacry to North Korea.
Trump is in conflict with North Korea. He is looking for ways to escalate the conflict. Attributing Wannacry helps achieve his political objectives.
That this was blatantly political is demonstrated by the way it was released to the press. It wasn’t released in the normal way, where the administration can stand behind it and get challenged on the particulars. Instead, it was pre-released through the usual channel of “anonymous government officials” to the NYTimes, and then backed up with an op-ed in the Wall Street Journal. The government leaks information like this when it’s weak, not when it’s strong.

The proper way is to release the evidence upon which the decision was made, so that the public can challenge it. Among the questions the public would ask is whether they believe it was North Korea’s intention to cause precisely this effect, such as disabling the British NHS. Or whether it was merely hackers “affiliated” with North Korea, rather than hackers carrying out North Korea’s orders. We cannot challenge the government this way because the government intentionally holds itself above such accountability.

Conclusion

We believe hacking groups tied to North Korea are responsible for Wannacry. Yet, even if that’s true, we still have three attribution problems. We still don’t know if that was intentional, in pursuit of some political goal, or an accident. We still don’t know if it was at the direction of North Korea, or whether their hacker assets acted independently. We still don’t know if the government has answers to these questions, or whether it’s exploiting this doubt to achieve political support for actions against North Korea.

Some notes on Trump’s cybersecurity Executive Order

Post Syndicated from Robert Graham original http://blog.erratasec.com/2017/05/some-notes-on-trumps-cybersecurity.html

President Trump has finally signed an executive order on “cybersecurity”. The first draft, from his first weeks in power, was hilariously ignorant. The current draft, though, is pretty reasonable as such things go. I’m just reading the plain language of the draft as a cybersecurity expert, picking out the bits that interest me. In reality, there’s probably all sorts of politics in the background that I’m missing, so I may be wildly off-base.

Holding managers accountable

This is a great idea in theory. But government heads are rarely accountable for anything, so it’s hard to see whether they’ll have the nerve to implement this in practice. When the next breach happens, we’ll see if anybody gets fired.

“antiquated and difficult to defend Information Technology”

The government sometimes uses laughably old computers. Forces in government want to upgrade them. This won’t work. Instead of replacing old computers, the budget will simply be used to add new computers. The old computers will still stick around.

“Legacy” is a problem that money can’t solve. Programmers know how to build small things, but not big things. Everything starts out small, then becomes big gradually over time through constant small additions. What you have now are big legacy systems. Attempts to replace a big system with a built-from-scratch big system will fail, because engineers don’t know how to build big systems. This will suck down any budget you have with failed multi-million-dollar projects.

It’s not the antiquated systems that are usually the problem, but the more modern ones. Antiquated systems can usually be protected by simply sticking a firewall or proxy in front of them.

“address immediate unmet budgetary needs necessary to manage risk”

Nobody cares about cybersecurity. Instead, it’s a thing people exploit in order to increase their budget. Instead of doing the best security with the budget they have, they insist they can’t secure the network without more money.

An alternate way to address gaps in cybersecurity is instead to do less. Reduce exposure to the web, provide fewer services, reduce functionality of desktop computers, and so on. Insisting that more money is the only way to address unmet needs is the strategy of the incompetent.

Use the NIST Framework

Probably the biggest thing in the EO is that it forces everyone to use the NIST cybersecurity framework.

The NIST Framework simply documents all the things that organizations commonly do to secure themselves, such as running intrusion-detection systems or imposing rules for good passwords.

There are two problems with the NIST Framework. The first is that no organization does all the things listed. The second is that many organizations don’t do the things well.

Password rules are a good example. Organizations typically had bad rules, such as frequent changes and complexity standards. So the NIST Framework documented them. But cybersecurity experts have long opposed those complex rules, and have been fighting NIST on them.

Another good example is intrusion-detection. These days, I scan the entire Internet, setting off everyone’s intrusion-detection systems. I can see first-hand that they are doing intrusion-detection wrong. The NIST Framework recommends it because many organizations do it, but it doesn’t demand they do it well.

When this EO forces everyone to follow the NIST Framework, then, it’s likely just going to increase the amount of money spent on cybersecurity without increasing effectiveness. That’s not necessarily a bad thing: while probably ineffective or counterproductive in the short run, there might be long-term benefit in aligning everyone to thinking about the problem the same way.

Note that “following” the NIST Framework doesn’t mean “doing” everything. Instead, it means documenting how you do each thing, the reason why you aren’t doing it, or (most often) your plan to eventually do it.

“preference for shared IT services for email, cloud, and cybersecurity”

Different departments are hostile toward each other, with each doing things its own way. Obviously, the thinking goes, if more departments shared resources, they could cut costs with economies of scale. Also obviously, it’ll stop the many home-grown wrong solutions that individual departments come up with.

In other words, there should be a single government GMail-type service that does e-mail both securely and reliably.

But it won’t turn out this way. Government does not have “economies of scale” but “incompetence at scale”. It means a single GMail-like service that is expensive, unreliable, and, in the end, probably insecure. It means we can look forward to government breaches that instead of affecting one department affect all departments.

Yes, you can point to individual organizations that do things poorly, but what you are ignoring is the organizations that do it well. When you make them all share a solution, it’s going to be the average of all these things — meaning those who do something well are going to move to a worse solution.

I suppose this was inserted in there so that big government cybersecurity companies can now walk into agencies, point to where they are deficient on the NIST Framework, and say “sign here to do this with our shared cybersecurity service”.

“identify authorities and capabilities that agencies could employ to support the cybersecurity efforts of critical infrastructure entities”

What this means is: “How can we help secure the power grid?”

What it means in practice is that fiasco in the Vermont power grid. The DHS produced a report containing IoCs (“indicators of compromise”) of Russian hackers in the DNC hack. Among the things it identified was that the hackers used Yahoo! email. They pushed these IoCs out as signatures in their “Einstein” intrusion-detection system, located at many power-grid locations. The next person who logged into their Yahoo! email was then flagged as a Russian hacker, causing all sorts of hilarity to ensue, such as still-uncorrected stories by the Washington Post about how the Russians hacked our power grid.

The upshot is that federal government help is also going to include much government hindrance. They really are this stupid sometimes, and there is no way to fix this stupid. (Seriously, the DHS still insists it did the right thing pushing out the Yahoo IoCs.)

Resilience Against Botnets and Other Automated, Distributed Threats

The government wants to address botnets because it’s just the sort of problem they love, mass outages across the entire Internet caused by a million machines.

But frankly, botnets don’t even make the top-10 list of problems they should be addressing. Number one is clearly “phishing” — you know, the attack that got into the DNC and Podesta e-mails, influencing the election. You know, the attack that Gizmodo recently showed the Trump administration is partially vulnerable to. You know, the attack that most people blame as what probably led to that huge OPM hack. Replace the entire Executive Order with “stop phishing”, and you’d go further toward fixing federal government security.

But solving phishing is tough. To begin with, it requires a rethink of how the government does email, and of how desktop systems should be managed. So the government avoids complex problems it can’t understand to focus on the simple things it can — botnets.

Dealing with “prolonged power outage associated with a significant cyber incident”

The government has had the hots for this since 2001, even though there’s really been no attack on the American grid. After the Russian attacks against the Ukrainian power grid, the issue is heating up.

Nationwide attacks aren’t really a threat in America, yet. We have 10,000 different companies involved with different systems throughout the country. Hacking them all at once is unlikely. What’s funny is that it’s the government’s attempts to standardize everything that are likely to be our downfall, such as sticking Einstein sensors everywhere.

Instead of trying to make the grid unhackable, the government should be trying to lessen our reliance upon the grid. It should be encouraging things like Tesla PowerWalls, solar panels on roofs, backup generators, and so on. Indeed, rather than being treated only as blackout victims, industrial facilities with backup power generation should be considered a source of grid backup. Factories and even ships were used to supplant the electric power grid in Japan after the 2011 tsunami, for example. The less we rely on the grid, the less a blackout will hurt us.

“cybersecurity risks facing the defense industrial base, including its supply chain”

So “supply chain” cybersecurity is increasingly becoming a thing. Almost anything electronic comes with millions of lines of code, silicon chips, and other things that affect the security of the system. In this context, they may be worried about intentional subversion of systems, such as that recent article worrying about Kaspersky anti-virus in government systems. However, the bigger concern is the zillions of accidental vulnerabilities waiting to be discovered. It’s impractical for a vendor to secure a product, because it’s built from so many components the vendor doesn’t understand.

“strategic options for deterring adversaries and better protecting the American people from cyber threats”

Deterrence is a funny word.

Rumor has it that we forced China to back off on hacking by impressing them with our own hacking ability, such as reaching into China and blowing stuff up. This works because the Chinese government remains in power only because things are going well in China. If there’s a hiccup in economic growth, there will be mass actions against the government.

But for our other cyber adversaries (Russia, Iran, North Korea), things already suck in their countries. It’s hard to see how we can make things worse by hacking them. They also have a stranglehold on the media, so hacking in and publicizing their leaders’ weird sex fetishes and offshore accounts isn’t going to work either.

Also, deterrence relies upon “attribution”, which is hard. While news stories claim last year’s expulsion of Russian diplomats was due to election hacking, that wasn’t the stated reason. Instead, the claimed reason was Russia’s interference with diplomats in Europe, such as breaking into diplomats’ homes and pooping on their dining room tables. We know it’s them when they are brazen (as was the case with Chinese hacking), but other hacks are harder to attribute.

Deterrence of nation-states ignores the reality that much of the hacking against our government comes from non-state actors. It’s not clear how much of all this Russian hacking is actually directed by the government. Deterrence policies may be better directed at individuals, such as the recent arrest of a Russian hacker while he was traveling in Spain. We can’t get Russian or Chinese hackers in their own countries, so we have to wait until they leave.

Anyway, “deterrence” is one of those real-world concepts that are hard to shoehorn into a cyber (“cyber-deterrence”) equivalent. It encourages lots of bad thinking, such as export controls on “cyber-weapons” to deter foreign countries from using them.

“educate and train the American cybersecurity workforce of the future”

The problem isn’t that we lack CISSPs. Such blanket certifications devalue the technical expertise of the real experts. The solution is to empower the technical experts we already have.

In other words, mandate that whoever is the “cyberczar” is a technical expert, like how the Surgeon General must be a medical expert, or how an economic adviser must be an economic expert. For over 15 years, we’ve had a parade of non-technical people named “cyberczar” who haven’t been experts.

Once you tell people technical expertise is valued, then by nature more students will become technical experts.

BTW, the best technical experts are software engineers and sysadmins. The best cybersecurity for Windows is already built into Windows, and its sysadmins need to be empowered to use those solutions. Instead, they are often overridden by a clueless cybersecurity consultant who insists on making the organization buy a third-party product that does a poorer job. We need more technical expertise in our organizations, sure, but not necessarily more cybersecurity professionals.

Conclusion

This is really a government document, and government people will be able to explain it better than I can. This is just how I see it, as a technical expert and a government outsider.

My guess is that the most lasting, consequential thing will be making everyone follow the NIST Framework, and the rest will just be a lot of aspirational stuff that’ll be ignored.

Who is Publishing NSA and CIA Secrets, and Why?

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2017/05/who_is_publishi.html

There’s something going on inside the intelligence communities in at least two countries, and we have no idea what it is.

Consider these three data points. One: someone, probably a country’s intelligence organization, is dumping massive amounts of cyberattack tools belonging to the NSA onto the Internet. Two: someone else, or maybe the same someone, is doing the same thing to the CIA.

Three: in March, NSA Deputy Director Richard Ledgett described how the NSA penetrated the computer networks of a Russian intelligence agency and was able to monitor them as they attacked the US State Department in 2014. Even more explicitly, a US ally — my guess is the UK — was not only hacking the Russian intelligence agency’s computers, but also the surveillance cameras inside their building. “They [the US ally] monitored the [Russian] hackers as they maneuvered inside the U.S. systems and as they walked in and out of the workspace, and were able to see faces, the officials said.”

Countries don’t often reveal intelligence capabilities: “sources and methods.” Doing so gives their adversaries important information about what to fix, so it’s a deliberate decision, made for good reason. And it’s not just the target country that learns from a reveal. When the US announces that it can see through the cameras inside the buildings of Russia’s cyber warriors, other countries immediately check the security of their own cameras.

With all this in mind, let’s talk about the recent leaks at NSA and the CIA.

Last year, a previously unknown group called the Shadow Brokers started releasing NSA hacking tools and documents from about three years ago. They continued to do so this year — five sets of files in all — and have implied that more classified documents are to come. We don’t know how they got the files. When the Shadow Brokers first emerged, the general consensus was that someone had found and hacked an external NSA staging server. These are third-party computers that the NSA’s TAO hackers use to launch attacks from. Those servers are necessarily stocked with TAO attack tools. This matched the leaks, which included a “script” directory and working attack notes. We’re not sure if someone inside the NSA made a mistake that left these files exposed, or if the hackers that found the cache got lucky.

That explanation stopped making sense after the latest Shadow Brokers release, which included attack tools against Windows, PowerPoint presentations, and operational notes — documents that are definitely not going to be on an external NSA staging server. A credible theory, which I first heard from Nicholas Weaver, is that the Shadow Brokers are publishing NSA data from multiple sources. The first leaks were from an external staging server, but the more recent leaks are from inside the NSA itself.

So what happened? Did someone inside the NSA accidentally mount the wrong server on some external network? That’s possible, but seems very unlikely. Did someone hack the NSA itself? Could there be a mole inside the NSA, as Kevin Poulsen speculated?

If it is a mole, my guess is that he’s already been arrested. There are enough identifying details in the files to pinpoint exactly where and when they came from. Surely the NSA knows who could have taken the files. No country would burn a mole working for it by publishing what he delivered. Intelligence agencies know that if they betray a source this severely, they’ll never get another one.

That points to two options. The first is that the files came from Hal Martin. He’s the NSA contractor who was arrested in August for hoarding agency secrets in his house for two years. He can’t be the publisher, because the Shadow Brokers are in business even though he is in prison. But maybe the leaker got the documents from his stash: either because Martin gave the documents to them or because he himself was hacked. The dates line up, so it’s theoretically possible, but the contents of the documents speak to someone with a different sort of access. There’s also nothing in the public indictment against Martin that speaks to his selling secrets to a foreign power, and I think it’s exactly the sort of thing that the NSA would leak. But maybe I’m wrong about all of this; Occam’s Razor suggests that it’s him.

The other option is a mysterious second NSA leak of cyberattack tools. The only thing I have ever heard about this is from a Washington Post story about Martin: “But there was a second, previously undisclosed breach of cybertools, discovered in the summer of 2015, which was also carried out by a TAO employee, one official said. That individual also has been arrested, but his case has not been made public. The individual is not thought to have shared the material with another country, the official said.” But “not thought to have” is not the same as not having done so.

On the other hand, it’s possible that someone penetrated the internal NSA network. We’ve already seen NSA tools that can do that kind of thing to other networks. That would be huge, and explain why there were calls to fire NSA Director Mike Rogers last year.

The CIA leak is both similar and different. It consists of a series of attack tools from about a year ago. The most educated guess amongst people who know stuff is that the data is from an almost-certainly air-gapped internal development wiki — a Confluence server — and either someone on the inside was somehow coerced into giving up a copy of it, or someone on the outside hacked into the CIA and got themselves a copy. They turned the documents over to WikiLeaks, which continues to publish them.

This is also a really big deal, and hugely damaging for the CIA. Those tools were new, and they’re impressive. I have been told that the CIA is desperately trying to hire coders to replace what was lost.

For both of these leaks, one big question is attribution: who did this? A whistleblower wouldn’t sit on attack tools for years before publishing. A whistleblower would act more like Snowden or Manning, publishing immediately — and publishing documents that discuss what the US is doing to whom, not simply a bunch of attack tools. It just doesn’t make sense. Neither do random hackers. Or cybercriminals. I think it’s being done by a country or countries.

My guess was, and still is, Russia in both cases. Here’s my reasoning. Whoever got this information years before and is leaking it now has to 1) be capable of hacking the NSA and/or the CIA, and 2) be willing to publish it all. Countries like Israel and France are certainly capable, but would never publish. Countries like North Korea or Iran probably aren’t capable. The list of countries that fit both criteria is small: Russia, China, and…and…and I’m out of ideas. And China is currently trying to make nice with the US.

Last August, Edward Snowden guessed Russia, too.

So Russia — or someone else — steals these secrets, and presumably uses them to both defend its own networks and hack other countries while deflecting blame for a couple of years. For it to publish now means that the intelligence value of the information is now lower than the embarrassment value to the NSA and CIA. This could be because the US figured out that its tools were hacked, and maybe even by whom, which would make the tools less valuable against US government targets, although still valuable against third parties.

The message that comes with publishing seems clear to me: “We are so deep into your business that we don’t care if we burn these few-years-old capabilities, as well as the fact that we have them. There’s just nothing you can do about it.” It’s bragging.

Which is exactly the same thing Ledgett is doing to the Russians. Maybe the capabilities he talked about are long gone, so there’s nothing lost in exposing sources and methods. Or maybe he too is bragging: saying to the Russians that he doesn’t care if they know. He’s certainly bragging to every other country that is paying attention to his remarks. (He may be bluffing, of course, hoping to convince others that the US has intelligence capabilities it doesn’t.)

What happens when intelligence agencies go to war with each other and don’t tell the rest of us? I think there’s something going on between the US and Russia that the public is just seeing pieces of. We have no idea why, or where it will go next, and can only speculate.

This essay previously appeared on Lawfare.com.

Why GPL Compliance Education Materials Should Be Free as in Freedom

Post Syndicated from Bradley M. Kuhn original http://ebb.org/bkuhn/blog/2017/04/25/liberate-compliance-tutorials.html

[ This blog was crossposted on Software Freedom Conservancy’s website. ]

I am honored to be a co-author and editor-in-chief of the most comprehensive, detailed, and complete guide on matters related to compliance with copyleft software licenses such as the GPL. This book, Copyleft and the GNU General Public License: A Comprehensive Tutorial and Guide (which we often call the Copyleft Guide for short), is 155 pages filled with useful material to help everyone understand copyleft licenses for software, how they work, and how to comply with them properly. It is the only document to fully incorporate esoteric material such as the FSF’s famous GPLv3 rationale documents directly alongside practical advice, such as the pristine example, which is the only freely published compliance analysis of a real product on the market. The document explains in great detail how that product’s manufacturer made good choices to comply with the GPL. The reader learns by both real-world example and abstract explanation.

However, the most important fact about the Copyleft Guide is not its useful and engaging content. More importantly, the license of this book gives freedom to its readers in the same way the license of copylefted software does. Specifically, we chose the Creative Commons Attribution-ShareAlike 4.0 license (CC BY-SA) for this work. We believe that not just software, but any generally useful technical information that teaches people, should be freely sharable and modifiable by the general public.

The reasons these freedoms are necessary seem so obvious that I’m surprised I need to state them. Companies that want to build internal training courses on copyleft compliance for their employees need to modify the materials for that purpose. They then need to be able to freely distribute them to employees and contractors for maximum effect. Furthermore, as with all documents and software, there are always “bugs”, which (in the case of written prose) usually means there are sections that fail to communicate to maximum effect. Those who find better ways to express the ideas need the ability to propose patches and write improvements. Perhaps most importantly, everyone who teaches should avoid NIH syndrome. Education and science work best when we borrow and share (with proper license-compliant attribution, of course!) the best material that others develop, and augment our works by incorporating it.

These reasons are akin to those that led Richard M. Stallman to write his seminal essay, Why Software Should Be Free. Indeed, if you reread that essay now — as I just did — you’ll see that much of the damage and many of the problems to the advancement of software that RMS documents in that essay also occur in the world of tutorial documentation about FLOSS licensing. As too often happens in the Open Source community, though, folks seek ways to proprietarize, for profit, any copyrighted work that doesn’t already have a copyleft license attached. In the field of copyleft compliance education, we see the same behavior: organizations that wish to control the dialogue and profit from selling compliance education seek to proprietarize the meta-material of compliance education, rather than sharing it freely like the software itself. This yields an ironic exploitation, since the copyleft license documented therein exists as a strategy to assure the freedom to share knowledge. These educators tell their audiences with a straight face: Sure, the software is free as in freedom, but if you want to learn how its license works, you have to license our proprietary materials! This behavior uses legal controls to curtail the sharing of knowledge, limits the advancement and improvement of those tutorials, and emboldens silos of know-how that only wealthy corporations have the resources to access and afford. The educational dystopia that these organizations create is precisely what I sought to prevent by advocating for software freedom for so long.

While Conservancy’s primary job is providing non-profit infrastructure for Free Software projects, we also do a bit of license compliance work. But we practice what we preach: we release all the educational materials that we produce as part of the Copyleft Guide project under CC BY-SA. Other Open Source organizations are currently hypocrites on this point; they tout the values of openness and sharing of knowledge through software, but they take their tutorial materials and lock them up under proprietary licenses. I hereby publicly call on such organizations (including but not limited to the Linux Foundation) to license materials such as those under CC BY-SA.

I did not make this public call for liberation of such materials without first trying friendly diplomacy. Conservancy has been in talks with individuals and staff who produce these materials for some time. We urged them to join the Free Software community and share their materials under free licenses. We even offered volunteer time to help them improve those materials if they would simply license them freely. After two years of that effort, it’s now abundantly clear that public pressure is the only force that might work [0]. Ultimately, like all proprietary businesses, the training divisions of the Linux Foundation and other entities in the compliance industrial complex (such as Black Duck) realize they can make much more revenue by making materials proprietary and choosing legal restrictions that forbid their students from sharing and improving the materials after they complete the course. While the reality of this impasse regarding freely licensing these materials is probably an obvious outcome, multiple sources inside these organizations have also confirmed for me that liberation of the materials for the good of the general public won’t happen without a major paradigm shift — specifically because such educational freedom would reduce the revenue stream around those materials.

Of course, I can attest first-hand that freely liberating tutorial materials curtails revenue. Karen Sandler and I have regularly taught courses on copyleft licensing based on the freely available materials for a few years — most recently in January 2017 at LinuxConf Australia, and again at OSCON in a few weeks. These conferences do kindly cover our travel expenses to attend and teach the tutorial, but compliance education is not a revenue stream for Conservancy. While, in an ideal world, we’d get revenue from education to fund our other important activities, we believe that there is value in doing this education as currently funded by our individual Supporters; these education efforts fit with our charitable mission to promote the public good. We furthermore don’t believe that locking up the materials and refusing to share them with others fits a mission of software freedom, so we never considered that a viable option. Finally, given the institutionally-backed FUD that we’ve continued to witness, we seek to draw specific attention to the fundamental difference in the approach that Conservancy (as a charity) takes toward this compliance education work. (My recent talk on compliance, covered on LWN, includes some points on that matter, if you’d like further reading.)


[0] One notable exception to these efforts was the success of my colleague, Karen Sandler (and others), in convincing the OpenChain project to choose CC-0 licensing. However, OpenChain is not officially part of the LF training curriculum to my knowledge, and if it is, it can of course be proprietarized therein, since CC-0 is not a copyleft license.

Incident Response as "Hand-to-Hand Combat"

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2017/04/incident_respon_1.html

NSA Deputy Director Richard Ledgett described a 2014 Russian cyberattack against the US State Department as “hand-to-hand” combat:

“It was hand-to-hand combat,” said NSA Deputy Director Richard Ledgett, who described the incident at a recent cyber forum, but did not name the nation behind it. The culprit was identified by other current and former officials. Ledgett said the attackers’ thrust-and-parry moves inside the network while defenders were trying to kick them out amounted to “a new level of interaction between a cyber attacker and a defender.”

[…]

Fortunately, Ledgett said, the NSA, whose hackers penetrate foreign adversaries’ systems to glean intelligence, was able to spy on the attackers’ tools and tactics. “So we were able to see them teeing up new things to do,” Ledgett said. “That’s a really useful capability to have.”

I think this is the first public admission that we spy on foreign governments’ cyberwarriors for defensive purposes. He’s right: being able to spy on the attackers’ networks and see what they’re doing before they do it is a very useful capability. It’s something that was first exposed by the Snowden documents: that the NSA spies on enemy networks for defensive purposes.

What’s interesting is that another country first found out about the intrusion, and that it also has offensive capabilities inside Russia’s cyberattack units:

The NSA was alerted to the compromises by a Western intelligence agency. The ally had managed to hack not only the Russians’ computers, but also the surveillance cameras inside their workspace, according to the former officials. They monitored the hackers as they maneuvered inside the U.S. systems and as they walked in and out of the workspace, and were able to see faces, the officials said.

There’s a myth that it’s hard for the US to attribute these sorts of cyberattacks. It used to be, but for the US — and other countries with these kinds of intelligence-gathering capabilities — attribution is not hard. It’s not fast, which is its own problem, and of course it’s not perfect: but it’s not hard.

The CIA’s "Development Tradecraft DOs and DON’Ts"

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2017/03/the_cias_develo.html

Useful best practices for malware writers, courtesy of the CIA. Seems like a lot of good advice.

General:

  • DO obfuscate or encrypt all strings and configuration data that directly relate to tool functionality. Consideration should be made to also only de-obfuscating strings in-memory at the moment the data is needed. When a previously de-obfuscated value is no longer needed, it should be wiped from memory.

    Rationale: String data and/or configuration data is very useful to analysts and reverse-engineers.

  • DO NOT decrypt or de-obfuscate all string data or configuration data immediately upon execution.

    Rationale: Raises the difficulty for automated dynamic analysis of the binary to find sensitive data.

  • DO explicitly remove sensitive data (encryption keys, raw collection data, shellcode, uploaded modules, etc) from memory as soon as the data is no longer needed in plain-text form. DO NOT RELY ON THE OPERATING SYSTEM TO DO THIS UPON TERMINATION OF EXECUTION.

    Rationale: Raises the difficulty for incident response and forensics review.

  • DO utilize a deployment-time unique key for obfuscation/de-obfuscation of sensitive strings and configuration data.

    Rationale: Raises the difficulty of analysis of multiple deployments of the same tool.

  • DO strip all debug symbol information, manifests(MSVC artifact), build paths, developer usernames from the final build of a binary.

    Rationale: Raises the difficulty for analysis and reverse-engineering, and removes artifacts used for attribution/origination.

  • DO strip all debugging output (e.g. calls to printf(), OutputDebugString(), etc) from the final build of a tool.

    Rationale: Raises the difficulty for analysis and reverse-engineering.

  • DO NOT explicitly import/call functions that is not consistent with a tool’s overt functionality (i.e. WriteProcessMemory, VirtualAlloc, CreateRemoteThread, etc – for binary that is supposed to be a notepad replacement).

    Rationale: Lowers potential scrutiny of binary and slightly raises the difficulty for static analysis and reverse-engineering.

  • DO NOT export sensitive function names; if having exports are required for the binary, utilize an ordinal or a benign function name.

    Rationale: Raises the difficulty for analysis and reverse-engineering.

  • DO NOT generate crashdump files, coredump files, “Blue” screens, Dr Watson or other dialog pop-ups and/or other artifacts in the event of a program crash. DO attempt to force a program crash during unit testing in order to properly verify this.

    Rationale: Avoids suspicion by the end user and system admins, and raises the difficulty for incident response and reverse-engineering.

  • DO NOT perform operations that will cause the target computer to be unresponsive to the user (e.g. CPU spikes, screen flashes, screen “freezing”, etc).

    Rationale: Avoids unwanted attention from the user or system administrator to tool’s existence and behavior.

  • DO make all reasonable efforts to minimize binary file size for all binaries that will be uploaded to a remote target (without the use of packers or compression). Ideal binary file sizes should be under 150KB for a fully featured tool.

    Rationale: Shortens overall “time on air” not only to get the tool on target, but to time to execute functionality and clean-up.

  • DO provide a means to completely “uninstall”/”remove” implants, function hooks, injected threads, dropped files, registry keys, services, forked processes, etc whenever possible. Explicitly document (even if the documentation is “There is no uninstall for this “) the procedures, permissions required and side effects of removal.

    Rationale: Avoids unwanted data left on target. Also, proper documentation allows operators to make better operational risk assessment and fully understand the implications of using a tool or specific feature of a tool.

  • DO NOT leave dates/times such as compile timestamps, linker timestamps, build times, access times, etc. that correlate to general US core working hours (i.e. 8am-6pm Eastern time)

    Rationale: Avoids direct correlation to origination in the United States.

  • DO NOT leave data in a binary file that demonstrates CIA, USG, or its witting partner companies involvement in the creation or use of the binary/tool.

    Rationale: Attribution of binary/tool/etc by an adversary can cause irreversible impacts to past, present and future USG operations and equities.

  • DO NOT have data that contains CIA and USG cover terms, compartments, operation code names or other CIA and USG specific terminology in the binary.

    Rationale: Attribution of binary/tool/etc by an adversary can cause irreversible impacts to past, present and future USG operations and equities.

  • DO NOT have “dirty words” (see dirty word list – TBD) in the binary.

    Rationale: Dirty words, such as hacker terms, may cause unwarranted scrutiny of the binary file in question.
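
The string-handling items above (obfuscate at rest, de-obfuscate in memory only when needed, wipe afterwards, per-deployment keys) are concrete enough to sketch. The following is my own illustration, not code from the leak, and Python cannot truly guarantee that no stray copies of a secret remain in memory; a real tool would do this in C:

    import os

    DEPLOY_KEY = os.urandom(16)  # stand-in for a key baked in per deployment

    def obfuscate(s: str, key: bytes = DEPLOY_KEY) -> bytes:
        """XOR-obfuscate a string for storage in the binary's data section."""
        return bytes(b ^ key[i % len(key)] for i, b in enumerate(s.encode()))

    def with_secret(blob: bytes, use, key: bytes = DEPLOY_KEY) -> None:
        """De-obfuscate into a mutable buffer, use it, then zero the buffer."""
        buf = bytearray(b ^ key[i % len(key)] for i, b in enumerate(blob))
        try:
            use(buf)                   # plaintext exists only inside this call
        finally:
            for i in range(len(buf)):  # wipe the buffer before releasing it
                buf[i] = 0

    # Hypothetical usage: the C2 hostname never sits in the clear at rest.
    c2 = obfuscate("c2.example.invalid")
    with_secret(c2, lambda host: print(bytes(host)))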

Networking:

  • DO use end-to-end encryption for all network communications. NEVER use networking protocols which break the end-to-end principle with respect to encryption of payloads.

    Rationale: Stifles network traffic analysis and avoids exposing operational/collection data.

  • DO NOT solely rely on SSL/TLS to secure data in transit.

    Rationale: Numerous man-in-middle attack vectors and publicly disclosed flaws in the protocol.

  • DO NOT allow network traffic, such as C2 packets, to be re-playable.

    Rationale: Protects the integrity of operational equities.

  • DO use ITEF RFC compliant network protocols as a blending layer. The actual data, which must be encrypted in transit across the network, should be tunneled through a well known and standardized protocol (e.g. HTTPS)

    Rationale: Custom protocols can stand-out to network analysts and IDS filters.

  • DO NOT break compliance of an RFC protocol that is being used as a blending layer. (i.e. Wireshark should not flag the traffic as being broken or mangled)

    Rationale: Broken network protocols can easily stand-out in IDS filters and network analysis.

  • DO use variable size and timing (aka jitter) of beacons/network communications. DO NOT predicatively send packets with a fixed size and timing.

    Rationale: Raises the difficulty of network analysis and correlation of network activity.

  • DO proper cleanup of network connections. DO NOT leave around stale network connections.

    Rationale: Raises the difficulty of network analysis and incident response.
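
The beacon-jitter advice is equally easy to illustrate. A minimal sketch (mine; the interval and size bounds are arbitrary assumptions) that varies both the timing and the padded size of each check-in:

    import os, random

    BASE_INTERVAL = 300   # nominal seconds between beacons (assumed value)
    JITTER = 0.4          # vary the timing by +/- 40%

    def next_sleep() -> float:
        """Randomized delay, so analysis can't key on a fixed period."""
        return BASE_INTERVAL * random.uniform(1 - JITTER, 1 + JITTER)

    def pad(payload: bytes, min_size: int = 256, max_size: int = 1024) -> bytes:
        """Length-prefix the payload and pad it to a random total size."""
        assert len(payload) + 2 <= max_size, "payload too large for this sketch"
        target = random.randint(max(min_size, len(payload) + 2), max_size)
        filler = os.urandom(target - len(payload) - 2)
        return len(payload).to_bytes(2, "big") + payload + filler

    def unpad(blob: bytes) -> bytes:
        """Recover the payload from a padded message."""
        n = int.from_bytes(blob[:2], "big")
        return blob[2:2 + n]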

Disk I/O:

  • DO explicitly document the “disk forensic footprint” that could be potentially created by various features of a binary/tool on a remote target.

    Rationale: Enables better operational risk assessments with knowledge of potential file system forensic artifacts.

  • DO NOT read, write and/or cache data to disk unnecessarily. Be cognizant of 3rd party code that may implicitly write/cache data to disk.

    Rationale: Lowers potential for forensic artifacts and potential signatures.

  • DO NOT write plain-text collection data to disk.

    Rationale: Raises difficulty of incident response and forensic analysis.

  • DO encrypt all data written to disk.

    Rationale: Disguises intent of file (collection, sensitive code, etc) and raises difficulty of forensic analysis and incident response.

  • DO utilize a secure erase when removing a file from disk that wipes at a minimum the file’s filename, datetime stamps (create, modify and access) and its content. (Note: The definition of “secure erase” varies from filesystem to filesystem, but at least a single pass of zeros of the data should be performed. The emphasis here is on removing all filesystem artifacts that could be useful during forensic analysis)

    Rationale: Raises difficulty of incident response and forensic analysis.

  • DO NOT perform Disk I/O operations that will cause the system to become unresponsive to the user or alerting to a System Administrator.

    Rationale: Avoids unwanted attention from the user or system administrator to tool’s existence and behavior.

  • DO NOT use a “magic header/footer” for encrypted files written to disk. All encrypted files should be completely opaque data files.

    Rationale: Avoids signature of custom file format’s magic values.

  • DO NOT use hard-coded filenames or filepaths when writing files to disk. This must be configurable at deployment time by the operator.

    Rationale: Allows operator to choose the proper filename that fits with in the operational target.

  • DO have a configurable maximum size limit and/or output file count for writing encrypted output files.

    Rationale: Avoids situations where a collection task can get out of control and fills the target’s disk; which will draw unwanted attention to the tool and/or the operation.
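
The “secure erase” item can be sketched under the same caveats: this is my code, it performs only the single zero pass the guidance describes as a minimum, it does nothing about filesystem journals, snapshots, or SSD wear-leveling, and portably wiping datetime stamps is left out entirely:

    import os

    def secure_erase(path: str, chunk: int = 1 << 16) -> None:
        """Zero a file's contents, obscure its name, then unlink it."""
        remaining = os.path.getsize(path)
        with open(path, "r+b") as f:
            while remaining > 0:
                n = min(chunk, remaining)
                f.write(b"\x00" * n)   # single pass of zeros over the data
                remaining -= n
            f.flush()
            os.fsync(f.fileno())       # push the zeros through the OS cache
        anon = os.path.join(os.path.dirname(path),
                            "_" * len(os.path.basename(path)))
        os.rename(path, anon)          # replace the filename in the directory
        os.remove(anon)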

Dates/Time:

  • DO use GMT/UTC/Zulu as the time zone when comparing date/time.

    Rationale: Provides consistent behavior and helps ensure “triggers/beacons/etc” fire when expected.

  • DO NOT use US-centric timestamp formats such as MM-DD-YYYY. YYYYMMDD is generally preferred.

    Rationale: Maintains consistency across tools, and avoids associations with the United States.
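
Both date/time items fit in a few lines; here is a sketch of what compliant code might look like (my example, not from the leak):

    from datetime import datetime, timezone

    now = datetime.now(timezone.utc)   # compare trigger/beacon times in UTC only
    stamp = now.strftime("%Y%m%d")     # YYYYMMDD: consistent, not US-centric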

PSP/AV:

  • DO NOT assume a “free” PSP product is the same as a “retail” copy. Test on all SKUs where possible.

    Rationale: While the PSP/AV product may come from the same vendor and appear to have the same features despite having different SKUs, they are not. Test on all SKUs where possible.

  • DO test PSPs with live (or recently live) internet connection where possible. NOTE: This can be a risk vs gain balance that requires careful consideration and should not be haphazardly done with in-development software. It is well known that PSP/AV products with a live internet connection can and do upload samples software based varying criteria.

    Rationale: PSP/AV products exhibit significant differences in behavior and detection when connected to the internet vise not.

Encryption: NOD publishes a Cryptography standard: “NOD Cryptographic Requirements v1.1 TOP SECRET.pdf”. Besides the guidance provided here, the requirements in that document should also be met.

The crypto requirements are complex and interesting. I’ll save commenting on them for another post.

News article.

A note about "false flag" operations

Post Syndicated from Robert Graham original http://blog.erratasec.com/2017/03/a-note-about-false-flag-operations.html

There’s nothing in the CIA #Vault7 leaks that calls into question strong attribution, like Russia being responsible for the DNC hacks. On the other hand, it does call into question weak attribution, like North Korea being responsible for the Sony hacks.

There are really two types of attribution. Strong attribution is a preponderance of evidence that would convince an unbiased, skeptical expert. Weak attribution is flimsy evidence that confirms what people are predisposed to believe.

The DNC hacks have strong evidence pointing to Russia. Not only does all the malware check out, but also other, harder to “false flag” bits, like active command-and-control servers. A serious operator could still false-flag this in theory, if only by bribing people in Russia, but nothing in the CIA dump hints at this.

The Sony hacks have weak evidence pointing to North Korea. One of the items was the use of the RawDisk driver, used both in malware attributed to North Korea and the Sony attacks. This was described as “flimsy” at the time [*]. The CIA dump [*] demonstrates that indeed it’s flimsy — as apparently CIA malware also uses the RawDisk code.

In the coming days, biased partisans are going to seize on the CIA leaks as proof of “false flag” operations, calling into question Russian hacks. No, this isn’t valid. We experts in the industry criticized “malware techniques” as flimsy attribution, long before the Sony attack, and long before the DNC hacks. All the CIA leaks do is prove we were right. On the other hand, the DNC hack attribution is based on more than just this, so nothing in the CIA leaks calls into question that attribution.