Tag Archives: NSA

Visiting the NSA

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2019/05/visiting_the_ns.html

Yesterday, I visited the NSA. It was Cyber Command’s birthday, but that’s not why I was there. I visited as part of the Berklett Cybersecurity Project, run out of the Berkman Klein Center and funded by the Hewlett Foundation. (BERKman hewLETT — get it? We have a web page, but it’s badly out of date.)

It was a full day of meetings, all unclassified but under the Chatham House Rule. Gen. Nakasone welcomed us and took questions at the start. Various senior officials spoke with us on a variety of topics, but mostly focused on three areas:

  • Russian influence operations, both what the NSA and US Cyber Command did during the 2018 election and what they can do in the future;
  • China and the threats to critical infrastructure from untrusted computer hardware, both the 5G network and more broadly;
  • Machine learning, both how to ensure an ML system is compliant with all laws, and how ML can help with other compliance tasks.

It was all interesting. Those first two topics are ones that I am thinking and writing about, and it was good to hear their perspective. I find that I am much more closely aligned with the NSA about cybersecurity than I am about privacy, which made the meeting much less fraught than it would have been if we were discussing Section 702 of the FISA Amendments Act, Section 215 of the USA Freedom Act (up for renewal next year), or any 4th Amendment violations. I don’t think we’re past those issues by any means, but they make up less of what I am working on.

Cryptanalysis of SIMON-32/64

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2019/05/cryptanalysis_o_4.html

A weird paper was posted on the Cryptology ePrint Archive (working link is via the Wayback Machine), claiming an attack against the NSA-designed cipher SIMON. You can read some commentary about it here. Basically, the authors claimed an attack so devastating that they would only publish a zero-knowledge proof of their attack. Which they didn’t. Nor did they publish anything else of interest, near as I can tell.

The paper has since been deleted from the ePrint Archive, which feels like the correct decision on someone’s part.

Another NSA Leaker Identified and Charged

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2019/05/another_nsa_lea.html

In 2015, the Intercept started publishing “The Drone Papers,” based on classified documents leaked by an unknown whistleblower. Today, someone who worked at the NSA, and then at the National Geospatial-Intelligence Agency, was charged with the crime. It is unclear how he was initially identified. It might have been this: “At the agency, prosecutors said, Mr. Hale printed 36 documents from his Top Secret computer.”

The article talks about evidence collected after he was identified and searched:

According to the indictment, in August 2014, Mr. Hale’s cellphone contact list included information for the reporter, and he possessed two thumb drives. One thumb drive contained a page marked “secret” from a classified document that Mr. Hale had printed in February 2014. Prosecutors said Mr. Hale had tried to delete the document from the thumb drive.

The other thumb drive contained Tor software and the Tails operating system, which were recommended by the reporter’s online news outlet in an article published on its website regarding how to anonymously leak documents.

Leaked NSA Hacking Tools

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2019/05/leaked_nsa_hack.html

In 2016, a hacker group calling itself the Shadow Brokers released a trove of 2013 NSA hacking tools and related documents. Most people believe it is a front for the Russian government. Since then, the vulnerabilities and tools have been used by both governments and criminals, and have put the NSA’s ability to secure its own cyberweapons seriously into question.

Now we have learned that the Chinese used the tools fourteen months before the Shadow Brokers released them.

Does this mean that both the Chinese and the Russians stole the same set of NSA tools? Did the Russians steal them from the Chinese, who stole them from us? Did it work the other way? I don’t think anyone has any idea. But this certainly illustrates how dangerous it is for the NSA — or US Cyber Command — to hoard zero-day vulnerabilities.

New Version of Flame Malware Discovered

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2019/04/new_version_of_.html

Flame was discovered in 2012, linked to Stuxnet, and believed to be American in origin. It has recently been linked to more modern malware through new analysis tools that find linkages between different software.

Seems that Flame did not disappear after it was discovered, as was previously thought. (Its controllers used a kill switch to disable and erase it.) It was rewritten and reintroduced.

Note that the article claims that Flame was believed to be Israeli in origin. That’s wrong; most people who have an opinion believe it is from the NSA.

NSA-Inspired Vulnerability Found in Huawei Laptops

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2019/03/nsa-inspired_vu.html

This is an interesting story of a serious vulnerability in a Huawei driver that Microsoft found. The vulnerability is similar in style to the NSA’s DOUBLEPULSAR that was leaked by the Shadow Brokers — believed to be the Russian government — and it’s obvious that this attack copied that technique.

What is less clear is whether the vulnerability — which has been fixed — was put into the Huawei driver accidentally or on purpose.

First Look Media Shutting Down Access to Snowden NSA Archives

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2019/03/first_look_medi.html

The Daily Beast is reporting that First Look Media — home of The Intercept and Glenn Greenwald — is shutting down access to the Snowden archives.

The Intercept has been the home for Greenwald’s subset of Snowden’s NSA documents since 2014, after he parted ways with the Guardian the year before. I don’t know the details of how the archive was stored, but it was offline and well secured — and it was available to journalists for research purposes. Many stories were published based on those archives over the years, albeit fewer in recent years.

The article doesn’t say what “shutting down access” means, but my guess is that it means that First Look Media will no longer make the archive available to outside journalists, and probably not to staff journalists, either. Reading between the lines, I think they will delete what they have.

This doesn’t mean that we’re done with the documents. Glenn Greenwald tweeted:

Both Laura & I have full copies of the archives, as do others. The Intercept has given full access to multiple media orgs, reporters & researchers. I’ve been looking for the right partner — an academic institution or research facility — that has the funds to robustly publish.

I’m sure there are still stories in those NSA documents, but with many of them a decade or more old, they are increasingly history and decreasingly current events. Every capability discussed in the documents needs to be read with a “and then they had ten years to improve this” mentality.

Eventually it’ll all become public, but not before it is 100% history and 0% current events.

Gen. Nakasone on US Cyber Command

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2019/02/gen_nakasone_on.html

Really interesting article by and interview with Paul M. Nakasone (Commander of US Cyber Command, Director of the National Security Agency, and Chief of the Central Security Service) in the current issue of Joint Force Quarterly. He talks about the evolving role of US Cyber Command, and its new posture of “persistent engagement” using a “cyber-persistent force.”

From the article:

We must “defend forward” in cyberspace, as we do in the physical domains. Our naval forces do not defend by staying in port, and our airpower does not remain at airfields. They patrol the seas and skies to ensure they are positioned to defend our country before our borders are crossed. The same logic applies in cyberspace. Persistent engagement of our adversaries in cyberspace cannot be successful if our actions are limited to DOD networks. To defend critical military and national interests, our forces must operate against our enemies on their virtual territory as well. Shifting from a response outlook to a persistence force that defends forward moves our cyber capabilities out of their virtual garrisons, adopting a posture that matches the cyberspace operational environment.

From the interview:

As we think about cyberspace, we should agree on a few foundational concepts. First, our nation is in constant contact with its adversaries; we’re not waiting for adversaries to come to us. Our adversaries understand this, and they are always working to improve that contact. Second, our security is challenged in cyberspace. We have to actively defend; we have to conduct reconnaissance; we have to understand where our adversary is and his capabilities; and we have to understand their intent. Third, superiority in cyberspace is temporary; we may achieve it for a period of time, but it’s ephemeral. That’s why we must operate continuously to seize and maintain the initiative in the face of persistent threats. Why do the threats persist in cyberspace? They persist because the barriers to entry are low and the capabilities are rapidly available and can be easily repurposed. Fourth, in this domain, the advantage favors those who have initiative. If we want to have an advantage in cyberspace, we have to actively work to either improve our defenses, create new accesses, or upgrade our capabilities. This is a domain that requires constant action because we’re going to get reactions from our adversary.


Persistent engagement is the concept that states we are in constant contact with our adversaries in cyberspace, and success is determined by how we enable and act. In persistent engagement, we enable other interagency partners. Whether it’s the FBI or DHS, we enable them with information or intelligence to share with elements of the CIKR [critical infrastructure and key resources] or with select private-sector companies. The recent midterm elections is an example of how we enabled our partners. As part of the Russia Small Group, USCYBERCOM and the National Security Agency [NSA] enabled the FBI and DHS to prevent interference and influence operations aimed at our political processes. Enabling our partners is two-thirds of persistent engagement. The other third rests with our ability to act — that is, how we act against our adversaries in cyberspace. Acting includes defending forward. How do we warn, how do we influence our adversaries, how do we position ourselves in case we have to achieve outcomes in the future? Acting is the concept of operating outside our borders, being outside our networks, to ensure that we understand what our adversaries are doing. If we find ourselves defending inside our own networks, we have lost the initiative and the advantage.


The concept of persistent engagement has to be teamed with “persistent presence” and “persistent innovation.” Persistent presence is what the Intelligence Community is able to provide us to better understand and track our adversaries in cyberspace. The other piece is persistent innovation. In the last couple of years, we have learned that capabilities rapidly change; accesses are tenuous; and tools, techniques, and tradecraft must evolve to keep pace with our adversaries. We rely on operational structures that are enabled with the rapid development of capabilities. Let me offer an example regarding the need for rapid change in technologies. Compare the air and cyberspace domains. Weapons like JDAMs [Joint Direct Attack Munitions] are an important armament for air operations. How long are those JDAMs good for? Perhaps 5, 10, or 15 years, sometimes longer given the adversary. When we buy a capability or tool for cyberspace…we rarely get a prolonged use we can measure in years. Our capabilities rarely last 6 months, let alone 6 years. This is a big difference in two important domains of future conflict. Thus, we will need formations that have ready access to developers.

Solely from a military perspective, these are obviously the right things to be doing. From a societal perspective — from the perspective of a potential arms race — I’m much less sure. I’m also worried about the singular focus on nation-state actors in an environment where capabilities diffuse so quickly. But Cyber Command’s job is not cybersecurity and resilience.

The whole thing is worth reading, regardless of whether you agree or disagree.

EDITED TO ADD (2/26): As an example, US Cyber Command disrupted a Russian troll farm during the 2018 midterm elections.

Massive Ad Fraud Scheme Relied on BGP Hijacking

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2018/12/massive_ad_frau.html

This is a really interesting story of an ad fraud scheme that relied on hijacking the Border Gateway Protocol:

Members of 3ve (pronounced “eve”) used their large reservoir of trusted IP addresses to conceal a fraud that otherwise would have been easy for advertisers to detect. The scheme employed a thousand servers hosted inside data centers to impersonate real human beings who purportedly “viewed” ads that were hosted on bogus pages run by the scammers themselves — who then received a check from ad networks for these billions of fake ad impressions. Normally, a scam of this magnitude coming from such a small pool of server-hosted bots would have stuck out to defrauded advertisers. To camouflage the scam, 3ve operators funneled the servers’ fraudulent page requests through millions of compromised IP addresses.

About one million of those IP addresses belonged to computers, primarily based in the US and the UK, that attackers had infected with botnet software strains known as Boaxxe and Kovter. But at the scale employed by 3ve, not even that number of IP addresses was enough. And that’s where the BGP hijacking came in. The hijacking gave 3ve a nearly limitless supply of high-value IP addresses. Combined with the botnets, the ruse made it seem like millions of real people from some of the most affluent parts of the world were viewing the ads.

Lots of details in the article.

An aphorism I often use in my talks is “expertise flows downhill: today’s top-secret NSA programs become tomorrow’s PhD theses and the next day’s hacking tools.” This is an example of that. BGP hacking — known as “traffic shaping” inside the NSA — has long been a tool of national intelligence agencies. Now it is being used by cybercriminals.

EDITED TO ADD (1/2): Classified NSA presentation on “network shaping.” I don’t know if there is a difference inside the NSA between the two terms.

The PCLOB Needs a Director

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2018/11/the_pclob_needs.html

The US Privacy and Civil Liberties Oversight Board is looking for a director. Among other things, this board has some oversight role over the NSA. More precisely, it can examine what any executive-branch agency is doing about counterterrorism. So it can examine the program of TSA watchlists, NSA anti-terrorism surveillance, and FBI counterterrorism activities.

The PCLOB was established in 2004 (when it didn’t do much), disappeared from 2007 to 2012, and was reconstituted in 2012. It issued a major report on NSA surveillance in 2014. It has dwindled since then, having as few as one member. Last month, the Senate confirmed three new members, including Ed Felten.

So, potentially an important job if anyone out there is interested.

Conspiracy Theories around the "Presidential Alert"

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2018/10/conspiracy_theo_2.html

Noted conspiracy theorist John McAfee tweeted:

The “Presidential alerts”: they are capable of accessing the E911 chip in your phones — giving them full access to your location, microphone, camera and every function of your phone. This not a rant, this is from me, still one of the leading cybersecurity experts. Wake up people!

This is, of course, ridiculous. I don’t even know what an “E911 chip” is. And — honestly — if the NSA wanted in your phone, they would be a lot more subtle than this.

RT has picked up the story, though.

(If they just called it a “FEMA Alert,” there would be a lot less stress about the whole thing.)

NSA Attacks Against Virtual Private Networks

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2018/09/nsa_attacks_aga.html

A 2006 document from the Snowden archives outlines successful NSA operations against “a number of ‘high potential’ virtual private networks, including those of media organization Al Jazeera, the Iraqi military and internet service organizations, and a number of airline reservation systems.”

It’s hard to believe that many of the Snowden documents are now more than a decade old.

Flight Sim Company Threatens Reddit Mods Over “Libelous” DRM Posts

Post Syndicated from Andy original https://torrentfreak.com/flight-sim-company-threatens-reddit-mods-over-libellous-drm-posts-180604/

Earlier this year, in an effort to deal with piracy of their products, flight simulator company FlightSimLabs took drastic action by installing malware on customers’ machines.

The story began when a Reddit user reported something unusual in his download of FlightSimLabs’ A320X module. A file – test.exe – was being flagged up as a ‘Chrome Password Dump’ tool, something which rang alarm bells among flight sim fans.

As additional information was made available, the story became even more sensational. After first dodging the issue with carefully worded statements, FlightSimLabs admitted that it had installed a password dumper onto ALL users’ machines – whether they were pirates or not – in an effort to catch a particular software cracker and launch legal action.

It was an incredible story that no doubt did damage to FlightSimLabs’ reputation. But now the company is at the center of a new storm, again centered around anti-piracy measures and again focused on Reddit.

Just before the weekend, Reddit user /u/walkday reported finding something unusual in his A320X module, the same module that caused the earlier controversy.

“The latest installer of FSLabs’ A320X puts two cmdhost.exe files under ‘system32\’ and ‘SysWOW64\’ of my Windows directory. Despite the name, they don’t open a command-line window,” he reported.

“They’re a part of the authentication because, if you remove them, the A320X won’t get loaded. Does someone here know more about cmdhost.exe? Why does FSLabs give them such a deceptive name and put them in the system folders? I hate them for polluting my system folder unless, of course, it is a dll used by different applications.”

Needless to say, the news that FSLabs were putting files into system folders named to make them look like system files was not well received.

“Hiding something named to resemble Window’s “Console Window Host” process in system folders is a huge red flag,” one user wrote.

“It’s a malware tactic used to deceive users into thinking the executable is a part of the OS, thus being trusted and not deleted. Really dodgy tactic, don’t trust it and don’t trust them,” opined another.

With a disenchanted Reddit userbase simmering away in the background, FSLabs took to Facebook with a statement to quieten down the masses.

“Over the past few hours we have become aware of rumors circulating on social media about the cmdhost file installed by the A320-X and wanted to clear up any confusion or misunderstanding,” the company wrote.

“cmdhost is part of our eSellerate infrastructure – which communicates between the eSellerate server and our product activation interface. It was designed to reduce the number of product activation issues people were having after the FSX release – which have since been resolved.”

The company noted that the file had been checked by all major anti-virus companies and everything had come back clean, which does indeed appear to be the case. Nevertheless, the critical Reddit thread remained, bemoaning the actions of a company which probably should have known better than to irritate fans after February’s debacle. In response, however, FSLabs did just that once again.

In private messages to the moderators of the /r/flightsim sub-Reddit, FSLabs’ Marketing and PR Manager Simon Kelsey suggested that the mods should do something about the thread in question or face possible legal action.

“Just a gentle reminder of Reddit’s obligations as a publisher in order to ensure that any libelous content is taken down as soon as you become aware of it,” Kelsey wrote.

Noting that FSLabs welcomes “robust fair comment and opinion”, Kelsey gave the following advice.

“The ‘cmdhost.exe’ file in question is an entirely above board part of our anti-piracy protection and has been submitted to numerous anti-virus providers in order to verify that it poses no threat. Therefore, ANY suggestion that current or future products pose any threat to users is absolutely false and libelous,” he wrote, adding:

“As we have already outlined in the past, ANY suggestion that any user’s data was compromised during the events of February is entirely false and therefore libelous.”

Noting that FSLabs would “hate for lawyers to have to get involved in this”, Kelsey advised the /r/flightsim mods to ensure that no such claims were allowed to remain on the sub-Reddit.

But after not receiving the response he would’ve liked, Kelsey wrote once again to the mods. He noted that “a number of unsubstantiated and highly defamatory comments” remained online and warned that if something wasn’t done to clean them up, he would have “no option” but to pass the matter to FSLabs’ legal team.

Like the first message, this second effort also failed to have the desired effect. In fact, the moderators’ response was to post an open letter to Kelsey and FSLabs instead.

“We sincerely disagree that you ‘welcome robust fair comment and opinion’, demonstrated by the censorship on your forums and the attempted censorship on our subreddit,” the mods wrote.

“While what you do on your forum is certainly your prerogative, your rules do not extend to Reddit nor the r/flightsim subreddit. Removing content you disagree with is simply not within our purview.”

The letter, which is worth reading in full, refutes Kelsey’s claims and also suggests that critics of FSLabs may have been subjected to Reddit vote manipulation and coordinated efforts to discredit them.

What will happen next is unclear but the matter has now been placed in the hands of Reddit’s administrators, who have agreed to deal with Kelsey and FSLabs personally.

It’s a little early to say for sure but it seems unlikely that this will end in a net positive for FSLabs, no matter what decision Reddit’s admins take.


C is too low level

Post Syndicated from Robert Graham original https://blog.erratasec.com/2018/05/c-is-too-low-level.html

I’m in danger of contradicting myself, after previously pointing out that x86 machine code is a high-level language, but this article claiming C is not a low-level language is bunk. C certainly has some problems, but it’s still the closest language to assembly. This is obvious from the fact that it’s still the fastest compiled language. What we see is a typical academic out of touch with the real world.

The author makes the (wrong) observation that we’ve been stuck emulating the PDP-11 for the past 40 years. C was written for the PDP-11, and since then CPUs have been designed to make C run faster. The author imagines a different world, one in which CPU designers instead target something like LISP or Erlang as their preferred language. This misunderstands the state of the market. CPUs do indeed support lots of different abstractions, and C has evolved to accommodate this.

The author criticizes things like “out-of-order” execution, which has led to the Spectre side-channel vulnerabilities. Out-of-order execution is necessary to make C run faster. The author claims instead that those resources should be spent on having more, slower CPUs with more threads. This sacrifices single-threaded performance in exchange for a lot more threads executing in parallel. The author cites Sparc Tx CPUs as his ideal processor.

But here’s the thing: the Sparc Tx was a failure. To be fair, it’s mostly a failure because most of the time people wanted to run old C code instead of new Erlang code. But it was still a failure at running Erlang.

Time after time, engineers keep finding that “out-of-order”, single-threaded performance is still the winner. A good example is ARM processors for both mobile phones and servers. All the theory points to in-order CPUs as being better, but all the products are out-of-order, because this theory is wrong. The custom ARM cores from Apple and Qualcomm used in most high-end phones are so deeply out-of-order they give Intel CPUs competition. The same is true on the server front with the latest Qualcomm Centriq and Cavium ThunderX2 processors, deeply out of order supporting more than 100 instructions in flight.

The Cavium is especially telling. Its ThunderX CPU had 48 simple cores; it was replaced by the ThunderX2, which has 32 complex, deeply out-of-order cores. The performance increase was massive, even on multithread-friendly workloads. Every competitor to Intel’s dominance in the server space has learned the lesson from Sparc Tx: many wimpy cores is a failure; you need fewer, beefier cores. Yes, they don’t need to be as beefy as Intel’s processors, but they need to be close.

Even Intel’s “Xeon Phi” custom chip learned this lesson. This is their GPU-like chip, running 60 cores with 512-bit wide “vector” (sic) instructions, designed for supercomputer applications. Its first version was purely in-order. Its current version is slightly out-of-order. It supports four threads per core and focuses on basic number crunching, so in-order cores seem to be the right approach, but Intel found in this case that out-of-order processing still provided a benefit. Practice is different than theory.

As an academic, the author of the above article focuses on abstractions. The criticism of C is that it has the wrong abstractions which are hard to optimize, and that if we instead expressed things in the right abstractions, it would be easier to optimize.

This is an intellectually compelling argument, but so far bunk.

The reason is that while the theoretical base language has issues, everyone programs using extensions to the language, like “intrinsics” (C ‘functions’ that map to assembly instructions). Programmers write libraries using these intrinsics, which then the rest of the normal programmers use. In other words, if your criticism is that C is not itself low level enough, it still provides the best access to low level capabilities.
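To make that concrete, here is a minimal sketch of what an intrinsic looks like in practice; it is my own illustration, not something from the article. The SSE functions _mm_loadu_ps, _mm_add_ps, and _mm_storeu_ps are real intrinsics that map more or less one-to-one onto x86 instructions; the surrounding function is invented for the example.

    #include <immintrin.h>  /* x86 SIMD intrinsics */

    /* Add two float arrays four lanes at a time.
     * Each _mm_* call compiles to roughly one x86 SIMD instruction,
     * so the "C" here is a thin veneer over the hardware.
     * For brevity, assumes n is a multiple of 4. */
    void add_floats(const float *a, const float *b, float *out, int n)
    {
        for (int i = 0; i < n; i += 4) {
            __m128 va = _mm_loadu_ps(&a[i]);
            __m128 vb = _mm_loadu_ps(&b[i]);
            _mm_storeu_ps(&out[i], _mm_add_ps(va, vb));
        }
    }

Library authors write code like this once; everyone else just calls the library and gets the low-level capability without ever seeing the instructions.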

Given that C can access new functionality in CPUs, CPU designers add new paradigms, from SIMD to transaction processing. In other words, while in the 1980s CPUs were designed to optimize C (stacks, scaled pointers), these days CPUs are designed to optimize tasks regardless of language.

The author of that article criticizes the memory/cache hierarchy, claiming it has problems. Yes, it has problems, but only compared to how well it normally works. The author praises the many simple cores/threads idea as hiding memory latency with little caching, but misses the point that caches also dramatically increase memory bandwidth. Intel processors are optimized to read a whopping 256 bits every clock cycle from L1 cache. Main memory bandwidth is orders of magnitude slower.
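To put rough numbers on that, here is a back-of-the-envelope calculation; it is my own illustration, assuming a 3 GHz clock, eight cores, and dual-channel DDR4-3200, with the 32 bytes per cycle being the 256 bits mentioned above:

    32 bytes/cycle x 3 GHz   ~  96 GB/s of L1 bandwidth per core
    96 GB/s x 8 cores        ~ 768 GB/s of aggregate L1 bandwidth
    dual-channel DDR4-3200   ~  51 GB/s, shared by all cores

And that is before counting latency, where the gap between L1 and DRAM is another order of magnitude or two.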

The author goes on to criticize cache coherency as a problem. C uses it, but other languages like Erlang don’t need it. But that’s largely due to the problems each language solves. Erlang solves the problem where a large number of threads work on largely independent tasks, needing to send only small messages to each other across threads. The problem C solves is when you need many threads working on a huge, common set of data.

For example, consider the “intrusion prevention system”. Any thread can process any incoming packet that corresponds to any region of memory. There’s no practical way of solving this problem without a huge coherent cache. It doesn’t matter which language or abstractions you use, it’s the fundamental constraint of the problem being solved. RDMA is an important concept that’s moved from supercomputer applications to the data center, such as with memcached. Again, we have the problem of huge quantities (terabytes worth) shared among threads rather than small quantities (kilobytes).
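Here is a minimal sketch of that shared-data pattern; it is my own example, not code from the article, and the table layout and the fake packet hash are invented for illustration. Several worker threads all update one shared table of counters, which is only cheap because the hardware keeps their caches coherent.

    #include <pthread.h>
    #include <stdatomic.h>
    #include <stdio.h>

    #define BUCKETS 4096
    #define THREADS 4
    #define PACKETS_PER_THREAD 1000000

    /* One shared table; any thread may touch any bucket. */
    static atomic_long table[BUCKETS];

    static void *worker(void *arg)
    {
        unsigned seed = (unsigned)(long)arg;        /* per-thread PRNG seed */
        for (int i = 0; i < PACKETS_PER_THREAD; i++) {
            seed = seed * 1103515245u + 12345u;     /* stand-in for a packet hash */
            atomic_fetch_add(&table[seed % BUCKETS], 1);
        }
        return NULL;
    }

    int main(void)
    {
        pthread_t t[THREADS];
        for (long i = 0; i < THREADS; i++)
            pthread_create(&t[i], NULL, worker, (void *)i);
        for (int i = 0; i < THREADS; i++)
            pthread_join(t[i], NULL);

        long total = 0;
        for (int i = 0; i < BUCKETS; i++)
            total += atomic_load(&table[i]);
        printf("counted %ld packets\n", total);     /* 4,000,000 */
        return 0;
    }

An Erlang-style design would instead give each thread its own private state and pass messages between them; that works well for independent tasks, but not when every thread genuinely needs the whole table.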

The fundamental issue the author of the paper is ignoring is decreasing marginal returns. Moore’s Law has gifted us more transistors than we can usefully use. We can’t apply those additional transistors to just one thing, because the useful returns we get diminish.

For example, Intel CPUs have two hardware threads per core. That’s because there are good returns by adding a single additional thread. However, the usefulness of adding a third or fourth thread decreases. That’s why many CPUs have only two threads, or sometimes four threads, but no CPU has 16 threads per core.

You can apply the same discussion to any aspect of the CPU, from register count, to SIMD width, to cache size, to out-of-order depth, and so on. Rather than focusing on one of these things and increasing it to the extreme, CPU designers make each a bit larger with every process tick that adds more transistors to the chip.

The same applies to cores. It’s why the “more, simpler cores” strategy fails, because more cores have their own decreasing marginal returns. Instead of adding cores tied to limited memory bandwidth, it’s better to add more cache. Such caches already increase the size of the cores, so at some point it’s more effective to add a few out-of-order features to each core rather than more cores. And so on.

The question isn’t whether we can change this paradigm and radically redesign CPUs to match some academic’s view of the perfect abstraction. Instead, the goal is to find new uses for those additional transistors. For example, “message passing” is a useful abstraction in languages like Go and Erlang that’s often more useful than sharing memory. It’s implemented with shared memory and atomic instructions, but I can’t help but think it could be done better with direct hardware support.
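As a minimal sketch of that last point (my own, not the author’s): here is a single-producer/single-consumer “channel” built from nothing but shared memory and C11 atomics. Go channels and Erlang mailboxes are far more elaborate, but this is the kind of primitive they ultimately reduce to, and the part a CPU could conceivably support directly.

    #include <stdatomic.h>
    #include <stdbool.h>

    #define QSIZE 256   /* power of two, so unsigned wraparound stays correct */

    /* A one-way message queue: one producer thread, one consumer thread,
     * no locks, just two atomic indices over a shared array. */
    struct channel {
        _Atomic unsigned head;   /* next slot to write */
        _Atomic unsigned tail;   /* next slot to read  */
        int slots[QSIZE];
    };

    static bool chan_send(struct channel *c, int msg)
    {
        unsigned head = atomic_load_explicit(&c->head, memory_order_relaxed);
        unsigned tail = atomic_load_explicit(&c->tail, memory_order_acquire);
        if (head - tail == QSIZE)
            return false;                            /* channel full */
        c->slots[head % QSIZE] = msg;
        atomic_store_explicit(&c->head, head + 1, memory_order_release);
        return true;
    }

    static bool chan_recv(struct channel *c, int *msg)
    {
        unsigned tail = atomic_load_explicit(&c->tail, memory_order_relaxed);
        unsigned head = atomic_load_explicit(&c->head, memory_order_acquire);
        if (head == tail)
            return false;                            /* channel empty */
        *msg = c->slots[tail % QSIZE];
        atomic_store_explicit(&c->tail, tail + 1, memory_order_release);
        return true;
    }

One thread calls chan_send, another calls chan_recv, and the “message passing” abstraction falls out of ordinary coherent shared memory plus a couple of atomic operations.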

Of course, as soon as they do that, it’ll become an intrinsic in C, then added to languages like Go and Erlang.


Academics live in an ideal world of abstractions; the rest of us live in practical reality. The reality is that the vast majority of programmers work with the C family of languages (JavaScript, Go, etc.), whereas academics love the epiphanies they learned using other languages, especially functional languages. CPUs are only superficially designed to run C and maintain “PDP-11 compatibility”. Instead, they keep adding features to support other abstractions, abstractions available to C. They are driven by decreasing marginal returns — they would love to add new abstractions to the hardware because it’s a cheap way to make use of additional transistors. Academics are wrong to believe that the entire system needs to be redesigned from scratch. Instead, they just need to come up with new abstractions CPU designers can add.

Japan’s Directorate for Signals Intelligence

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2018/05/japans_director.html

The Intercept has a long article on Japan’s equivalent of the NSA: the Directorate for Signals Intelligence. Interesting, but nothing really surprising.

The directorate has a history that dates back to the 1950s; its role is to eavesdrop on communications. But its operations remain so highly classified that the Japanese government has disclosed little about its work, even the location of its headquarters. Most Japanese officials, except for a select few of the prime minister’s inner circle, are kept in the dark about the directorate’s activities, which are regulated by a limited legal framework and not subject to any independent oversight.

Now, a new investigation by the Japanese broadcaster NHK — produced in collaboration with The Intercept — reveals for the first time details about the inner workings of Japan’s opaque spy community. Based on classified documents and interviews with current and former officials familiar with the agency’s intelligence work, the investigation shines light on a previously undisclosed internet surveillance program and a spy hub in the south of Japan that is used to monitor phone calls and emails passing across communications satellites.

The article includes some new documents from the Snowden archive.