Tag Archives: attribution

DMCA Used to Remove Ad Server URL From Easylist Ad Blocklist

Post Syndicated from Andy original https://torrentfreak.com/dmca-used-to-remove-ad-server-url-from-easylist-ad-blocklist-170811/

The default business model on the Internet is “free” for consumers. Users largely expect websites to load without paying a dime but of course, there’s no such thing as a free lunch. To this end, millions of websites are funded by advertising revenue.

Sensible sites ensure that any advertising displayed is unobtrusive to the visitor but lots seem to think that bombarding users with endless ads, popups, and other hindrances is the best way to do business. As a result, ad blockers are now deployed by millions of people online.

In order to function, ad-blocking tools – such as uBlock Origin or Adblock – utilize lists of advertising domains compiled by third parties. One of the most popular is Easylist, which is distributed by authors fanboy, MonztA, Famlam, and Khrin, under dual Creative Commons Attribution-ShareAlike and GNU General Public Licenses.
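
As a rough illustration of the mechanism (a simplified sketch, not Easylist's actual filter syntax or any blocker's real matching engine), a blocker essentially checks each requested hostname against the list of advertising domains:

    # Simplified sketch: check a requested hostname against a set of blocked
    # advertising domains. Real blockers use a richer filter syntax and far
    # more efficient matching, but the core idea is a domain lookup like this.
    BLOCKLIST = {"functionalclam.com", "ads.example.net"}  # illustrative entries

    def is_blocked(hostname: str) -> bool:
        """Return True if the hostname or any of its parent domains is listed."""
        parts = hostname.lower().split(".")
        return any(".".join(parts[i:]) in BLOCKLIST for i in range(len(parts)))

    print(is_blocked("cdn.functionalclam.com"))  # True: the request would be blocked
    print(is_blocked("torrentfreak.com"))        # False: the request goes through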

With the freedom afforded by those licenses, copyright tends not to figure high on the agenda for Easylist. However, a legal problem that has just raised its head is causing serious concern among those in the ad-blocking community.

Two days ago a somewhat unusual commit appeared in the Easylist repo on Github. As shown in the image below, a domain URL previously added to Easylist had been removed following a DMCA takedown notice filed with Github.

Domain text taken down by DMCA?

The DMCA notice in question has not yet been published but it’s clear that it targets the domain ‘functionalclam.com’. A user called ‘ameshkov’ helpfully points out a post by a new Github user called ‘DMCAHelper’ which coincided with the start of the takedown process more than three weeks ago.

A domain in a list circumvents copyright controls?

Aside from the curious claims of a URL “circumventing copyright access controls” (domains themselves cannot be copyrighted), the big questions are (i) who filed the complaint and (ii) who operates Functionalclam.com? The domain WHOIS is hidden but according to a helpful sleuth on Github, it’s operated by anti-ad-blocking company Admiral.

Ad-blocking means money down the drain….

If that is indeed the case, we have the intriguing prospect of a startup attempting to protect its business model by using a novel interpretation of copyright law to have a domain name removed from a list. How this will pan out is unclear but a notice recently published on Functionalclam.com suggests the route the company wishes to take.

“This domain is used by digital publishers to control access to copyrighted content in accordance with the Digital Millenium Copyright Act and understand how visitors are accessing their copyrighted content,” the notice begins.

Combined with the comments by DMCAHelper on Github, this statement suggests that the complainants believe that interference with the ad display process (ads themselves could be the “copyrighted content” in question) represents a breach of section 1201 of the DMCA.

If it does, that could have huge consequences for online advertising but we will need to see the original DMCA notice to have a clearer idea of what this is all about. Thus far, Github hasn’t published it but already interest is growing. A representative from the EFF has already contacted the Easylist team, so this battle could heat up pretty quickly.


RIAA Says Artists Don’t Need “Moral Rights,” Artists Disagree

Post Syndicated from Ernesto original https://torrentfreak.com/riaa-says-artists-dont-need-moral-rights-artists-disagree-170521/

Most people who create something like to be credited for their work. Whether you make a video, song, photo, or blog post, it feels ‘right’ to receive recognition.

The right to be credited is part of the so-called “moral rights,” which are baked into many copyright laws around the world, adopted at the international level through the Berne Convention.

However, in the United States, this is not the case. The US didn’t sign the Berne Convention right away and opted out of the “moral rights” provision when it eventually joined.

Now that the U.S. Copyright Office is looking into ways to improve current copyright law, the issue has been brought to the forefront again. The Government recently launched a consultation to hear the thoughts of various stakeholders, which resulted in several noteworthy contributions.

As it turns out, both the MPAA and RIAA are against the introduction of statutory moral rights for artists. They believe that the current system works well and they fear that it’s impractical and expensive to credit all creators for their contributions.

The MPAA stresses that new moral rights may make it harder for producers to distribute their work and may violate the First Amendment rights of producers, artists, and third parties who wish to use the work of others.

In the movie industry, many employees are not credited for their work. They get paid, but can’t claim any “rights” to the products they create, something the MPAA wants to keep intact.

“Further statutory recognition of the moral rights of attribution and integrity risks upsetting this well-functioning system that has made the United States the unrivaled world leader in motion picture production for over a century,” they stress.

The RIAA has a similar view, although the central argument is somewhat different.

The US record labels say that they do everything they can to generate name recognition for their main artists. However, crediting everyone who’s involved in making a song, such as the writer, is not always a good idea.

“A new statutory attribution right, in addition to being unnecessary, would likely have significant unintended consequences,” the RIAA writes (pdf).

The RIAA explains that the music industry has weathered several dramatic shifts over the past two decades. They argue that the transition from physical to digital music – and later streaming – while being confronted with massive piracy, has taken its toll.

There are signs of improvement now, but if moral rights are extended, the RIAA fears that everything might collapse once again.

“After fifteen years of declining revenues, the recorded music industry outlook is finally showing signs of improvement. This fragile recovery results largely from growing consumer adoption of new streaming models..,” the RIAA writes.

“We urge the Office to avoid legislative proposals that could hamper this nascent recovery by injecting significant additional risk, uncertainty, and complexity into the recorded music business.”

According to the RIAA, it would be costly for streaming services to credit everyone who’s involved in the creative process. In addition, they simply might not have the screen real estate to pull this off.

“If a statutory attribution right suddenly required these services to provide attribution to others involved in the creative process, that would presumably require costly changes to their user interfaces and push them up against the size limitations of their display screens.”

This means less money for the artists and more clutter on the screen, according to the music group. Music fans probably wouldn’t want to see the list of everyone who worked on a song anyway, they claim.

“To continue growing, streaming services must provide a compelling product to consumers. Providing a long list of on-screen attributions would not make for an engaging or useful experience for consumers,” RIAA writes.

The streaming example is just one of the many issues that may arise, in the eyes of the record labels. They also expect problems with tracks that are played on the radio, or in commercials, where full credits are rarely given.

Interestingly, many of the artists the RIAA claims to represent don’t agree with the group’s comments.

Music Creators North America and The Future of Music Coalition, for example, believe that artists should have statutory moral rights. The latter group argues that, currently, small artists are often powerless against large corporations.

“Moral rights would serve to alleviate the powerlessness faced by creators who often must relinquish their copyright to make a living from their work. These creators should still be provided some right of attribution and integrity as these affect a creator’s reputation and ultimately livelihood.”

The Future of Music Coalition disagrees with the paternalistic perspective that the public isn’t interested in detailed information about the creators of music.

“While interest levels may vary, a significant portion of the public has a great interest in understanding who exactly contributed to the creation works of art which they admire,” they write (pdf).

Knowing who’s involved requires attribution, so it’s crucial that this information becomes available, they argue.

“Music enthusiasts revel in the details of music they adore, but when care is not taken to document and preserve that information, those details can often lost over time and eventually unattainable.”

“To argue that the public generally has a homogenously disinterested opinion of creators is insulting both to the public and to creators,” The Future of Music Coalition adds.

The above shows that the rights of artists are clearly not always aligned with the interests of record labels.

Interestingly, the RIAA and MPAA do agree with major tech companies and civil rights groups such as EFF and Public Knowledge. These are also against new moral rights, albeit for different reasons.

It’s now up to the U.S. Copyright Office to determine if change is indeed required, or if everything will remain the same.


Some notes on Trump’s cybersecurity Executive Order

Post Syndicated from Robert Graham original http://blog.erratasec.com/2017/05/some-notes-on-trumps-cybersecurity.html

President Trump has finally signed an executive order on “cybersecurity”. The first draft, circulated during his first weeks in power, was hilariously ignorant. The current draft, though, is pretty reasonable as such things go. I’m just reading the plain language of the draft as a cybersecurity expert, picking out the bits that interest me. In reality, there’s probably all sorts of politics in the background that I’m missing, so I may be wildly off-base.

Holding managers accountable

This is a great idea in theory. But government heads are rarely accountable for anything, so it’s hard to see if they’ll have the nerve to implement this in practice. When the next breach happens, we’ll see if anybody gets fired.
“antiquated and difficult to defend Information Technology”

The government uses laughably old computers sometimes. Forces in government want to upgrade them. This won’t work. Instead of replacing old computers, the budget will simply be used to add new computers. The old computers will still stick around.
“Legacy” is a problem that money can’t solve. Programmers know how to build small things, but not big things. Everything starts out small, then becomes big gradually over time through constant small additions. What you have now is big legacy systems. Attempts to replace a big system with a built-from-scratch big system will fail, because engineers don’t know how to build big systems. This will suck down any amount of budget you have with failed multi-million dollar projects.
It’s not the antiquated systems that are usually the problem, but more modern systems. Antiquated systems can usually be protected by simply sticking a firewall or proxy in front of them.

“address immediate unmet budgetary needs necessary to manage risk”

Nobody cares about cybersecurity. Instead, it’s a thing people exploit in order to increase their budget. Instead of doing the best security with the budget they have, they insist they can’t secure the network without more money.

An alternate way to address gaps in cybersecurity is instead to do less. Reduce exposure to the web, provide fewer services, reduce functionality of desktop computers, and so on. Insisting that more money is the only way to address unmet needs is the strategy of the incompetent.

Use the NIST framework
Probably the biggest thing in the EO is that it forces everyone to use the NIST cybersecurity framework.
The NIST Framework simply documents all the things that organizations commonly do to secure themselves, such as running intrusion-detection systems or imposing rules for good passwords.
There are two problems with the NIST Framework. The first is that no organization does all the things listed. The second is that many organizations don’t do the things well.
Password rules are a good example. Organizations typically had bad rules, such as frequent changes and complexity standards. So the NIST Framework documented them. But cybersecurity experts have long opposed those complex rules, so have been fighting NIST on them.

Another good example is intrusion-detection. These days, I scan the entire Internet, setting off everyone’s intrusion-detection systems. I can see first hand that they are doing intrusion-detection wrong. But the NIST Framework recommends they do it, because many organizations do it, but the NIST Framework doesn’t demand they do it well.
When this EO forces everyone to follow the NIST Framework, then, it’s likely just going to increase the amount of money spent on cybersecurity without increasing effectiveness. That’s not necessarily a bad thing: while probably ineffective or counterproductive in the short run, there might be long-term benefit aligning everyone to thinking about the problem the same way.
Note that “following” the NIST Framework doesn’t mean “doing” everything. Instead, it means documenting how you do each thing, giving a reason why you aren’t doing it, or (most often) stating your plan to eventually do it.
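
To picture what that documentation looks like in practice, here is a purely hypothetical sketch in Python: a per-control record of what you do, what you plan to do, and what you have justified not doing. The control IDs, statuses, and notes are illustrative, not drawn from any real agency.

    # Hypothetical sketch of a NIST Cybersecurity Framework "profile": for each
    # control, record whether you do it, when you plan to, or why you don't.
    profile = {
        "PR.AC-1": {"status": "implemented", "notes": "centralized credential management"},
        "DE.CM-1": {"status": "planned", "notes": "network monitoring rollout next fiscal year"},
        "RS.AN-1": {"status": "documented exception", "notes": "accepted risk, reviewed annually"},
    }

    def summarize(profile: dict) -> dict:
        """Roll up controls by status, the kind of report auditors ask for."""
        counts: dict = {}
        for record in profile.values():
            counts[record["status"]] = counts.get(record["status"], 0) + 1
        return counts

    print(summarize(profile))
    # {'implemented': 1, 'planned': 1, 'documented exception': 1}
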
preference for shared IT services for email, cloud, and cybersecurity
Different departments are hostile toward each other, with each doing things their own way. Obviously, the thinking goes, if more departments shared resources, they could cut costs with economies of scale. Also obviously, it’ll stop the many home-grown wrong solutions that individual departments come up with.
In other words, there should be a single government GMail-type service that does e-mail both securely and reliably.
But it won’t turn out this way. Government does not have “economies of scale” but “incompetence at scale”. It means a single GMail-like service that is expensive, unreliable, and in the end, probably insecure. It means we can look forward to government breaches that instead of affecting one department affecting all departments.

Yes, you can point to individual organizations that do things poorly, but what you are ignoring is the organizations that do it well. When you make them all share a solution, it’s going to be the average of all these things — meaning those who do something well are going to move to a worse solution.

I suppose this was inserted in there so that big government cybersecurity companies can now walk into agencies, point to where they are deficient on the NIST Framework, and say “sign here to do this with our shared cybersecurity service”.
“identify authorities and capabilities that agencies could employ to support the cybersecurity efforts of critical infrastructure entities”
What this means is “how can we help secure the power grid?”.
What it means in practice is that fiasco in the Vermont power grid. The DHS produced a report containing IoCs (“indicators of compromise”) of Russian hackers in the DNC hack. Among the things it identified was that the hackers used Yahoo! email. They pushed these IoCs out as signatures in their “Einstein” intrusion-detection systems located at many power grid locations. The next person who logged into their Yahoo! email was then flagged as a Russian hacker, causing all sorts of hilarity to ensue, such as still-uncorrected stories by the Washington Post about how the Russians hacked our power grid.
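
To see how an indicator that broad misfires, here is a minimal, hypothetical sketch of naive IoC matching; the indicator list and log entries are invented for illustration.

    # Hypothetical sketch of naive IoC matching: flag any connection to anything
    # on the indicator list, with no regard for context.
    IOCS = {"mail.yahoo.com"}  # an overly broad "indicator of compromise"

    connection_log = [
        {"host": "utility-laptop-07", "dest": "mail.yahoo.com", "note": "employee checking personal email"},
        {"host": "scada-hmi-02", "dest": "203.0.113.50", "note": "routine telemetry"},
    ]

    for event in connection_log:
        if event["dest"] in IOCS:
            # The benign webmail login trips the sensor and gets reported as "Russian hacker" activity.
            print(f"ALERT: {event['host']} contacted {event['dest']} ({event['note']})")
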
The upshot is that federal government help is also going to include much government hindrance. They really are this stupid sometimes and there is no way to fix this stupid. (Seriously, the DHS still insists it did the right thing pushing out the Yahoo IoCs).
Resilience Against Botnets and Other Automated, Distributed Threats

The government wants to address botnets because it’s just the sort of problem they love, mass outages across the entire Internet caused by a million machines.

But frankly, botnets don’t even make the top 10 list of problems they should be addressing. Number one is clearly “phishing” — you know, the attack that’s been getting into the DNC and Podesta e-mails, influencing the election. You know, the attack that Gizmodo recently showed the Trump administration is partially vulnerable to. You know, the attack that most people blame for that huge OPM hack. Replace the entire Executive Order with “stop phishing”, and you’d go further toward fixing federal government security.

But solving phishing is tough. To begin with, it requires a rethink of how the government does email, and of how desktop systems should be managed. So the government avoids complex problems it can’t understand to focus on the simple things it can — botnets.

Dealing with “prolonged power outage associated with a significant cyber incident”

The government has had the hots for this since 2001, even though there’s really been no attack on the American grid. After the Russian attacks against Ukraine’s power grid, the issue is heating up.

Nationwide attacks aren’t really a threat in America, yet. We have 10,000 different companies involved with different systems throughout the country. Hacking them all at once is unlikely to succeed. What’s funny is that it’s the government’s attempts to standardize everything that are likely to be our downfall, such as sticking Einstein sensors everywhere.

Instead of trying to make the grid unhackable, they should be trying to lessen our reliance upon it. They should be encouraging things like Tesla PowerWalls, solar panels on roofs, backup generators, and so on. Indeed, rather than just worrying about industrial systems going dark, industrial backup power generation should be considered as a source of grid backup. Factories and even ships were used to supplant the electric power grid in Japan after the 2011 tsunami, for example. The less we rely on the grid, the less a blackout will hurt us.

“cybersecurity risks facing the defense industrial base, including its supply chain”

So “supply chain” cybersecurity is increasingly becoming a thing. Almost anything electronic comes with millions of lines of code, silicon chips, and other things that affect the security of the system. In this context, they may be worried about intentional subversion of systems, such as that recent article that worried about Kaspersky anti-virus in government systems. However, the bigger concern is the zillions of accidental vulnerabilities waiting to be discovered. It’s impractical for a vendor to secure a product, because it’s built from so many components the vendor doesn’t understand.

“strategic options for deterring adversaries and better protecting the American people from cyber threats”

Deterrence is a funny word.

Rumor has it that we forced China to back off on hacking by impressing them with our own hacking ability, such as reaching into China and blowing stuff up. This works because the Chinese government remains in power because things are going well in China. If there’s a hiccup in economic growth, there will be mass actions against the government.

But for our other cyber adversaries (Russia, Iran, North Korea), things already suck in their countries. It’s hard to see how we can make things worse by hacking them. They also have a stranglehold on the media, so hacking in and publicizing their leaders’ weird sex fetishes and offshore accounts isn’t going to work either.

Also, deterrence relies upon “attribution”, which is hard. While news stories claim last year’s expulsion of Russian diplomats was due to election hacking, that wasn’t the stated reason. Instead, the claimed reason was Russia’s interference with diplomats in Europe, such as breaking into diplomats’ homes and pooping on their dining room tables. We know it’s them when they are brazen (as was the case with Chinese hacking), but other hacks are harder to attribute.

Deterrence of nation states ignores the reality that much of the hacking against our government comes from non-state actors. It’s not clear how much of all this Russian hacking is actually directed by the government. Deterrence policies may be better directed at individuals, such as the recent arrest of a Russian hacker while traveling in Spain. We can’t get Russian or Chinese hackers in their own countries, so we have to wait until they leave.

Anyway, “deterrence” is one of those real-world concepts that is hard to shoehorn into a cyber (“cyber-deterrence”) equivalent. It encourages lots of bad thinking, such as export controls on “cyber-weapons” to deter foreign countries from using them.

“educate and train the American cybersecurity workforce of the future”

The problem isn’t that we lack CISSPs. Such blanket certifications devalue the technical expertise of the real experts. The solution is to empower the technical experts we already have.

In other words, mandate that whoever is the “cyberczar” is a technical expert, like how the Surgeon General must be a medical expert, or how an economic adviser must be an economic expert. For over 15 years, we’ve had a parade of non-technical people named “cyberczar” who haven’t been experts.

Once you tell people technical expertise is valued, then by nature more students will become technical experts.

BTW, the best technical experts are software engineers and sysadmins. The best cybersecurity for Windows is already built into Windows, whose sysadmins need to be empowered to use those solutions. Instead, they are often overridden by a clueless cybersecurity consultant who insists on making the organization buy a third-party product instead that does a poorer job. We need more technical expertise in our organizations, sure, but not necessarily more cybersecurity professionals.

Conclusion

This is really a government document, and government people will be able to explain it better than I can. This is just how I see it as a technical expert who is a government outsider.

My guess is the most lasting consequential thing will be making everyone follow the NIST Framework, and the rest will just be a lot of aspirational stuff that’ll be ignored.

Who is Publishing NSA and CIA Secrets, and Why?

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2017/05/who_is_publishi.html

There’s something going on inside the intelligence communities in at least two countries, and we have no idea what it is.

Consider these three data points. One: someone, probably a country’s intelligence organization, is dumping massive amounts of cyberattack tools belonging to the NSA onto the Internet. Two: someone else, or maybe the same someone, is doing the same thing to the CIA.

Three: in March, NSA Deputy Director Richard Ledgett described how the NSA penetrated the computer networks of a Russian intelligence agency and was able to monitor them as they attacked the US State Department in 2014. Even more explicitly, a US ally — my guess is the UK — was not only hacking the Russian intelligence agency’s computers, but also the surveillance cameras inside their building. “They [the US ally] monitored the [Russian] hackers as they maneuvered inside the U.S. systems and as they walked in and out of the workspace, and were able to see faces, the officials said.”

Countries don’t often reveal intelligence capabilities: “sources and methods.” Because it gives their adversaries important information about what to fix, it’s a deliberate decision done with good reason. And it’s not just the target country who learns from a reveal. When the US announces that it can see through the cameras inside the buildings of Russia’s cyber warriors, other countries immediately check the security of their own cameras.

With all this in mind, let’s talk about the recent leaks at NSA and the CIA.

Last year, a previously unknown group called the Shadow Brokers started releasing NSA hacking tools and documents from about three years ago. They continued to do so this year — five sets of files in all — and have implied that more classified documents are to come. We don’t know how they got the files. When the Shadow Brokers first emerged, the general consensus was that someone had found and hacked an external NSA staging server. These are third-party computers that the NSA’s TAO hackers use to launch attacks from. Those servers are necessarily stocked with TAO attack tools. This matched the leaks, which included a “script” directory and working attack notes. We’re not sure if someone inside the NSA made a mistake that left these files exposed, or if the hackers that found the cache got lucky.

That explanation stopped making sense after the latest Shadow Brokers release, which included attack tools against Windows, PowerPoint presentations, and operational notes — documents that are definitely not going to be on an external NSA staging server. A credible theory, which I first heard from Nicholas Weaver, is that the Shadow Brokers are publishing NSA data from multiple sources. The first leaks were from an external staging server, but the more recent leaks are from inside the NSA itself.

So what happened? Did someone inside the NSA accidentally mount the wrong server on some external network? That’s possible, but seems very unlikely. Did someone hack the NSA itself? Could there be a mole inside the NSA, as Kevin Poulsen speculated?

If it is a mole, my guess is that he’s already been arrested. There are enough individualities in the files to pinpoint exactly where and when they came from. Surely the NSA knows who could have taken the files. No country would burn a mole working for it by publishing what he delivered. Intelligence agencies know that if they betray a source this severely, they’ll never get another one.

That points to two options. The first is that the files came from Hal Martin. He’s the NSA contractor who was arrested in August for hoarding agency secrets in his house for two years. He can’t be the publisher, because the Shadow Brokers are in business even though he is in prison. But maybe the leaker got the documents from his stash: either because Martin gave the documents to them or because he himself was hacked. The dates line up, so it’s theoretically possible, but the contents of the documents speak to someone with a different sort of access. There’s also nothing in the public indictment against Martin that speaks to his selling secrets to a foreign power, and I think it’s exactly the sort of thing that the NSA would leak. But maybe I’m wrong about all of this; Occam’s Razor suggests that it’s him.

The other option is a mysterious second NSA leak of cyberattack tools. The only thing I have ever heard about this is from a Washington Post story about Martin: “But there was a second, previously undisclosed breach of cybertools, discovered in the summer of 2015, which was also carried out by a TAO employee, one official said. That individual also has been arrested, but his case has not been made public. The individual is not thought to have shared the material with another country, the official said.” But “not thought to have” is not the same as not having done so.

On the other hand, it’s possible that someone penetrated the internal NSA network. We’ve already seen NSA tools that can do that kind of thing to other networks. That would be huge, and explain why there were calls to fire NSA Director Mike Rogers last year.

The CIA leak is both similar and different. It consists of a series of attack tools from about a year ago. The most educated guess amongst people who know stuff is that the data is from an almost-certainly air-gapped internal development wiki — a Confluence server — and either someone on the inside was somehow coerced into giving up a copy of it, or someone on the outside hacked into the CIA and got themselves a copy. They turned the documents over to WikiLeaks, which continues to publish it.

This is also a really big deal, and hugely damaging for the CIA. Those tools were new, and they’re impressive. I have been told that the CIA is desperately trying to hire coders to replace what was lost.

For both of these leaks, one big question is attribution: who did this? A whistleblower wouldn’t sit on attack tools for years before publishing. A whistleblower would act more like Snowden or Manning, publishing immediately — and publishing documents that discuss what the US is doing to whom, not simply a bunch of attack tools. It just doesn’t make sense. Neither do random hackers. Or cybercriminals. I think it’s being done by a country or countries.

My guess was, and is still, Russia in both cases. Here’s my reasoning. Whoever got this information years before and is leaking it now has to 1) be capable of hacking the NSA and/or the CIA, and 2) be willing to publish it all. Countries like Israel and France are certainly capable, but wouldn’t ever publish. Countries like North Korea or Iran probably aren’t capable. The list of countries who fit both criteria is small: Russia, China, and…and…and I’m out of ideas. And China is currently trying to make nice with the US.

Last August, Edward Snowden guessed Russia, too.

So Russia — or someone else — steals these secrets, and presumably uses them to both defend its own networks and hack other countries while deflecting blame for a couple of years. For it to publish now means that the intelligence value of the information is now lower than the embarrassment value to the NSA and CIA. This could be because the US figured out that its tools were hacked, and maybe even by whom; which would make the tools less valuable against US government targets, although still valuable against third parties.

The message that comes with publishing seems clear to me: “We are so deep into your business that we don’t care if we burn these few-years-old capabilities, as well as the fact that we have them. There’s just nothing you can do about it.” It’s bragging.

Which is exactly the same thing Ledgett is doing to the Russians. Maybe the capabilities he talked about are long gone, so there’s nothing lost in exposing sources and methods. Or maybe he too is bragging: saying to the Russians that he doesn’t care if they know. He’s certainly bragging to every other country that is paying attention to his remarks. (He may be bluffing, of course, hoping to convince others that the US has intelligence capabilities it doesn’t.)

What happens when intelligence agencies go to war with each other and don’t tell the rest of us? I think there’s something going on between the US and Russia that the public is just seeing pieces of. We have no idea why, or where it will go next, and can only speculate.

This essay previously appeared on Lawfare.com.

Why GPL Compliance Education Materials Should Be Free as in Freedom

Post Syndicated from Bradley M. Kuhn original http://ebb.org/bkuhn/blog/2017/04/25/liberate-compliance-tutorials.html

[ This blog was crossposted on Software Freedom Conservancy’s website. ]

I am honored to be a co-author and editor-in-chief of the most comprehensive, detailed, and complete guide on matters related to compliance of copyleft software licenses such as the GPL. This book, Copyleft and the GNU General Public License: A Comprehensive Tutorial and Guide (which we often call the Copyleft Guide for short), is 155 pages filled with useful material to help everyone understand copyleft licenses for software, how they work, and how to comply with them properly. It is the only document to fully incorporate esoteric material such as the FSF’s famous GPLv3 rationale documents directly alongside practical advice, such as the pristine example, which is the only freely published compliance analysis of a real product on the market. The document explains in great detail how that product manufacturer made good choices to comply with the GPL. The reader learns by both real-world example and abstract explanation.

However, the most important fact about the Copyleft Guide is not its useful and engaging content. More importantly, the license of this book gives freedom to its readers in the same way the license of the copylefted software does. Specifically, we chose the Creative Commons Attribution-ShareAlike 4.0 license (CC BY-SA) for this work. We believe that not just software, but any generally useful technical information that teaches people should be freely sharable and modifiable by the general public.

The reasons these freedoms are necessary seem so obvious that I’m surprised I need to state them. Companies who want to build internal training courses on copyleft compliance for their employees need to modify the materials for that purpose. They then need to be able to freely distribute them to employees and contractors for maximum effect. Furthermore, like all documents and software, there are always “bugs”, which (in the case of written prose) usually means there are sections that fail to communicate to maximum effect. Those who find better ways to express the ideas need the ability to propose patches and write improvements. Perhaps most importantly, everyone who teaches should avoid NIH syndrome. Education and science work best when we borrow and share (with proper license-compliant attribution, of course!) the best material that others develop, and augment our works by incorporating them.

These reasons are akin to those that led Richard M. Stallman to write his seminal essay, Why Software Should Be Free. Indeed, if you reread that essay now — as I just did — you’ll see that much of the damage to the advancement of software that RMS documents in that essay, and many of the same problems, also occur in the world of tutorial documentation about FLOSS licensing. As too often happens in the Open Source community, though, folks seek ways to proprietarize, for profit, any copyrighted work that doesn’t already have a copyleft license attached. In the field of copyleft compliance education, we see the same behavior: organizations who wish to control the dialogue and profit from selling compliance education seek to proprietarize the meta-material of compliance education, rather than sharing freely like the software itself. This yields an ironic exploitation, since the copyleft license documented therein exists as a strategy to assure the freedom to share knowledge. These educators tell their audiences with a straight face: Sure, the software is free as in freedom, but if you want to learn how its license works, you have to license our proprietary materials! This behavior uses legal controls to curtail the sharing of knowledge, limits the advancement and improvement of those tutorials, and emboldens silos of know-how that only wealthy corporations have the resources to access and afford. The educational dystopia that these organizations create is precisely what I sought to prevent by advocating for software freedom for so long.

While Conservancy’s primary job is providing non-profit infrastructure for Free Software projects, we also do a bit of license compliance work. But we practice what we preach: we release all the educational materials that we produce as part of the Copyleft Guide project under CC BY-SA. Other Open Source organizations are currently hypocrites on this point; they tout the values of openness and sharing of knowledge through software, but they take their tutorial materials and lock them up under proprietary licenses. I hereby publicly call on such organizations (including but not limited to the Linux Foundation) to license materials such as those under CC BY-SA.

I did not make this public call for liberation of such materials without first trying friendly diplomacy. Conservancy has been in talks with the individuals and staff who produce these materials for some time. We urged them to join the Free Software community and share their materials under free licenses. We even offered volunteer time to help them improve those materials if they would simply license them freely. After two years of that effort, it’s now abundantly clear that public pressure is the only force that might work [0]. Ultimately, like all proprietary businesses, the training divisions of the Linux Foundation and other entities in the compliance industrial complex (such as Black Duck) realize they can make much more revenue by making materials proprietary and choosing legal restrictions that forbid their students from sharing and improving the materials after they complete the course. While the reality of this impasse regarding freely licensing these materials is probably an obvious outcome, multiple sources inside these organizations have also confirmed for me that liberation of the materials for the good of the general public won’t happen without a major paradigm shift — specifically because such educational freedom will reduce the revenue stream around those materials.

Of course, I can attest first-hand that freely liberating tutorial materials curtails revenue. Karen Sandler and I have regularly taught courses on copyleft licensing based on the freely available materials for a few years — most recently in January 2017 at LinuxConf Australia, and in a few weeks at OSCON. These conferences do kindly cover our travel expenses to attend and teach the tutorial, but compliance education is not a revenue stream for Conservancy. While, in an ideal world, we’d get revenue from education to fund our other important activities, we believe that there is value in doing this education as currently funded by our individual Supporters; these education efforts fit with our charitable mission to promote the public good. We furthermore don’t believe that locking up the materials and refusing to share them with others fits a mission of software freedom, so we never considered that a viable option. Finally, given the institutionally-backed FUD that we continue to witness, we seek to draw specific attention to the fundamental difference in approach that Conservancy (as a charity) takes toward this compliance education work. (My recent talk on compliance, covered on LWN, includes some points on that matter, if you’d like further reading.)


[0] One notable exception to these efforts was the success of my colleague, Karen Sandler (and others), in convincing the OpenChain project to choose CC-0 licensing. However, OpenChain is not officially part of the LF training curriculum to my knowledge, and if it is, it can of course be proprietarized therein, since CC-0 is not a copyleft license.

Incident Response as "Hand-to-Hand Combat"

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2017/04/incident_respon_1.html

NSA Deputy Director Richard Ledgett described a 2014 Russian cyberattack against the US State Department as “hand-to-hand” combat:

“It was hand-to-hand combat,” said NSA Deputy Director Richard Ledgett, who described the incident at a recent cyber forum, but did not name the nation behind it. The culprit was identified by other current and former officials. Ledgett said the attackers’ thrust-and-parry moves inside the network while defenders were trying to kick them out amounted to “a new level of interaction between a cyber attacker and a defender.”

[…]

Fortunately, Ledgett said, the NSA, whose hackers penetrate foreign adversaries’ systems to glean intelligence, was able to spy on the attackers’ tools and tactics. “So we were able to see them teeing up new things to do,” Ledgett said. “That’s a really useful capability to have.”

I think this is the first public admission that we spy on foreign governments’ cyberwarriors for defensive purposes. He’s right: being able to spy on the attackers’ networks and see what they’re doing before they do it is a very useful capability. It’s something that was first exposed by the Snowden documents: that the NSA spies on enemy networks for defensive purposes.

What’s interesting is that another country first found out about the intrusion, and that they also have offensive capabilities inside Russia’s cyberattack units:

The NSA was alerted to the compromises by a Western intelligence agency. The ally had managed to hack not only the Russians’ computers, but also the surveillance cameras inside their workspace, according to the former officials. They monitored the hackers as they maneuvered inside the U.S. systems and as they walked in and out of the workspace, and were able to see faces, the officials said.

There’s a myth that it’s hard for the US to attribute these sorts of cyberattacks. It used to be, but for the US — and other countries with these kinds of intelligence-gathering capabilities — attribution is not hard. It’s not fast, which is its own problem, and of course it’s not perfect; but it’s not hard.

The CIA’s "Development Tradecraft DOs and DON’Ts"

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2017/03/the_cias_develo.html

Useful best practices for malware writers, courtesy of the CIA. Seems like a lot of good advice.

General:

  • DO obfuscate or encrypt all strings and configuration data that directly relate to tool functionality. Consideration should be made to also only de-obfuscating strings in-memory at the moment the data is needed. When a previously de-obfuscated value is no longer needed, it should be wiped from memory.

    Rationale: String data and/or configuration data is very useful to analysts and reverse-engineers.

  • DO NOT decrypt or de-obfuscate all string data or configuration data immediately upon execution.

    Rationale: Raises the difficulty for automated dynamic analysis of the binary to find sensitive data.

  • DO explicitly remove sensitive data (encryption keys, raw collection data, shellcode, uploaded modules, etc) from memory as soon as the data is no longer needed in plain-text form. DO NOT RELY ON THE OPERATING SYSTEM TO DO THIS UPON TERMINATION OF EXECUTION.

    Rationale: Raises the difficulty for incident response and forensics review.

  • DO utilize a deployment-time unique key for obfuscation/de-obfuscation of sensitive strings and configuration data.

    Rationale: Raises the difficulty of analysis of multiple deployments of the same tool.

  • DO strip all debug symbol information, manifests(MSVC artifact), build paths, developer usernames from the final build of a binary.

    Rationale: Raises the difficulty for analysis and reverse-engineering, and removes artifacts used for attribution/origination.

  • DO strip all debugging output (e.g. calls to printf(), OutputDebugString(), etc) from the final build of a tool.

    Rationale: Raises the difficulty for analysis and reverse-engineering.

  • DO NOT explicitly import/call functions that is not consistent with a tool’s overt functionality (i.e. WriteProcessMemory, VirtualAlloc, CreateRemoteThread, etc – for binary that is supposed to be a notepad replacement).

    Rationale: Lowers potential scrutiny of binary and slightly raises the difficulty for static analysis and reverse-engineering.

  • DO NOT export sensitive function names; if having exports are required for the binary, utilize an ordinal or a benign function name.

    Rationale: Raises the difficulty for analysis and reverse-engineering.

  • DO NOT generate crashdump files, coredump files, “Blue” screens, Dr Watson or other dialog pop-ups and/or other artifacts in the event of a program crash. DO attempt to force a program crash during unit testing in order to properly verify this.

    Rationale: Avoids suspicion by the end user and system admins, and raises the difficulty for incident response and reverse-engineering.

  • DO NOT perform operations that will cause the target computer to be unresponsive to the user (e.g. CPU spikes, screen flashes, screen “freezing”, etc).

    Rationale: Avoids unwanted attention from the user or system administrator to tool’s existence and behavior.

  • DO make all reasonable efforts to minimize binary file size for all binaries that will be uploaded to a remote target (without the use of packers or compression). Ideal binary file sizes should be under 150KB for a fully featured tool.

    Rationale: Shortens overall “time on air” not only to get the tool on target, but to time to execute functionality and clean-up.

  • DO provide a means to completely “uninstall”/”remove” implants, function hooks, injected threads, dropped files, registry keys, services, forked processes, etc whenever possible. Explicitly document (even if the documentation is “There is no uninstall for this “) the procedures, permissions required and side effects of removal.

    Rationale: Avoids unwanted data left on target. Also, proper documentation allows operators to make better operational risk assessment and fully understand the implications of using a tool or specific feature of a tool.

  • DO NOT leave dates/times such as compile timestamps, linker timestamps, build times, access times, etc. that correlate to general US core working hours (i.e. 8am-6pm Eastern time)

    Rationale: Avoids direct correlation to origination in the United States.

  • DO NOT leave data in a binary file that demonstrates CIA, USG, or its witting partner companies involvement in the creation or use of the binary/tool.

    Rationale: Attribution of binary/tool/etc by an adversary can cause irreversible impacts to past, present and future USG operations and equities.

  • DO NOT have data that contains CIA and USG cover terms, compartments, operation code names or other CIA and USG specific terminology in the binary.

    Rationale: Attribution of binary/tool/etc by an adversary can cause irreversible impacts to past, present and future USG operations and equities.

  • DO NOT have “dirty words” (see dirty word list – TBD) in the binary.

    Rationale: Dirty words, such as hacker terms, may cause unwarranted scrutiny of the binary file in question.

Networking:

  • DO use end-to-end encryption for all network communications. NEVER use networking protocols which break the end-to-end principle with respect to encryption of payloads.

    Rationale: Stifles network traffic analysis and avoids exposing operational/collection data.

  • DO NOT solely rely on SSL/TLS to secure data in transit.

    Rationale: Numerous man-in-middle attack vectors and publicly disclosed flaws in the protocol.

  • DO NOT allow network traffic, such as C2 packets, to be re-playable.

    Rationale: Protects the integrity of operational equities.

  • DO use ITEF RFC compliant network protocols as a blending layer. The actual data, which must be encrypted in transit across the network, should be tunneled through a well known and standardized protocol (e.g. HTTPS)

    Rationale: Custom protocols can stand-out to network analysts and IDS filters.

  • DO NOT break compliance of an RFC protocol that is being used as a blending layer. (i.e. Wireshark should not flag the traffic as being broken or mangled)

    Rationale: Broken network protocols can easily stand-out in IDS filters and network analysis.

  • DO use variable size and timing (aka jitter) of beacons/network communications. DO NOT predicatively send packets with a fixed size and timing.

    Rationale: Raises the difficulty of network analysis and correlation of network activity.

  • DO proper cleanup of network connections. DO NOT leave around stale network connections.

    Rationale: Raises the difficulty of network analysis and incident response.

Disk I/O:

  • DO explicitly document the “disk forensic footprint” that could be potentially created by various features of a binary/tool on a remote target.

    Rationale: Enables better operational risk assessments with knowledge of potential file system forensic artifacts.

  • DO NOT read, write and/or cache data to disk unnecessarily. Be cognizant of 3rd party code that may implicitly write/cache data to disk.

    Rationale: Lowers potential for forensic artifacts and potential signatures.

  • DO NOT write plain-text collection data to disk.

    Rationale: Raises difficulty of incident response and forensic analysis.

  • DO encrypt all data written to disk.

    Rationale: Disguises intent of file (collection, sensitive code, etc) and raises difficulty of forensic analysis and incident response.

  • DO utilize a secure erase when removing a file from disk that wipes at a minimum the file’s filename, datetime stamps (create, modify and access) and its content. (Note: The definition of “secure erase” varies from filesystem to filesystem, but at least a single pass of zeros of the data should be performed. The emphasis here is on removing all filesystem artifacts that could be useful during forensic analysis)

    Rationale: Raises difficulty of incident response and forensic analysis.

  • DO NOT perform Disk I/O operations that will cause the system to become unresponsive to the user or alerting to a System Administrator.

    Rationale: Avoids unwanted attention from the user or system administrator to tool’s existence and behavior.

  • DO NOT use a “magic header/footer” for encrypted files written to disk. All encrypted files should be completely opaque data files.

    Rationale: Avoids signature of custom file format’s magic values.

  • DO NOT use hard-coded filenames or filepaths when writing files to disk. This must be configurable at deployment time by the operator.

    Rationale: Allows operator to choose the proper filename that fits with in the operational target.

  • DO have a configurable maximum size limit and/or output file count for writing encrypted output files.

    Rationale: Avoids situations where a collection task can get out of control and fills the target’s disk; which will draw unwanted attention to the tool and/or the operation.

Dates/Time:

  • DO use GMT/UTC/Zulu as the time zone when comparing date/time.

    Rationale: Provides consistent behavior and helps ensure “triggers/beacons/etc” fire when expected.

  • DO NOT use US-centric timestamp formats such as MM-DD-YYYY. YYYYMMDD is generally preferred.

    Rationale: Maintains consistency across tools, and avoids associations with the United States.

PSP/AV:

  • DO NOT assume a “free” PSP product is the same as a “retail” copy. Test on all SKUs where possible.

    Rationale: While the PSP/AV product may come from the same vendor and appear to have the same features despite having different SKUs, they are not. Test on all SKUs where possible.

  • DO test PSPs with live (or recently live) internet connection where possible. NOTE: This can be a risk vs gain balance that requires careful consideration and should not be haphazardly done with in-development software. It is well known that PSP/AV products with a live internet connection can and do upload samples software based varying criteria.

    Rationale: PSP/AV products exhibit significant differences in behavior and detection when connected to the internet vise not.

Encryption: NOD publishes a Cryptography standard: “NOD Cryptographic Requirements v1.1 TOP SECRET.pdf“. Besides the guidance provided here, the requirements in that document should also be met.

The crypto requirements are complex and interesting. I’ll save commenting on them for another post.

News article.

A note about "false flag" operations

Post Syndicated from Robert Graham original http://blog.erratasec.com/2017/03/a-note-about-false-flag-operations.html

There’s nothing in the CIA #Vault7 leaks that calls into question strong attribution, like Russia being responsible for the DNC hacks. On the other hand, it does call into question weak attribution, like North Korea being responsible for the Sony hacks.

There are really two types of attribution. Strong attribution is a preponderance of evidence that would convince an unbiased, skeptical expert. Weak attribution is flimsy evidence that confirms what people are predisposed to believe.

The DNC hacks have strong evidence pointing to Russia. Not only does all the malware check out, but also other, harder to “false flag” bits, like active command-and-control servers. A serious operator could still false-flag this in theory, if only by bribing people in Russia, but nothing in the CIA dump hints at this.

The Sony hacks have weak evidence pointing to North Korea. One of the items was the use of the RawDisk driver, used both in malware attributed to North Korea and the Sony attacks. This was described as “flimsy” at the time [*]. The CIA dump [*] demonstrates that indeed it’s flimsy — as apparently CIA malware also uses the RawDisk code.

In the coming days, biased partisans are going to seize on the CIA leaks as proof of “false flag” operations, calling into question Russian hacks. No, this isn’t valid. We experts in the industry criticized “malware techniques” as flimsy attribution, long before the Sony attack, and long before the DNC hacks. All the CIA leaks do is prove we were right. On the other hand, the DNC hack attribution is based on more than just this, so nothing in the CIA leaks calls into question that attribution.

Some comments on the Wikileaks CIA/#vault7 leak

Post Syndicated from Robert Graham original http://blog.erratasec.com/2017/03/some-comments-on-wikileaks-ciavault7.html

I thought I’d write up some notes about the Wikileaks CIA “#vault7” leak. This post will be updated frequently over the next 24 hours.

The CIA didn’t remotely hack a TV. The docs are clear that they can update the software running on the TV using a USB drive. There’s no evidence of them doing so remotely over the Internet. If you aren’t afraid of the CIA breaking in and installing a listening device, then you shouldn’t be afraid of the CIA installing listening software.

The CIA didn’t defeat Signal/WhatsApp encryption. The CIA has some exploits for Android/iPhone. If they can get on your phone, then of course they can record audio and screenshots. Technically, this bypasses/defeats encryption — but such phrases used by Wikileaks are highly misleading, since nothing related to Signal/WhatsApp is happening. What’s happening is the CIA is bypassing/defeating the phone. Sometimes. If they’ve got an exploit for it, or can trick you into installing their software.

There’s no overlap or turf war with the NSA. The NSA does “signals intelligence”, so they hack radios and remotely across the Internet. The CIA does “human intelligence”, so they hack locally, with a human. The sort of thing they do is bribe, blackmail, or bedazzle some human “asset” (like a technician in a nuclear plant) into sticking a USB drive into a slot. All the various military, law enforcement, and intelligence agencies have hacking groups to help them do their own missions.

The CIA isn’t more advanced than the NSA. Most of this dump is child’s play, simply malware/trojans cobbled together from bits found on the Internet. Sometimes they buy more advanced stuff from contractors, or get stuff shared from the NSA. Technologically, they are far behind the NSA in sophistication and technical expertise.

The CIA isn’t hoarding 0days. For one thing, few 0days were mentioned at all. The CIA’s techniques rely upon straightforward hacking, not super-secret 0day hacking. Second of all, they aren’t keeping 0days back in a vault somewhere — if they have 0days, they are using them.

The VEP process is nonsense. Activists keep mentioning the “vulnerability equities process”, in which all those interested in 0days within the government have a say in what happens to them, with the eventual goal that they be disclosed to vendors. The VEP is nonsense. The activist argument is nonsense. As far as I can tell, the VEP is designed as busy work to keep people away from those who really use 0days, such as the NSA and the CIA. If they spend millions of dollars buying 0days because they have that value in intelligence operations, they aren’t going to destroy that value by disclosing to a vendor. If the VEP forces disclosure, disclosure still won’t happen; the NSA will simply stop buying vulns.

But they’ll have to disclose the 0days. Any 0days that were leaked to Wikileaks are, of course, no longer secret. Thus, while this leak isn’t an argument for unilateral disarmament in cyberspace, the CIA will have to disclose to vendors the vulns that are now in Russian hands, so that they can be fixed.

There are no false flags. In several places, the CIA talks about making sure that what they do isn’t so unique, so it can’t be attributed to them. However, Wikileaks’s press release hints that the “UMBRAGE” program is deliberately stealing techniques from Russia to use as a false-flag operation. This is nonsense. For example, the DNC hack attribution was based on live command-and-control servers simultaneously used against different Russian targets, not on a few snippets of code. [More here]

This hurts the CIA a lot. Already, one AV researcher has told me that a virus they once suspected came from the Russians or Chinese can now be attributed to the CIA, as it perfectly matches the description of something in the leak. We can develop anti-virus and intrusion-detection signatures based on this information that will defeat much of what we read in these documents. This would put a multi-year delay in the CIA’s development efforts. Plus, the agency will now go on a witch-hunt looking for the leaker, which will erode morale. Update: Three extremely smart and knowledgeable people who I respect disagree, claiming it won’t hurt the CIA a lot. I suppose I’m focusing on “hurting the cyber abilities” of the CIA, not the CIA as a whole, which is mostly non-cyber in function.

The CIA is not cutting edge. A few days ago, Hak5 started selling “BashBunny”, a USB hacking tool more advanced than the USB tools in the leak. The CIA seems to get most of their USB techniques from open-source projects, such as Travis Goodspeed’s “GoodFET” project.

The CIA isn’t spying on us. Snowden revealed how the NSA was surveilling all Americans. Nothing like that appears in the CIA dump. It’s all legitimate spy stuff (assuming you think spying on foreign adversaries is legitimate).

Update #2: How is hacking cars and phones not SIGINT (which is the NSA’s turf)? The answer is via physical access. For example, they might have a device that plugs into the OBD-II port on the car that quickly updates the firmware of the brakes. Think of it as normal spy activity (e.g. cutting a victim’s brakes), but now with cyber.

Update #3: Apple iPhone. My vague sense is that CIA is more concerned about decrypting iPhones they get physical access to, rather than remotely hacking them and installing malware. CIA is HUMINT and covert ops, meaning they’ll punch somebody in the face, grab their iPhone, and run, then take it back to their lab and decrypt it.


WikiLeaks Releases CIA Hacking Tools

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2017/03/wikileaks_relea.html

WikiLeaks just released a cache of 8,761 classified CIA documents from 2012 to 2016, including details of its offensive Internet operations.

I have not read through any of them yet. If you see something interesting, tell us in the comments.

EDITED TO ADD: There’s a lot in here. Many of the hacking tools are redacted, with the tar files and zip archives replaced with messages like:

::: THIS ARCHIVE FILE IS STILL BEING EXAMINED BY WIKILEAKS. :::

::: IT MAY BE RELEASED IN THE NEAR FUTURE. WHAT FOLLOWS IS :::
::: AN AUTOMATICALLY GENERATED LIST OF ITS CONTENTS: :::

Hopefully we’ll get them eventually. The documents say that the CIA — and other intelligence services — can bypass Signal, WhatsApp and Telegram. It seems to be by hacking the end-user devices and grabbing the traffic before and after encryption, not by breaking the encryption.

New York Times article.

EDITED TO ADD: Some details from The Guardian:

According to the documents:

  • CIA hackers targeted smartphones and computers.
  • The Center for Cyber Intelligence is based at the CIA headquarters in Virginia but it has a second covert base in the US consulate in Frankfurt which covers Europe, the Middle East and Africa.
  • A programme called Weeping Angel describes how to attack a Samsung F8000 TV set so that it appears to be off but can still be used for monitoring.

I just noticed this from the WikiLeaks page:

Recently, the CIA lost control of the majority of its hacking arsenal including malware, viruses, trojans, weaponized “zero day” exploits, malware remote control systems and associated documentation. This extraordinary collection, which amounts to more than several hundred million lines of code, gives its possessor the entire hacking capacity of the CIA. The archive appears to have been circulated among former U.S. government hackers and contractors in an unauthorized manner, one of whom has provided WikiLeaks with portions of the archive.

So it sounds like this cache of documents wasn’t taken from the CIA and given to WikiLeaks for publication, but has been passed around the community for a while — and incidentally some part of the cache was passed to WikiLeaks. So there are more documents out there, and others may release them in unredacted form.

Wired article. Slashdot thread. Two articles from the Washington Post.

EDITED TO ADD: This document talks about Comodo version 5.X and version 6.X. Version 6 was released in Feb 2013. Version 7 was released in Apr 2014. This gives us a time window of that page, and the cache in general. (WikiLeaks says that the documents cover 2013 to 2016.)

If these tools are a few years out of date, it’s similar to the NSA tools released by the “Shadow Brokers.” Most of us thought the Shadow Brokers were the Russians, specifically releasing older NSA tools that had diminished value as secrets. Could this be the Russians as well?

EDITED TO ADD: Nicholas Weaver comments.

EDITED TO ADD (3/8): These documents are interesting:

The CIA’s hand crafted hacking techniques pose a problem for the agency. Each technique it has created forms a “fingerprint” that can be used by forensic investigators to attribute multiple different attacks to the same entity.

This is analogous to finding the same distinctive knife wound on multiple separate murder victims. The unique wounding style creates suspicion that a single murderer is responsible. As soon one murder in the set is solved then the other murders also find likely attribution.

The CIA’s Remote Devices Branch‘s UMBRAGE group collects and maintains a substantial library of attack techniques ‘stolen’ from malware produced in other states including the Russian Federation.

With UMBRAGE and related projects the CIA cannot only increase its total number of attack types but also misdirect attribution by leaving behind the “fingerprints” of the groups that the attack techniques were stolen from.

UMBRAGE components cover keyloggers, password collection, webcam capture, data destruction, persistence, privilege escalation, stealth, anti-virus (PSP) avoidance and survey techniques.

This is being spun in the press as the CIA pretending to be Russia. I’m not convinced that the documents support these allegations. Can someone else look at the documents? I don’t like my conclusion that WikiLeaks is using this document dump as a way to push their own bias.

How Stack Overflow plans to survive the next DNS attack

Post Syndicated from Mark Henderson original http://blog.serverfault.com/2017/01/09/surviving-the-next-dns-attack/

Let’s talk about DNS. After all, what could go wrong? It’s just cache invalidation and naming things.

tl;dr

This blog post is about how Stack Overflow and the rest of the Stack Exchange network approach DNS:

  • By bench-marking different DNS providers and how we chose between them
  • By implementing multiple DNS providers
  • By deliberately breaking DNS to measure its impact
  • By validating our assumptions and testing implementations of the DNS standard

The good stuff in this post is in the middle, so feel free to scroll down to “The Dyn Attack” if you want to get straight into the meat and potatoes of this blog post.

The Domain Name System

DNS had its moment in the spotlight in October 2016, with a major Distributed Denial of Service (DDoS) attack launched against Dyn, which affected the ability of Internet users to connect to some of their favourite websites, such as Twitter, CNN, imgur, Spotify, and literally thousands of other sites.

But for most systems administrators or website operators, DNS is mostly kept in a little black box, outsourced to a 3rd party, and mostly forgotten about. And, for the most part, this is the way it should be. But as you start to grow to 1.3+ billion pageviews a month with a website where performance is a feature, every little bit matters.

In this post, I’m going to explain some of the decisions we’ve made around DNS in the past, and where we’re going with it in the future. I will eschew deep technical details and gloss over low-level DNS implementation in favour of the broad strokes.

In the beginning

So first, a bit of history: In the beginning, we ran our own DNS on-premises using artisanally crafted zone files with BIND. It was fast enough when we were doing only a few hundred million hits a month, but eventually hand-crafted zonefiles were too much hassle to maintain reliably. When we moved to Cloudflare as our CDN, whose service is intimately coupled with DNS, we demoted our BIND boxes out of production and handed DNS off to Cloudflare.

The search for a new provider

Fast forward to early 2016 and we moved our CDN to Fastly. Fastly doesn’t provide DNS service, so we were back on our own in that regard and our search for a new DNS provider began. We made a list of every DNS provider we could think of, and ended up with a shortlist of 10:

  • Dyn
  • NS1
  • Amazon Route 53
  • Google Cloud DNS
  • Azure DNS (beta)
  • DNSimple
  • Godaddy
  • EdgeCast (Verizon)
  • Hurricane Electric
  • DNS Made Easy

From this list of 10 providers, we did our initial investigations into their service offerings, and started eliminating services that were either not suited to our needs, outrageously expensive, had insufficient SLAs, or didn’t offer services that we required (such as a fully featured API). Then we started performance testing. We did this by embedding a hidden iFrame on 5% of the visitors to stackoverflow.com, which forced a request to a different DNS provider. We did this for each provider until we had some pretty solid performance numbers.
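If you want a rough, server-side sanity check of the same kind of comparison, you can time queries against each candidate provider's nameservers directly. Below is a minimal sketch using dnspython; the nameserver IPs and domain are placeholders rather than our actual test setup, and it only measures latency from a single vantage point, which is exactly why the real-user iframe tests were necessary.

```python
# Rough DNS latency check against specific nameservers.
# Requires dnspython 2.x and Python 3.8+. The IPs below are placeholders;
# substitute the nameservers each provider assigns to your zone.
import time
import statistics
import dns.message
import dns.query

PROVIDERS = {
    "provider-a": "198.51.100.10",   # hypothetical nameserver IP
    "provider-b": "203.0.113.20",    # hypothetical nameserver IP
}
QNAME = "example.com."               # placeholder zone hosted at both providers

def time_queries(server_ip, qname, runs=20):
    """Return per-query latencies in milliseconds."""
    latencies = []
    for _ in range(runs):
        query = dns.message.make_query(qname, "A")
        start = time.perf_counter()
        dns.query.udp(query, server_ip, timeout=2)
        latencies.append((time.perf_counter() - start) * 1000)
    return latencies

for name, ip in PROVIDERS.items():
    samples = time_queries(ip, QNAME)
    q1 = statistics.quantiles(samples, n=4)[0]
    print(f"{name}: 1st quartile {q1:.1f} ms, median {statistics.median(samples):.1f} ms")
```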

Using some basic analytics, we were able to measure the real-world performance, as seen by our real-world users, broken down into geographical area. We built some box plots based on these tests which allowed us to visualise the different impact each provider had.

If you don’t know how to interpret a boxplot, here’s a brief primer for you. For the data nerds, these were generated with R’s standard boxplot functions, which means the upper and lower whiskers are min(max(x), Q_3 + 1.5 * IQR) and max(min(x), Q_1 - 1.5 * IQR), where IQR = Q_3 - Q_1.
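To make that whisker formula concrete, here is the same calculation in Python against a toy latency sample (not our measurement data); it mirrors the formula quoted above rather than any particular plotting library's internals.

```python
# Compute boxplot statistics following the whisker formula quoted above:
# upper = min(max(x), Q3 + 1.5*IQR), lower = max(min(x), Q1 - 1.5*IQR).
import numpy as np

latencies_ms = np.array([12, 15, 16, 18, 21, 25, 32, 40, 55, 310])  # toy data

q1, q3 = np.percentile(latencies_ms, [25, 75])
iqr = q3 - q1
upper_whisker = min(latencies_ms.max(), q3 + 1.5 * iqr)
lower_whisker = max(latencies_ms.min(), q1 - 1.5 * iqr)

print(f"Q1={q1:.1f}  median={np.median(latencies_ms):.1f}  Q3={q3:.1f}")
print(f"whiskers: {lower_whisker:.1f}..{upper_whisker:.1f}")
# Points beyond the whiskers (here, the 310 ms sample) are drawn as outliers.
```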

These are the results of our tests as seen by our users in the United States:

DNS Performance in the United States

You can see that Hurricane Electric had a quarter of requests return in < 16ms and a median of 32ms, with the three “cloud” providers (Azure, Google Cloud DNS and Route 53) being slightly slower (around a 24ms first quartile and 45ms median), and DNS Made Easy coming in 2nd place (20ms first quartile, 39ms median).

You might wonder why the scale on that chart goes all the way to 700ms when the whiskers go nowhere near that high. This is because we have a worldwide audience, so just looking at data from the United States is not sufficient. If we look at data from New Zealand, we see a very different story:

DNS Performance in New Zealand

Here you can see that Route 53, DNS Made Easy and Azure all have healthy 1st quartiles, but Hurricane Electric and Google have very poor 1st quartiles. Try to remember this, as it becomes important later on.

We also have Stack Overflow in Portuguese, so let’s check the performance from Brazil:

DNS Performance in Brazil

Here we can see Hurricane Electric, Route 53 and Azure being favoured, with Google and DNS Made Easy being slower.

So how do you reach a decision about which DNS provider to choose, when your main goal is performance? It’s difficult, because regardless of which provider you end up with, you are going to be choosing a provider that is sub-optimal for part of your audience.

You know what would be awesome? If we could have two DNS providers, each one servicing the areas that they do best! Thankfully this is something that is possible to implement with DNS. However, time was short, so we had to put our dual-provider design on the back-burner and just go with a single provider for the time being.

Our initial rollout of DNS was using Amazon Route 53 as our provider: they had acceptable performance figures over a large number of regions and had very effective pricing (on that note Route 53, Azure DNS, and Google Cloud DNS are all priced identically for basic DNS services).

The Dyn Attack

Roll forwards to October 2016. Route 53 had proven to be a stable, fast, and cost-effective DNS provider. We still had dual DNS providers on our backlog of projects, but like a lot of good ideas it got put on the back-burner until we had more time.

Then the Internet ground to a halt. The DNS provider Dyn had come under attack, knocking a large number of authoritative DNS servers off the Internet, and causing widespread issues with connecting to major websites. All of a sudden DNS had our attention again. Stack Overflow and Stack Exchange were not affected by the Dyn outage, but this was pure luck.

We knew if a DDoS of this scale happened to our DNS provider, the solution would be to have two completely separate DNS providers. That way, if one provider gets knocked off the Internet, we still have a fully functioning second provider who can pick up the slack. But there were still questions to be answered and assumptions to be validated:

  • What is the performance impact for our users in having multiple DNS providers, when both providers are working properly?
  • What is the performance impact for our users if one of the providers is offline?
  • What is the best number of nameservers to be using?
  • How are we going to keep our DNS providers in sync?

These were pretty serious questions – for some of them we had hypotheses that needed to be checked, and others were answered in the DNS standards, but we know from experience that DNS providers in the wild do not always obey those standards.

What is the performance impact for our users in having multiple DNS providers, when both providers are working properly?

This one should be fairly easy to test. We’ve already done it once, so let’s just do it again. We fired up our tests, as we did in early 2016, but this time we specified two DNS providers:

  • Route 53 & Google Cloud
  • Route 53 & Azure DNS
  • Route 53 & Our internal DNS

We did this simply by listing Name Servers from both providers in our domain registration (and obviously we set up the same records in the zones for both providers).
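If you want to verify a delegation like this yourself, a short dnspython sketch (using a placeholder domain, not our zone) can list the NS set and confirm that every listed server actually answers authoritatively:

```python
# Check which nameservers a domain is delegated to, then confirm that each
# listed server answers authoritatively for the zone. Requires dnspython 2.x.
import dns.flags
import dns.message
import dns.query
import dns.resolver

DOMAIN = "example.com."   # placeholder; substitute your zone

# 1. Fetch the public NS set as resolvers see it.
ns_names = sorted(rr.target.to_text() for rr in dns.resolver.resolve(DOMAIN, "NS"))
print("Delegated nameservers:", ns_names)

# 2. Query each nameserver directly and check the authoritative-answer flag.
for ns in ns_names:
    ns_ip = dns.resolver.resolve(ns, "A")[0].to_text()
    response = dns.query.udp(dns.message.make_query(DOMAIN, "A"), ns_ip, timeout=2)
    authoritative = bool(response.flags & dns.flags.AA)
    print(f"{ns} ({ns_ip}): answers={len(response.answer)} authoritative={authoritative}")
```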

Running with Route 53 and Google or Azure was fairly common sense – Google and Azure had good coverage of the regions that Route 53 performed poorly in. Their pricing is identical to Route 53, which would make forecasting for the budget easy. As a third option, we decided to see what would happen if we took our formerly demoted, on-premises BIND servers and put them back into production as one of the providers. Let’s look at the data for the three regions from before: United States, New Zealand and Brazil:

United States
DNS Performance for dual providers in the United States

New Zealand
DNS Performance for dual providers in New Zealand

Brazil

DNS Performance for dual providers in Brazil

There is probably one thing you’ll notice immediately from these boxplots, but there’s also another not-so-obvious change:

  1. Azure is not in there (the obvious one)
  2. Our 3rd quartiles are measurably slower (the not-so-obvious one).

Azure

Azure has a fatal flaw in their DNS offering, as of the writing of this blog post. They do not permit the modification of the NS records in the apex of your zone:

You cannot add to, remove, or modify the records in the automatically created NS record set at the zone apex (name = “@”). The only change that’s permitted is to modify the record set TTL.

These NS records are what your DNS provider says are authoritative DNS servers for a given domain. It’s very important that they are accurate and correct, because they will be cached by clients and DNS resolvers and are more authoritative than the records provided by your registrar.

Without going too much into the actual specifics of how DNS caching and NS records work (it would take me another 2,500 words to describe this in detail), what would happen is this: whichever DNS provider you contact first becomes the only DNS provider you can contact for that domain until your DNS cache expires. If Azure is contacted first, then only Azure’s nameservers will be cached and used. This defeats the purpose of having multiple DNS providers: if the provider you’ve landed on (a roughly 50:50 chance) goes offline, you will have no other DNS provider to fall back to.

So until Azure adds the ability to modify the NS records in the apex of a zone, they’re off the table for a dual-provider setup.

The 3rd quartile

What the third quartile represents here is the impact of latency on DNS. You’ll notice that in the results for ExDNS (which is the internal name for our on-premises BIND servers) the box plot is much taller than the others. This is because those servers are located in New Jersey and Colorado – far, far away from where most of our visitors come from. So as expected, a service with only two points of presence in a single country (as opposed to dozens worldwide) performs very poorly for a lot of users.

Performance conclusions

So our choices were narrowed for us to Route 53 and Google Cloud, thanks to Azure’s lack of ability to modify critical NS records. Thankfully, we have the data to back up the fact that Route 53 combined with Google is a very acceptable combination.

Remember earlier, when I said that the performance of New Zealand was important? This is because Route 53 performed well, but Google Cloud performed poorly in that region. But look at the chart again. Don’t scroll up, I’ll show you another chart here:

Comparison for DNS performance data in New Zealand between single and dual providers

See how Google on its own performed very poorly in NZ (its 1st quartile is 164ms versus 27ms for Route 53)? However, when you combine Google and Route 53 together, the performance basically stays the same as when there was just Route 53.

Why is this? Well, it’s due to a technique called Smoothed Round Trip Time (SRTT). Basically, DNS resolvers (namely certain versions of BIND and PowerDNS) keep track of which DNS servers respond faster, and weight queries towards those servers. This means that queries should be skewed towards the faster provider more often than the slower providers. There’s a nice presentation over here if you want to learn more about this. The short version is that if you have many DNS servers, DNS cache servers will favour the fastest ones. As a result, if one provider is fast in Auckland but slow in London, and another provider is the reverse, DNS cache servers in Auckland will favour the first provider and DNS cache servers in London will favour the other. This is a little-known feature of modern DNS servers, but our testing shows that enough ISPs support it that we are confident we can rely on it.
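As a rough illustration of the concept (not BIND's or PowerDNS's actual code), the selection logic looks something like this; the provider names and RTT values are made up, loosely echoing the New Zealand numbers above.

```python
# Toy model of smoothed-RTT nameserver selection: the resolver keeps an
# exponentially smoothed RTT estimate per server and prefers the fastest,
# while still occasionally probing the others so the estimates stay fresh.
import random

class SrttSelector:
    def __init__(self, servers, alpha=0.3, probe_rate=0.05):
        self.srtt = {s: None for s in servers}   # None = never measured
        self.alpha = alpha                       # smoothing factor
        self.probe_rate = probe_rate             # fraction of probe queries

    def pick(self):
        unmeasured = [s for s, rtt in self.srtt.items() if rtt is None]
        if unmeasured:
            return random.choice(unmeasured)
        if random.random() < self.probe_rate:
            return random.choice(list(self.srtt))
        return min(self.srtt, key=self.srtt.get)  # fastest known server

    def record(self, server, rtt_ms):
        prev = self.srtt[server]
        self.srtt[server] = rtt_ms if prev is None else (
            (1 - self.alpha) * prev + self.alpha * rtt_ms)

# Pretend provider A is 27 ms away and provider B is 164 ms away.
true_rtt = {"provider-a": 27.0, "provider-b": 164.0}
selector = SrttSelector(true_rtt)
counts = {s: 0 for s in true_rtt}
for _ in range(10_000):
    server = selector.pick()
    counts[server] += 1
    selector.record(server, random.gauss(true_rtt[server], 5))
print(counts)   # the vast majority of queries end up at provider-a
```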

What is the performance impact for our users if one of the providers is offline?

This is where having some on-premises DNS servers comes in very handy. What we can essentially do here is send a sample of our users to our on-premises servers, get a baseline performance measurement, then break one of the servers and run the performance measurements again. We can also measure in multiple places: We have our measurements as reported by our clients (what the end user actually experienced), and we can look at data from within our network to see what actually happened. For network analysis, we turned to our trusted network analysis tool, ExtraHop. This would allow us to look at the data on the wire, and get measurements from a broken DNS server (something you can’t do easily with a pcap on that server, because, you know. It’s broken).

Here’s what healthy performance looked like on the wire (as measured by ExtraHop), with two DNS servers, both of them fully operational, over a 24-hour period (this chart is additive for the two series):

DNS performance with two healthy name servers

Blue and brown are the two different, healthy DNS servers. As you can see, there’s a very even 50:50 split in request volume. Because both of the servers are located in the same datacenter, Smoothed Round Trip Time had no effect, and we had a nice even distribution – as we would expect.

Now, what happens when we take one of those DNS servers offline, to simulate a provider outage?

DNS performance with a broken nameserver

In this case, the blue DNS server was offline, and the brown DNS server was healthy. What we see here is that the blue, broken, DNS server received the same number of requests as it did when the DNS server was healthy, but the brown, healthy, DNS server saw twice as many requests. This is because those users who were hitting the broken server eventually retried their requests to the healthy server and started to favor it. So what does this look like in terms of actual client performance?

I’m only going to share one chart with you this time, because they were all essentially the same:

Comparison of healthy vs unhealthy DNS performance

What we see here is that a substantial number of our visitors saw a performance decrease. For some it was minor; for others, quite major. This is because the 50% of visitors who hit the faulty server needed to retry their request, and the amount of time it takes to retry that request seems to vary. You can see again a large increase in the long tail, which indicates clients who took over 300 milliseconds to retry their request.

What does this tell us?

What this means is that in the event of a DNS provider going offline, we need to pull that provider out of rotation to provide the best performance, but until we do, our users will still receive service; a non-trivial number of them will see a large performance impact.

What is the best number of nameservers to be using?

Based on the previous performance testing, we can assume that the worst-case number of requests a client may have to make is N/2+1, where N is the number of nameservers listed and half of them are unreachable. So if we list eight nameservers, with four from each provider, the client may potentially have to make 5 DNS requests before they finally get a successful answer (the four failed requests, plus a final successful one). A statistician better than I would be able to tell you the exact probabilities of each scenario you would face, but the short answer here is:

Four.

We felt that based on our use case, and the performance penalty we were willing to take, we would be listing a total of four nameservers – two from each provider. This may not be the right decision for those who have a web presence orders of magnitude larger than ours, but for comparison: Facebook provides two nameservers on IPv4 and two on IPv6, Twitter provides eight (four from Dyn and four from Route 53), and Google provides four.
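As a rough sanity check on that arithmetic, here is a small simulation; it assumes a simplistic client that tries nameservers in random order until one answers, which real resolvers only approximate.

```python
# Estimate how many attempts a client needs when one of two providers is
# down, as a function of how many nameservers are listed in total.
import random
import statistics

def attempts_needed(total_ns, dead_ns):
    """One simulated lookup: shuffle the NS list, count tries until a live one."""
    servers = [False] * dead_ns + [True] * (total_ns - dead_ns)
    random.shuffle(servers)
    return servers.index(True) + 1

for total in (4, 8):
    dead = total // 2                      # one of two providers offline
    trials = [attempts_needed(total, dead) for _ in range(100_000)]
    print(f"{total} nameservers listed: mean {statistics.mean(trials):.2f} attempts, "
          f"worst case {max(trials)}")
```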

How are we going to keep our DNS providers in sync?

DNS has built-in ways of keeping multiple servers in sync. You have zone transfers (IXFR, AXFR), which are usually triggered by a NOTIFY packet sent to all the servers listed as NS records in the zone. But these are not used in the wild very often, and have limited support from DNS providers. They also come with their own headaches, like maintaining an ACL IP whitelist covering hundreds of potential servers (all the different points of presence from multiple providers), none of which you control. You also lose the ability to audit who changed which record, as changes could be made on any given server.

So we built a tool to keep our DNS in sync. We actually built this tool years ago, once our artisanally crafted zone files became too troublesome to edit by hand. The details of this tool are out of scope for this blog post though. If you want to learn about it, keep an eye out around March 2017 as we plan to open-source it. The tool lets us describe the DNS zone data in one place and push it to many different DNS providers.
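Until you have such a tool, even a crude consistency check between providers will catch most drift. Here is a hedged sketch with dnspython; the provider nameserver hostnames and the record list are placeholders, not our configuration.

```python
# Compare the answers two providers give for the same records and flag drift.
# Requires dnspython 2.x; hostnames and record list below are placeholders.
import dns.resolver

PROVIDER_NS = {
    "provider-a": "ns-1.provider-a.example.",
    "provider-b": "ns-1.provider-b.example.",
}
RECORDS = [("www.example.com.", "A"), ("example.com.", "MX")]

def answers_from(ns_host, qname, rdtype):
    """Resolve qname/rdtype against one specific provider's nameserver."""
    resolver = dns.resolver.Resolver(configure=False)
    resolver.nameservers = [dns.resolver.resolve(ns_host, "A")[0].to_text()]
    return sorted(rr.to_text() for rr in resolver.resolve(qname, rdtype))

for qname, rdtype in RECORDS:
    results = {p: answers_from(ns, qname, rdtype) for p, ns in PROVIDER_NS.items()}
    status = "OK" if len({tuple(v) for v in results.values()}) == 1 else "MISMATCH"
    print(f"{status}  {qname} {rdtype}  {results}")
```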

So what did we learn?

The biggest takeaway from all of this is that even if you have multiple DNS servers, DNS is still a single point of failure if they are all with the same provider and that provider goes offline. Until the Dyn attack, this was pretty much “in theory” if you were using a large DNS provider, because until that first successful attack no large DNS provider had ever had an extended outage on all of its points of presence.

However, implementing multiple DNS providers is not entirely straightforward. There are performance considerations. You need to ensure that both of your zones are serving the same data. There can be such a thing as too many nameservers.

Lastly, we did all of this whilst following DNS best practices. We didn’t have to do any weird DNS trickery, or write our own DNS server to do non-standard things. When DNS was designed in 1987, I wonder if the authors knew the importance of what they were creating. I don’t know, but their design still stands strong and resilient today.

Attributions

  • Thanks to Camelia Nicollet for her work in R to produce the graphs in this blog post

Attributing the DNC Hacks to Russia

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2017/01/attributing_the_1.html

President Barack Obama’s public accusation of Russia as the source of the hacks in the US presidential election and the leaking of sensitive e-mails through WikiLeaks and other sources has opened up a debate on what constitutes sufficient evidence to attribute an attack in cyberspace. The answer is both complicated and inherently tied up in political considerations.

The administration is balancing political considerations and the inherent secrecy of electronic espionage with the need to justify its actions to the public. These issues will continue to plague us as more international conflict plays out in cyberspace.

It’s true that it’s easy for an attacker to hide who he is in cyberspace. We are unable to positively identify particular pieces of hardware and software around the world. We can’t verify the identity of someone sitting in front of a keyboard through computer data alone. Internet data packets don’t come with return addresses, and it’s easy for attackers to disguise their origins. For decades, hackers have used techniques such as jump hosts, VPNs, Tor and open relays to obscure their origin, and in many cases they work. I’m sure that many national intelligence agencies route their attacks through China, simply because everyone knows lots of attacks come from China.

On the other hand, there are techniques that can identify attackers with varying degrees of precision. It’s rarely just one thing, and you’ll often hear the term “constellation of evidence” to describe how a particular attacker is identified. It’s analogous to traditional detective work. Investigators collect clues and piece them together with known modes of operation. They look for elements that resemble other attacks and elements that are anomalies. The clues might involve ones and zeros, but the techniques go back to Sir Arthur Conan Doyle.

The University of Toronto-based organization Citizen Lab routinely attributes attacks against the computers of activists and dissidents to particular Third World governments. It took months to identify China as the source of the 2012 attacks against the New York Times. While it was uncontroversial to say that Russia was the source of a cyberattack against Estonia in 2007, no one knew if those attacks were authorized by the Russian government until the attackers explained themselves. And it was the Internet security company CrowdStrike that first attributed the attacks against the Democratic National Committee to Russian intelligence agencies in June, based on multiple pieces of evidence gathered from its forensic investigation.

Attribution is easier if you are monitoring broad swaths of the Internet. This gives the National Security Agency a singular advantage in the attribution game. The problem, of course, is that the NSA doesn’t want to publish what it knows.

Regardless of what the government knows and how it knows it, the decision of whether to make attribution evidence public is another matter. When Sony was attacked, many security experts, myself included, were skeptical of both the government’s attribution claims and the flimsy evidence associated with it. I only became convinced when the New York Times ran a story about the government’s attribution, which talked about both secret evidence inside the NSA and human intelligence assets inside North Korea. In contrast, when the Office of Personnel Management was breached in 2015, the US government decided not to accuse China publicly, either because it didn’t want to escalate the political situation or because it didn’t want to reveal any secret evidence.

The Obama administration has been more public about its evidence in the DNC case, but it has not been entirely public.

It’s one thing for the government to know who attacked it. It’s quite another for it to convince the public who attacked it. As attribution increasingly relies on secret evidence (as it did with North Korea’s attack on Sony in 2014 and almost certainly does regarding Russia and the previous election), the government is going to have to face the choice of making previously secret evidence public and burning sources and methods, or keeping it secret and facing perfectly reasonable skepticism.

If the government is going to take public action against a cyberattack, it needs to make its evidence public. But releasing secret evidence might get people killed, and it would make any future confidentiality assurances we make to human sources completely non-credible. This problem isn’t going away; secrecy helps the intelligence community, but it wounds our democracy.

The constellation of evidence attributing the attacks against the DNC, and subsequent release of information, is comprehensive. It’s possible that there was more than one attack. It’s possible that someone not associated with Russia leaked the information to WikiLeaks, although we have no idea where that someone else would have obtained the information. We know that the Russian actors who hacked the DNC (both the FSB, Russia’s principal security agency, and the GRU, Russia’s military intelligence unit) are also attacking other political networks around the world.

In the end, though, attribution comes down to whom you believe. When Citizen Lab writes a report outlining how a United Arab Emirates human rights defender was targeted with a cyberattack, we have no trouble believing that it was the UAE government. When Google identifies China as the source of attacks against Gmail users, we believe it just as easily.

Obama decided not to make the accusation public before the election so as not to be seen as influencing the election. Now, afterward, there are political implications in accepting that Russia hacked the DNC in an attempt to influence the US presidential election. But no amount of evidence can convince the unconvinceable.

The most important thing we can do right now is deter any country from trying this sort of thing in the future, and the political nature of the issue makes that harder. Right now, we’ve told the world that others can get away with manipulating our election process as long as they can keep their efforts secret until after one side wins. Obama has promised both secret retaliations and public ones. We need to hope they’re enough.

This essay previously appeared on CNN.com.

EDITED TO ADD: The ODNI released a declassified report on the Russian attacks. Here’s a New York Times article on the report.

And last week there were Senate hearings on this issue.

EDITED TO ADD: A Washington Post article talks about some of the intelligence behind the assessment.

EDITED TO ADD (1/10): The UK connection.

Some notes on IoCs

Post Syndicated from Robert Graham original http://blog.erratasec.com/2016/12/some-notes-on-iocs.html

Obama “sanctioned” Russia today for those DNC/election hacks, kicking out 35 diplomats (**), closing diplomatic compounds (**), seizing assets of named individuals/groups (***). They also published “IoCs” of those attacks, fingerprints/signatures that point back to the attackers, like virus patterns, file hashes, and IP addresses.

These IoCs are of low quality. They are published as a political tool, to prove they have evidence pointing to Russia. They have limited utility to defenders, or those publicly analyzing attacks.

Consider the Yara rule included in US-CERT’s “GRIZZLY STEPPE” announcement:

What is this? What does this mean? What do I do with this information?

It’s a YARA rule. YARA is a tool ostensibly for malware researchers, to quickly classify files. It’s not really an anti-virus product designed to prevent or detect an intrusion/infection, but to analyze an intrusion/infection afterward — such as attributing the attack. Signatures like this will identify a well-known file found on infected/hacked systems.

What this YARA rule detects is, as the name suggests, the “PAS TOOL WEB KIT”, a web shell tool that’s popular among Russia/Ukraine hackers. If you google “PAS TOOL PHP WEB KIT”, the second result points to the tool in question. You can download a copy here [*], or you can view it on GitHub here [*].
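For readers who have never run YARA, this is roughly how a researcher applies such a rule: compile it, then sweep it across files recovered from a suspect server. The rule below is a simplified, made-up illustration of the general shape, not the US-CERT signature, and the web-root path is hypothetical.

```python
# Compile a simplified, illustrative YARA rule and scan a directory with it.
# Requires yara-python (pip install yara-python). This rule is NOT the
# US-CERT GRIZZLY STEPPE rule, just a sketch of the general shape.
import os
import yara

RULE_SOURCE = r"""
rule Example_PHP_Webshell_Sketch
{
    strings:
        $php = "<?php"
        $eval = "eval(base64_decode("
    condition:
        $php at 0 and $eval
}
"""

rules = yara.compile(source=RULE_SOURCE)

def scan_tree(root):
    for dirpath, _, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            try:
                matches = rules.match(path)
            except yara.Error:
                continue          # unreadable or special file; skip it
            if matches:
                print(path, "->", [m.rule for m in matches])

scan_tree("/var/www")   # hypothetical web root to sweep
```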

Once a hacker gets comfortable with a tool, they tend to keep using it. That implies the YARA rule is useful at tracking the activity of that hacker, to see which other attacks they’ve been involved in, since it will find the same web shell on all the victims.

The problem is that this P.A.S. web shell is popular, used by hundreds if not thousands of hackers, mostly associated with Russia, but also throughout the rest of the world (judging by hacker forum posts). This makes using the YARA signature for attribution problematic: just because you found P.A.S. in two different places doesn’t mean it’s the same hacker.

A web shell, by the way, is one of the most common things hackers use once they’ve broken into a server. It allows further hacking and exfiltration traffic to appear as normal web requests. It typically consists of a script file (PHP, ASP, PERL, etc.) that forwards commands to the local system. There are hundreds of popular web shells in use.

We have little visibility into how the government used these IoCs. IP addresses and YARA rules like this are weak, insufficient for attribution by themselves. On the other hand, if they’ve got web server logs from multiple victims where commands from those IP addresses went to this specific web shell, then the attribution of all those attacks to the same actor would be strong.
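That stronger form of attribution is essentially log correlation. A hedged sketch of what it looks like, assuming a hypothetical access log in common log format, a made-up web shell path, and made-up IoC addresses:

```python
# Cross-reference published IoC IP addresses against a web server access log,
# looking specifically for requests that hit a suspected web shell path.
# The log path, shell path, and IPs here are all hypothetical.
import re

IOC_IPS = {"198.51.100.34", "203.0.113.77"}
SHELL_PATH = "/images/old/pas.php"
LOG_LINE = re.compile(r'^(\S+) \S+ \S+ \[([^\]]+)\] "(\S+) (\S+)')

with open("/var/log/nginx/access.log") as log:
    for line in log:
        m = LOG_LINE.match(line)
        if not m:
            continue
        ip, timestamp, method, path = m.groups()
        if ip in IOC_IPS and path.startswith(SHELL_PATH):
            print(f"{timestamp}  {ip}  {method} {path}")
```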

In other words, these rules can be a reflection of the fact the government has excellent information for attribution. Or, it could be a reflection that they’ve got only weak bits and pieces. It’s impossible for us outsiders to tell. IoCs/signatures are fetishized in the cybersecurity community: they love the small rule, but they ignore the complexity and context around the rules, often misunderstanding what’s going on. (I’ve written thousands of the things — I’m constantly annoyed by the ignorance among those not understanding what they mean).

I see on twitter people praising the government for releasing these IoCs. What I’m trying to show here is that I’m not nearly as enthusiastic about their quality.


Note#1: BTW, the YARA rule has to trigger on the PHP statements, not on the embedded BASE64-encoded stuff. That’s because it’s encrypted with a password, so it could be different for every hacker.

Note#2: Yes, the hackers who use this tool can evade detection by minor changes that avoid this YARA rule. But that’s not a concern — the point is to track the hacker using this tool across many victims, to attribute attacks. The point is not to act as an anti-virus/intrusion-detection system that triggers on “signatures”.

Note#3: Publishing the YARA rule burns it. The hackers it detects will presumably move to different tools, like PASv4 instead of PASv3. Presumably, the FBI/NSA/etc. have a variety of YARA rules for various web shells used by known active hackers, to attribute attacks to various groups. They aren’t publishing these because they want to avoid burning those rules.

Note#4: The PDF from the DHS has pretty diagrams about the attacks, but it doesn’t appear this web shell was used in any of them. It’s difficult to see where it fits in the overall picture.


(**) No, not really. Apparently, kicking out the diplomats was punishment for something else, not related to the DNC hacks.

(***) It’s not clear if these “sanctions” have any teeth.

Unknowingly Linking to Infringing Content is Still Infringement, Court Rules

Post Syndicated from Andy original https://torrentfreak.com/unknowingly-linking-to-infringing-content-is-still-infringement-court-rules-161210/

In 2011, Dutch blog GeenStijl.nl published an article linking to leaked Playboy photos that were stored on third-party hosting sites. Playboy publisher Sanoma said this amounted to infringement.

The European Court of Justice was asked to rule on whether the links posted by GeenStijl amounted to a ‘communication to the public’ under Article 3(1) of the EU Copyright Directive and therefore a copyright infringement.

In September the ECJ handed down its decision which drew a line in the sand between for-profit and not-for-profit linking.

Generally, a person who posts a casual link to a work freely available on another website isn’t expected to conduct research to find out whether the copyright holder has granted permission for that work to be there.

On the other hand, those who post a link within a commercial context are expected to carry out the “checks necessary” to ensure that the linked work has not been illegally published, the ECJ says.

The EU ruling has now been applied in a German case. It involved a photographer who discovered that one of his Creative Commons-licensed images had been posted on a website without the correct attribution. The photographer also found a third-party site that was linking to the infringing image.

Leipzig law firm Spirit Legal LLP says it represented the photographer in a case against the third-party site to discover how the ECJ’s earlier ruling would be applied under German law.

In a ruling handed down in November but just published by the law firm, the Hamburg Regional Court found that while the original publication of the image amounted to infringement, the operator of the third-party site that published the link to the image was also liable for infringement.

In part, liability was determined from the conclusion that the link communicated the infringing image to a “new audience,” outside that intended by the copyright holder.

On the commercial aspect previously cited by the ECJ, the Court found that no profit needed to be generated from the specific link, only that there was a general intent to benefit from the link on the site as a whole. Since that was the case, the website owner was responsible for carrying out the “checks necessary” to ensure that the image wasn’t infringing.

Finding the operator of the third-party website liable for infringement, the Court ordered him to cease providing a link to the photographer’s image or face a fine.

Dr. Jonas Kahl, a lawyer with Spirit Legal, says the ruling is important for anyone with online commercial interests.

“[T]he decision of the Hamburg Regional Court represents a massive tightening of their inspection obligations and their liability in itself,” Dr. Kahl explains.

“In order to exclude the possibility of a copyright infringement, in the future you should check, before each linking, whether the page operator has the necessary rights for the photos published there. If this is not the case, you should not link, if you do not want to take a liability risk.”

The full ruling is available here (PDF, German)

Source: TF, for the latest info on copyright, file-sharing, torrent sites and ANONYMOUS VPN services.

Beware: Attribution & Politics

Post Syndicated from Elizabeth Wharton original http://blog.erratasec.com/2016/09/beware-attribution-politics.html

tl;dr – Digital location data can be inherently wrong and it can be spoofed. Blindly assuming that it is accurate can make an ass out of you on twitter and when regulating drones.    

Guest contributor and friend of Errata Security Elizabeth Wharton (@LawyerLiz) is an attorney and host of the technology-focused weekly radio show “Buzz Off with Lawyer Liz” on America’s Web Radio (listen live each Wednesday, 2-3:00pm Eastern; find prior podcasts here or via iTunes – Lawyer Liz). This post is merely her musings and not legal advice.

As I filtered through various campaign and debate analysis on social media, a tweet caught my eye. The message itself was not the concern, and the underlying image has since been determined to be fake. Rather, I was stopped by the 140-character tweet’s absolute certainty that internet user location data is infallible. The author presented a data map as proof without question, caveat, or other investigation. Boom, mic drop – attribution!

According to the tweeting pundit, “Russian trollbots” are behind the #TrumpWon hashtag trending on Twitter.

The proof? The Twitter post claims that Trendsmap showed the initial hashtag tweets as originating from accounts located in Russia. Within the first hour, the tweet and accompanying map graphic were “liked” 1,400 times and retweeted 1,495 times. A gotcha moment, because a pew-pew map showed that the #TrumpWon hashtag originated from Twitter accounts located in Russia. Boom, mic drop – attribution!

Except, not so fast. First, Trendsmap has since clarified that the map and data in the tweet above are not theirs (the Washington Post details the faked data/map). Moreover, location data is tricky. According to the Trendsmap FAQ page, they use the location provided in a user’s profile and GeoIP provided by Google. Google’s GeoIP is crafted using a proprietary system and other databases such as MaxMind. IP mapping is not an exact art. Kashmir Hill, editor of Fusion’s Real Future, and David Maynor delved into the issues and inaccuracies of IP mapping earlier this year. Kashmir wrote extensively on their findings and how phantom IP addresses and MaxMind’s use of randomly selected default locations created digital hells for individuals all over the country – Internet Mapping Glitch Turned Random Farm into Digital Hell.
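For anyone tempted to treat GeoIP output as ground truth, the databases themselves advertise their uncertainty. Here is a hedged sketch using the geoip2 library against a locally downloaded GeoLite2 database; the database path and the IP address are placeholders.

```python
# Look up an IP in a local MaxMind GeoLite2 database and print the
# accuracy radius the database itself reports. Requires the geoip2 package
# and a downloaded GeoLite2-City.mmdb file; path and IP are placeholders.
import geoip2.database

with geoip2.database.Reader("/path/to/GeoLite2-City.mmdb") as reader:
    response = reader.city("203.0.113.5")   # placeholder; use the address you're investigating
    print("Country:", response.country.iso_code)
    print("City:   ", response.city.name)
    print("Coords: ", response.location.latitude, response.location.longitude)
    # MaxMind's own estimate of how far off the coordinates may be, in km:
    print("Accuracy radius (km):", response.location.accuracy_radius)
```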

Reliance on such mapping and location information as an absolute has tripped up law enforcement and is poised to trip up the drone industry. Certain lawmakers like to point to geofencing and other location applications as security and safety cure-all solutions. Sen. Schumer (D-N.Y.) previously included geofencing as a key element of his 2015 drone safety bill. Geofencing as a safety measure was mentioned during Tuesday’s U.S. House Small Business Committee hearing on Commercial Drone Operations. With geofencing, the drone is programmed to prohibit operations above a certain height or to keep out of certain locations. Attempt to fly in a prohibited area and the aircraft will automatically shut down. Geofencing relies on location data, including geospatial data collected from a variety of sources. As seen with GeoIP, that data can be wrong. Additionally, the data must be interpreted and analyzed by the aircraft’s software systems. Aircraft systems are not built with security first; in some cases, basic systems security measures have been completely overlooked. With mandatory geofencing, wrong data or spoofed (hacked) data can ground the aircraft.
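To see why bad location data matters so much here, consider how simple the core geofence check is; everything hinges on the reported coordinates being trustworthy. A toy sketch follows (arbitrary example coordinates, not any vendor's implementation):

```python
# Toy geofence: refuse to operate inside a bounding box around a hypothetical
# restricted area. The check itself is trivial, which is exactly why wrong or
# spoofed position data defeats it.
from dataclasses import dataclass

@dataclass
class BoundingBox:
    south: float
    west: float
    north: float
    east: float

    def contains(self, lat: float, lon: float) -> bool:
        return self.south <= lat <= self.north and self.west <= lon <= self.east

RESTRICTED = BoundingBox(south=38.85, west=-77.09, north=38.93, east=-76.98)  # example box

def may_operate(reported_lat: float, reported_lon: float) -> bool:
    # The firmware can only act on what the sensors report. If the position
    # fix is inaccurate or spoofed, this returns the wrong answer with full confidence.
    return not RESTRICTED.contains(reported_lat, reported_lon)

print(may_operate(38.90, -77.03))   # genuine position inside the box -> False
print(may_operate(40.00, -75.00))   # spoofed position outside the box -> True
```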

Location mapping is helpful as one data point among many. Beware of attribution and laws predicated solely on information that can be inaccurate by design. One errant political tweet blaming Russian twitter users based on bad data may lead to a “Pants on Fire” fact check. Even if initially correct, a bored 400lb hacker may have spoofed the data.

(post updated to add link to “Buzz Off with Lawyer Liz Show” website and pic per Rob’s request)

LLVM contemplates relicensing

Post Syndicated from corbet original http://lwn.net/Articles/701155/rss

The LLVM project is currently distributed under the BSD-like NCSA license, but the project is considering a change in the interest of better patent protection. “After extensive discussion involving many lawyers with different affiliations, we recommend taking the approach of using the Apache 2.0 license, with the binary attribution exception (discussed before), and add an additional exception to handle the situation of GPL2 compatibility if it ever arises.”

Someone Is Learning How to Take Down the Internet

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2016/09/someone_is_lear.html

Over the past year or two, someone has been probing the defenses of the companies that run critical pieces of the Internet. These probes take the form of precisely calibrated attacks designed to determine exactly how well these companies can defend themselves, and what would be required to take them down. We don’t know who is doing this, but it feels like a large nation state. China or Russia would be my first guesses.

First, a little background. If you want to take a network off the Internet, the easiest way to do it is with a distributed denial-of-service attack (DDoS). Like the name says, this is an attack designed to prevent legitimate users from getting to the site. There are subtleties, but basically it means blasting so much data at the site that it’s overwhelmed. These attacks are not new: hackers do this to sites they don’t like, and criminals have done it as a method of extortion. There is an entire industry, with an arsenal of technologies, devoted to DDoS defense. But largely it’s a matter of bandwidth. If the attacker has a bigger fire hose of data than the defender has, the attacker wins.

Recently, some of the major companies that provide the basic infrastructure that makes the Internet work have seen an increase in DDoS attacks against them. Moreover, they have seen a certain profile of attacks. These attacks are significantly larger than the ones they’re used to seeing. They last longer. They’re more sophisticated. And they look like probing. One week, the attack would start at a particular level of attack and slowly ramp up before stopping. The next week, it would start at that higher point and continue. And so on, along those lines, as if the attacker were looking for the exact point of failure.

The attacks are also configured in such a way as to see what the company’s total defenses are. There are many different ways to launch a DDoS attack. The more attack vectors you employ simultaneously, the more different defenses the defender has to counter with. These companies are seeing more attacks using three or four different vectors. This means that the companies have to use everything they’ve got to defend themselves. They can’t hold anything back. They’re forced to demonstrate their defense capabilities for the attacker.

I am unable to give details, because these companies spoke with me under condition of anonymity. But this all is consistent with what Verisign is reporting. Verisign is the registrar for many popular top-level Internet domains, like .com and .net. If it goes down, there’s a global blackout of all websites and e-mail addresses in the most common top-level domains. Every quarter, Verisign publishes a DDoS trends report. While its publication doesn’t have the level of detail I heard from the companies I spoke with, the trends are the same: “in Q2 2016, attacks continued to become more frequent, persistent, and complex.”

There’s more. One company told me about a variety of probing attacks in addition to the DDoS attacks: testing the ability to manipulate Internet addresses and routes, seeing how long it takes the defenders to respond, and so on. Someone is extensively testing the core defensive capabilities of the companies that provide critical Internet services.

Who would do this? It doesn’t seem like something an activist, criminal, or researcher would do. Profiling core infrastructure is common practice in espionage and intelligence gathering. It’s not normal for companies to do that. Furthermore, the size and scale of these probes — and especially their persistence — points to state actors. It feels like a nation’s military cybercommand trying to calibrate its weaponry in the case of cyberwar. It reminds me of the US’s Cold War program of flying high-altitude planes over the Soviet Union to force their air-defense systems to turn on, to map their capabilities.

What can we do about this? Nothing, really. We don’t know where the attacks come from. The data I see suggests China, an assessment shared by the people I spoke with. On the other hand, it’s possible to disguise the country of origin for these sorts of attacks. The NSA, which has more surveillance in the Internet backbone than everyone else combined, probably has a better idea, but unless the US decides to make an international incident over this, we won’t see any attribution.

But this is happening. And people should know.

This essay previously appeared on Lawfare.com.

EDITED TO ADD: Slashdot thread.

EDITED TO ADD (9/15): Podcast with me on the topic.

AWS Hot Startups – August 2016

Post Syndicated from Jeff Barr original https://aws.amazon.com/blogs/aws/aws-hot-startups-august-2016/

Back with her second guest post, Tina Barr talks about four more hot startups!


Jeff;


This month we are featuring four hot AWS-powered startups:

  • Craftsvilla – Offering a platform to purchase ethnic goods.
  • SendBird – Helping developers build 1-on-1 messaging and group chat quickly.
  • Teletext.io – A solution for content management, without the system.
  • Wavefront – A cloud-based analytics platform.

Craftsvilla
Craftsvilla was born in 2011 out of sheer love and appreciation for the crafts, arts, and culture of India. On a road trip through the Gujarat region of western India, Monica and Manoj Gupta were mesmerized by the beautiful creations crafted by local artisans. However, they were equally dismayed that these artisans were struggling to make ends meet. Monica and Manoj set out to create a platform where these highly skilled workers could connect directly with their consumers and reach a much broader audience. The demand for authentic ethnic products is huge across the globe, but consumers are often unable to find the right place to buy them. Craftsvilla helps to solve this issue.

The culture of India is so rich and diverse, that no one had attempted to capture it on a single platform. Using technological innovations, Craftsvilla combines apparel, accessories, health and beauty products, food items and home décor all in one easily accessible space. For instance, they not only offer a variety of clothing (Salwar suits, sarees, lehengas, and casual wear) but each of those categories are further broken down into subcategories. Consumers can find anything that fits their needs – they can filter products by fabric, style, occasion, and even by the type of work (embroidered, beads, crystal work, handcrafted, etc.). If you are interested in trying new cuisine, Craftsvilla can help. They offer hundreds of interesting products from masalas to traditional sweets to delicious tea blends. They even give you the option to filter through India’s many diverse regions to discover new foods.

Becoming a seller on Craftsvilla is simple. Shop owners just need to create a free account and they’re able to start selling their unique products and services. Craftsvilla’s ultimate vision is to become the ‘one-stop destination’ for all things ethnic. They look to be well on their way!

AWS itself is an engineer on Craftsvilla’s team. Customer experience is highly important to the people behind the company, and an integral aspect of their business is to attain scalability with efficiency. They automate their infrastructure at a large scale, which wouldn’t be possible at the current pace without AWS. Currently, they utilize over 20 AWS services – Amazon Elastic Compute Cloud (EC2), Elastic Load Balancing, Amazon Kinesis, AWS Lambda, Amazon Relational Database Service (RDS), Amazon Redshift, and Amazon Virtual Private Cloud to name a few. Their app QA process will move to AWS Device Farm, completely automated in the cloud, on 250+ services thanks to Lambda. Craftsvilla relies completely on AWS for all of their infrastructure needs, from web serving to analytics.

Check out Craftsvilla’s blog for more information!

SendBird
After successfully exiting their first startup, SendBird founders John S. Kim, Brandon Jeon, Harry Kim, and Forest Lee saw a great market opportunity for a consumer app developer. Today, over 2,000 global companies such as eBay, Nexon, Beat, Malang Studio, and SK Telecom are using SendBird to implement chat and messaging capabilities on their mobile apps and websites. A few ways companies are using SendBird:

  • 1-on-1 messaging for private messaging and conversational commerce.
  • Group chat for friends and interest groups.
  • Massive scale chat rooms for live-video streams and game communities.

As they watched messaging become a global phenomenon, the SendBird founders realized that it no longer made sense for app developers to build their entire tech stack from scratch. Research from the Localytics Data Team actually shows that in-app messaging can increase app launches by 27% and engagement by 3 times. By simply downloading the SendBird SDK (available for iOS, Android, Unity, .NET Xamarin, and JavaScript), app and web developers can implement real-time messaging features in just minutes. SendBird also provides a full chat history and allows users to send chat messages in addition to complete file and data transfers. Moreover, developers can integrate innovative features such as smart-throttling to control the speed of messages being displayed to the mobile devices during live broadcasting.

After graduating from accelerator Y Combinator W16 Batch, the company grew from 1,000,000 monthly chat users to 5,000,000 monthly chat users within months while handling millions of new messages daily across live-video streaming, games, ecommerce, and consumer apps. Customers found value in having a cross-platform, full-featured, and whole-stack approach to a real-time chat API and SDK which can be deployed in a short period of time.

SendBird chose AWS to build a robust and scalable infrastructure to handle a massive concurrent user base scattered across the globe. It uses EC2 with Elastic Load Balancing and Auto Scaling, Route 53, S3, ElastiCache, Amazon Aurora, CloudFront, CloudWatch, and SNS. The company expects to continue partnering with AWS to scale efficiently and reliably.

Check out SendBird and their blog to follow their journey!

Teletext.io
Marcel Panse and Sander Nagtegaal, co-founders of Teletext.io, had worked together at several startups and experienced the same problem at each one: within the scope of custom software development, content management is a big pain. Even the smallest correction, such as a typo, typically requires a developer, which can become very expensive over time. Unable to find a proper solution that was readily available, Marcel and Sander decided to create their own service to finally solve the issue. Leveraging only the API Gateway, Lambda functions, Amazon DynamoDB, S3, and CloudFront, they built a drop-in content management service (CMS). Their serverless approach for a CMS alternative quickly attracted other companies, and despite intending to use it only for their own needs, the pair decided to professionally market their idea and Teletext.io was born.

Today, Teletext.io is called a solution for content management, without the system. Content distributors are able to edit text and images through a WYSIWYG editor without the help of a programmer and directly from their own website or user interface. There are just three easy steps to get started:

  1. Include Teletext.io script.
  2. Add data attributes.
  3. Login and start typing.

That’s it! There is no system that needs to be installed or maintained by developers – Teletext.io works directly out of the box. In addition to recurring content updates, the data attribution technique can also be used for localization purposes. Making a website multilingual through a CMS can take days or weeks, but Teletext.io can accomplish this task in mere minutes. The time-saving factor is the main benefit for developers and editors alike.

Teletext.io uses AWS in a variety of ways. Since the company is responsible for the website content of others, they must have an extremely fast and reliable system that keeps website visitors from noticing external content being loaded. In addition, this critical infrastructure service should never go down. Both of these requirements call for a robust architecture with as few moving parts as possible. For these reasons, Teletext.io runs a serverless architecture that really makes it stand out. For loading draft content, storing edits and images, and publishing the result, the Amazon API Gateway gets called, triggering AWS Lambda functions. The Lambda functions store their data in Amazon DynamoDB.

Read more about Teletext.io’s unique serverless approach in their blog post.

Wavefront
Founded in 2013 and based in Palo Alto, Wavefront is a cloud-based analytics platform that stores time series data at millions of points per second. They are able to detect any divergence from “normal” in hybrid and cloud infrastructures before anomalies ever happen. This is a critical service that companies like Lyft, Okta, Yammer, and Box are using to keep running smoothly. From data scientists to product managers, from startups to Fortune 500 companies, Wavefront offers a powerful query engine and a language designed for everyone.

With a pay-as-you-go model, Wavefront gives customers the flexibility to start with the necessary application size and scale up/down as needed. They also include enterprise-class support as part of their pricing at no extra cost. Take a look at their product demos to learn more about how Wavefront is helping their customers.

The Wavefront Application is hosted entirely on AWS, and runs its single-tenant instances and multi-tenant instances in the virtual private cloud (VPC) clusters within AWS. The application has deep, native integrations with CloudWatch and CloudTrail, which benefits many of its larger customers also using AWS. Wavefront uses AWS to create a “software problem”, to operate, automate and monitor clouds using its own application. Most importantly, AWS allows Wavefront to focus on its core business – to build the best enterprise cloud monitoring system in the world.

To learn more about Wavefront, check out their blog post, How Does Wavefront Work!

Tina Barr