Tag Archives: dfa

Cloudflare Kicking ‘Daily Stormer’ is Bad News For Pirate Sites

Post Syndicated from Ernesto original https://torrentfreak.com/cloudflare-kicking-daily-stormer-is-bad-news-for-pirate-sites-170817/

“I woke up this morning in a bad mood and decided to kick them off the Internet.”

Those are the words of Cloudflare CEO Matthew Prince, who decided to terminate the account of controversial Neo-Nazi site Daily Stormer.

Bam. Gone. At least for a while.

Although many people are happy to see the site go offline, the decision is not without consequence. It goes directly against what many saw as the core values of the company.

For years on end, Cloudflare has been asked to remove terrorist propaganda, pirate sites, and other possibly unacceptable content. Each time, Cloudflare replied that it doesn’t take action without a court order. No exceptions.

“Even if it were able to, Cloudflare does not monitor, evaluate, judge or store content appearing on a third party website,” the company wrote just a few weeks ago, in its whitepaper on intermediary liability.

“We’re the plumbers of the internet. We make the pipes work but it’s not right for us to inspect what is or isn’t going through the pipes,” Cloudflare CEO Matthew Prince himself said not too long ago.

“If companies like ours or ISPs start censoring there would be an uproar. It would lead us down a path of internet censors and controls akin to a country like China,” he added.

The same arguments were repeated in different contexts, over and over.

This strong position was also one of the reasons why Cloudflare was dragged into various copyright infringement court cases. In these cases, the company repeatedly stressed that removing a site from Cloudflare’s service would not make infringing content disappear.

Pirate sites would just require a simple DNS reconfiguration to continue their operation, after all.
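
The point is easy to see in a toy sketch: a CDN customer's DNS points at the CDN's edge addresses, which proxy traffic to the hidden origin server, so leaving the CDN is a one-record change. All domain names and IP addresses below are invented for illustration:

```python
# Toy illustration of why dropping a CDN doesn't take a site offline.
# A CDN works by having the site's DNS resolve to the CDN's edge IPs,
# which then proxy traffic to the real (hidden) origin server.
zone = {"example-pirate.site": "104.16.0.1"}   # currently resolves to a CDN edge IP
origin_ip = "203.0.113.50"                     # the site's actual server

def drop_cdn(zone, name, origin_ip):
    """Simulate the single DNS change that keeps the site reachable."""
    zone[name] = origin_ip  # visitors now reach the origin directly
    return zone

drop_cdn(zone, "example-pirate.site", origin_ip)
print(zone["example-pirate.site"])  # 203.0.113.50
```

The only real-world difference is that the origin IP is now exposed, which is exactly what Cloudflare's court filings argued: the content itself remains online.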

“[T]here are no measures of any kind that CloudFlare could take to prevent this alleged infringement, because the termination of CloudFlare’s CDN services would have no impact on the existence and ability of these allegedly infringing websites to continue to operate,” it said.

That comment looks rather misplaced now that the CEO of the same company has decided to “kick” a website “off the Internet” after an emotional, but deliberate, decision.

Taking a page from Cloudflare’s (old) playbook we’re not going to make any judgments here. Just search Twitter or any social media site and you’ll see plenty of opinions, both for and against the company’s actions.

We do have a prediction though. During the months and years to come, Cloudflare is likely to be dragged into many more copyright lawsuits, and when they are, their counterparts are going to bring up Cloudflare’s voluntary decision to kick a website off the Internet.

Unless Cloudflare suddenly decides to pull all pirate sites from its service tomorrow, of course.

Source: TF, for the latest info on copyright, file-sharing, torrent sites and ANONYMOUS VPN services.

Usenet Pirate Pays €4,800 ‘Fine’ After Being Exposed by Provider

Post Syndicated from Ernesto original https://torrentfreak.com/usenet-pirate-pays-e4800-fine-after-being-exposed-by-provider-170811/

Dutch anti-piracy outfit BREIN has been very active over the past several years, targeting uploaders on various sharing sites and services.

They cast their net wide and have gone after torrent users, Facebook groups, YouTube pirates and Usenet uploaders as well.

To pinpoint the latter group, BREIN contacts Usenet providers asking them to reveal the identity of a suspected user. This is also what happened in a case involving a former customer of Eweka.

The person in question, known under the alias ‘Badfan69,’ was accused of uploading 9,538 infringing works to Usenet, mostly older titles. After Eweka handed over his home address, BREIN reached out to him and negotiated a settlement.

The 44-year-old man has now agreed to pay a settlement of €4,800. If he continues to upload infringing content he will face an additional penalty of €2,000 per day, to a maximum of €50,000.

The case is an important victory for BREIN, not just because of the money.

When the anti-piracy group reached out to Usenet provider Eweka, the company initially refused to hand over any personal details. The Usenet provider argued that it’s a neutral intermediary that would rather not perform the role of piracy police. Instead, it wanted the court to decide whether the request was legitimate.

This resulted in a legal dispute where, earlier this year, a local court sided with BREIN. The Court stressed that in these types of copyright infringement cases, the Usenet provider is required to hand over the requested details.

Under Dutch law, ISPs can be obliged to hand over the personal details of their customers if the infringing activity is plausible and the damaged party has a legitimate interest. Importantly, the legal case clarified that this generally doesn’t require an intervention from the court.

“Providers must decide on a motivated request for the handover of a user’s address, based on their own consideration. A refusal to provide the information must be motivated, otherwise, it will be illegal and the provider will be charged for the costs,” BREIN notes.

While these Usenet cases are relatively rare, BREIN and other parties in the Netherlands, such as Dutch Filmworks, are also planning to go after large groups of torrent users. With the Usenet decision in hand, BREIN may want to argue that regular ISPs must also expose pirating users, without an intervention of the court.

This is not going to happen easily though. Several ISPs, most prominently Ziggo, announced that they would not voluntarily cooperate and are likely to fight out these requests in court to get a solid ‘torrent’ precedent.

Source: TF, for the latest info on copyright, file-sharing, torrent sites and ANONYMOUS VPN services.

The Pirate Bay Isn’t Affected By Adverse Court Rulings – Everyone Else Is

Post Syndicated from Andy original https://torrentfreak.com/the-pirate-bay-isnt-affected-by-adverse-court-rulings-everyone-else-is-170618/

For more than a decade The Pirate Bay has been the world’s most controversial site. Delivering huge quantities of copyrighted content to the masses, the platform is revered and reviled across the copyright spectrum.

Its reputation is one of a defiant Internet swashbuckler, but due to changes in how the site has been run in more recent times, its current philosophy is more difficult to gauge. What has never been in doubt, however, is the site’s original intent to be as provocative as possible.

Through endless publicity stunts, some real, some just for the ‘lulz’, The Pirate Bay managed to attract a massive audience, all while incurring the wrath of every major copyright holder in the world.

Make no mistake, they all queued up to strike back, but every subsequent rightsholder action was met by a Pirate Bay middle finger, two fingers, or chin flick, depending on the mood of the day. This only served to further delight the masses, who happily spread the word while keeping their torrents flowing.

This vicious circle of being targeted by the entertainment industries, mocking them, and then reaping the traffic benefits, developed into the cheapest long-term marketing campaign the Internet had ever seen. But nothing is ever truly for free and there have been consequences.

After the site taunted Hollywood and the music industry with its refusals to capitulate, the endless legal actions in which it would ordinarily have been forced to participate largely took place without The Pirate Bay present. It doesn’t take a law degree to work out what happened in each and every one of those cases, whatever complex route they took through the legal system. No defense, no win.

For example, the web-blocking phenomenon across the UK, Europe, Asia and Australia was driven by the site’s absolute resilience and although there would clearly have been other scapegoats had The Pirate Bay disappeared, the site was the ideal bogeyman the copyright lobby required to move forward.

Filing blocking lawsuits and bringing hosts, advertisers, and ISPs on board for anti-piracy initiatives were also made easier with the ‘evil’ Pirate Bay still online. Immune from every anti-piracy technique under the sun, the existence of the platform in the face of all onslaughts only strengthened the cases of those arguing for even more drastic measures.

Over a decade, this has meant a significant tightening of the sharing and streaming climate. Without any big legislative changes but plenty of case law against The Pirate Bay, web-blocking is now a walk in the park, ad hoc domain seizures are a fairly regular occurrence, and few companies want to host sharing sites. Advertisers and brands are also hesitant over where they place their ads. It’s a very different world to the one of 10 years ago.

While it would be wrong to attribute every tightening of the noose to the actions of The Pirate Bay, there’s little doubt that the site and its chaotic image played a huge role in where copyright enforcement is today. The platform set out to provoke and succeeded in every way possible, gaining supporters in their millions. It could also be argued it kicked a hole in a hornets’ nest, releasing the hell inside.

But perhaps the site’s most amazing achievement is the way it has managed to stay online, despite all the turmoil.

This week yet another ruling, this time from the powerful European Court of Justice, found that by offering links in the manner it does, The Pirate Bay and other sites are liable for communicating copyright works to the public. Of course, this prompted the usual swathe of articles claiming that this could be the final nail in the site’s coffin.

In common with every ruling, legal defeat, and legislative restriction put in place due to the site’s activities, this week’s decision from the ECJ will have zero effect on the Pirate Bay’s availability. For right or wrong, the site was breaking the law long before this ruling and will continue to do so until it decides otherwise.

What we have instead is a further tightened legal landscape that will have a lasting effect on everything BUT the site, including weaker torrent sites, Internet users, and user-uploaded content sites such as YouTube.

With The Pirate Bay carrying on regardless, that is nothing short of remarkable.

Source: TF, for the latest info on copyright, file-sharing, torrent sites and ANONYMOUS VPN services.

AWS GovCloud (US) Heads East – New Region in the Works for 2018

Post Syndicated from Jeff Barr original https://aws.amazon.com/blogs/aws/aws-govcloud-us-heads-east-new-region-in-the-works-for-2018/

AWS GovCloud (US) gives AWS customers a place to host sensitive data and regulated workloads in the AWS Cloud. The first AWS GovCloud (US) Region was launched in 2011 and is located on the west coast of the US.

I’m happy to announce that we are working on a second Region that we expect to open in 2018. The upcoming AWS GovCloud (US-East) Region will provide customers with added redundancy, data durability, and resiliency, and will also provide additional options for disaster recovery.

Like the existing region, which we now call AWS GovCloud (US-West), the new region will be isolated and meet top US government compliance requirements including International Traffic in Arms Regulations (ITAR), NIST standards, Federal Risk and Authorization Management Program (FedRAMP) Moderate and High, Department of Defense Impact Levels 2-4, DFARS, IRS1075, and Criminal Justice Information Services (CJIS) requirements. Visit the GovCloud (US) page to learn more about the compliance regimes that we support.

Government agencies and the IT contractors that serve them were early adopters of AWS GovCloud (US), as were companies in regulated industries. These organizations are able to enjoy the flexibility and cost-effectiveness of public cloud while benefiting from the isolation and data protection offered by a region designed and built to meet their regulatory needs and to help them to meet their compliance requirements. Here’s a small sample from our customer base:

Federal (US) Government: Department of Veterans Affairs, General Services Administration 18F (Digital Services Delivery), NASA JPL, Defense Digital Service, United States Air Force, United States Department of Justice.

Regulated Industries: CSRA, Talen Energy, Cobham Electronics.

SaaS and Solution Providers: FIGmd, Blackboard, Splunk, GitHub, Motorola.

Federal, state, and local agencies that want to move their existing applications to the AWS Cloud can take advantage of the AWS Cloud Adoption Framework (CAF) offered by AWS Professional Services.

Usenet Provider is Obliged to Identify Pirates, Court Rules

Post Syndicated from Ernesto original https://torrentfreak.com/usenet-provider-has-to-identify-pirates-court-rules-170609/

Dutch anti-piracy group BREIN has targeted pirates of all shapes and sizes over the past several years.

It’s also one of the few groups that actively tracks down copyright infringers on Usenet, which still has millions of frequent users.

BREIN sets its sights on prolific uploaders and other large-scale copyright infringers. After identifying its targets, it asks providers to reveal the personal details connected to the account.

Last December, BREIN asked Usenet provider Eweka to hand over the personal details of one of its former customers but the provider refused to cooperate voluntarily.

In its defense, the Usenet provider argued that it’s a neutral intermediary that would rather not perform the role of piracy police. Instead, it preferred to rely on the court to make a decision.

The provider had already taken a similar position earlier last year, but the Court of Haarlem ruled that it must hand over the information.

In a new ruling this week, the Court issued a similar order.

The Court stressed that in these types of situations the Usenet provider is required to hand over the requested details, without intervention from the court. This is in line with case law.

Under Dutch law, ISPs can be obliged to hand over the personal details of their customers if the infringing activity is plausible and the aggrieved party has a legitimate interest.

The former Eweka customer was known under the alias ‘Badfan69’ and previously uploaded 9,538 allegedly infringing works to Usenet, Tweakers reports. He was tracked down through information from the headers of the binaries he posted.
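
To get a sense of what tracing someone "through the headers of the binaries he posted" means: every Usenet article carries identifying header fields that a provider can match to an account. A minimal sketch using Python's standard `email.parser`, with an entirely invented article header (the field values are hypothetical, not from the actual case):

```python
from email.parser import Parser

# Hypothetical Usenet article headers. Fields such as From, Message-ID,
# and the provider's injection/trace headers are what allow a provider
# to tie an upload back to a specific customer account.
raw = """\
From: badfan69 <badfan69@example.invalid>
Newsgroups: alt.binaries.example
Subject: example upload (1/50)
Message-ID: <abc123@news.provider.example>
X-Received-Bytes: 512000

"""

# headersonly=True parses just the header block, ignoring any body
headers = Parser().parsestr(raw, headersonly=True)
poster = headers["From"]
msg_id = headers["Message-ID"]
print(poster, msg_id)
```

The Message-ID in particular embeds the injecting server's domain, which is how an upload can be traced back to a specific provider in the first place.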

BREIN is pleased with the verdict, which once again strengthens its position in cases where third-party providers hold information on infringing customers.

“Most of the intermediaries adhere to the law and voluntarily provide the relevant data when BREIN makes a motivated request,” BREIN director Tim Kuik responds.

“They have to decide quickly because rightsholders have an interest in stopping uploaders and holding them liable as soon as possible. This sentence emphasizes this once again.”

The court ordered Eweka to pay legal fees of roughly 1,500 euros. In addition, the provider faces a penalty of 1,000 euros per day, to a maximum of 100,000 euros, if it fails to hand over the requested information in its possession.

Eweka hasn’t commented publicly on the verdict yet. But, with two rulings in favor of BREIN, it is unlikely that the provider will continue to fight similar cases in the future.

Source: TF, for the latest info on copyright, file-sharing, torrent sites and ANONYMOUS VPN services.

Cloudflare Doesn’t Want to Become the ‘Piracy Police’

Post Syndicated from Ernesto original https://torrentfreak.com/cloudflare-doesnt-want-to-be-the-piracy-police-170413/

As one of the leading CDN and DDoS protection services, Cloudflare is used by millions of websites across the globe.

This includes thousands of “pirate” sites, including the likes of The Pirate Bay and ExtraTorrent, which rely on the U.S.-based company to keep server loads down.

Copyright holders are not happy that CloudFlare services these sites. Last year, the RIAA and MPAA called the company out for aiding copyright infringers and helping pirate sites to obfuscate their actual location.

The rightsholders want Internet services such as Cloudflare to help them address online piracy more effectively. They are pushing for voluntary agreements that go above and beyond what the law requires of them.

In the UK, for example, search engines have agreed to do more to hinder piracy, and advertisers, payment processors, and ISPs have also taken more active roles in combatting infringement.

In a whitepaper, Cloudflare sees this trend as a worrying development. The company points out that the safe harbor provisions put in place by the DMCA and Europe’s eCommerce Directive have been effective in fostering innovation for many years. Voluntary “anti-piracy” agreements may change this.

“Slowly however, a wider net of intermediaries — from hosting providers to search engines, eCommerce platforms and other internet players — have been encouraged to help address new societal challenges, to help ‘clean up the web’, and effectively become internet police. Innovation continues but at the same time is threatened,” Cloudflare writes.

In addition, rightsholders are trying to update current legislation to increase liability for Internet services. In Europe, for example, a new copyright law proposal will make piracy filtering systems mandatory for some Internet services.

In its whitepaper, Cloudflare argues that such “back-door attempts to update legislation” should be closely monitored.

Instead of putting the blame on outsiders, copyright holders should change their views and embrace the Internet, the company argues. There are plenty of opportunities on the Internet, and the losses rightsholders claim are often overstated.

“Internet innovation has kept pace but many content creators and rights-holders have not adapted, and many content creators claim a loss in earning power as a result of online piracy.”

According to Cloudflare, content creators are often too quick to put the blame onto others, out of frustration.

“Many rights-holders are frustrated by their own inability to monetize the exchange of protected content and so the internet is seen not as a digital opportunity but rather a digital threat.”

Cloudflare argues that increased monitoring and censorship are not proper solutions. Third-party Internet services shouldn’t be pushed into the role of Internet police out of a fear of piracy.

Instead, the company cautions against far-reaching voluntary agreements that may come at the expense of the public.

“Voluntary measures have their limits and care must be taken not to have intermediaries be pushed into the area of excessive monitoring or indeed censorship. Intermediaries should not be forced to act as judge and jury, and indeed putting commercial entities in such a position is dangerous.”

Cloudflare stresses that it does not monitor, evaluate, judge or store content on sites operated by its clients, nor does it plan to do so. The company merely acts as a neutral ‘reverse proxy’ and operates within the boundaries of the law.
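
What a neutral ‘reverse proxy’ does can be sketched in a few lines: requests pass through to the origin with only bookkeeping headers added, and the payload itself is never inspected. A simplified illustration (the function and header handling below are illustrative, not Cloudflare's actual implementation):

```python
def forward_headers(client_ip, headers):
    """Sketch of a neutral reverse proxy's request handling:
    pass the request through unchanged apart from bookkeeping headers."""
    out = dict(headers)  # the content is neither inspected nor altered
    prior = out.get("X-Forwarded-For")
    # Append the connecting client's IP so the origin can see who asked
    out["X-Forwarded-For"] = f"{prior}, {client_ip}" if prior else client_ip
    return out

req = {"Host": "example-site.org", "User-Agent": "curl/8.0"}
print(forward_headers("198.51.100.7", req)["X-Forwarded-For"])  # 198.51.100.7
```

The design choice at issue in the whitepaper is exactly this: the proxy records routing metadata but takes no position on what flows through it.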

Of course, Cloudflare isn’t completely deaf to the concerns of copyright holders. Among other things, it has a trusted notifier program that allows rightsholders to obtain the true location of pirate sites that use the service. However, it explicitly says ‘no’ to proactive monitoring.

“Policy makers should not look for quick, short-term solutions to other complex problems of the moment involving the internet. A firehose approach which soaks anyone and everyone standing around an issue, is simply not the way forward,” the company writes.

The full whitepaper titled “Intermediary Liability: Safeguarding Digital Innovation and the Role of Internet Intermediaries” is available here (pdf).

Source: TF, for the latest info on copyright, file-sharing, torrent sites and ANONYMOUS VPN services.

Blizzard Wants $8.5 Million Copyright Damages From “Cheat” Maker

Post Syndicated from Ernesto original https://torrentfreak.com/blizzard-wants-8-5-milion-copyright-damages-from-cheat-maker-170314/

Over the years video game developer and publisher Blizzard Entertainment has released many popular game titles including Overwatch and World of Warcraft.

While most gamers stick to the rules, there’s also a small group that tries to game the system. By using cheats, they play with an advantage over regular users.

The German outfit Bossland is behind several popular cheats including “Honorbuddy,” “Demonbuddy,” and the currently unavailable “Watchover Tyrant.” Blizzard has been fighting the company on its home turf for several years already, and last year filed a complaint at a federal court in California as well.

In the complaint, Blizzard accused the cheat maker of various forms of copyright infringement, unfair competition, and violating the DMCA’s anti-circumvention provision. According to Blizzard, the bots and cheats also caused millions of dollars in lost sales, as they ruin the games for many legitimate players.

After Bossland had failed to have the case dismissed over a lack of jurisdiction, things went quiet earlier this year. Bossland stopped responding, and when the Court gave the German company a 24-hour ultimatum to reply, it remained silent.

The WoW Honorbot


In response, Blizzard has now submitted a motion for default judgment. According to the game developer, it is clear that Bossland violated the DMCA by selling its “circumvention” tools and it demands to be compensated in return.

Blizzard says it prefers a conservative estimate of the damages. Bossland previously testified that it sold 118,939 products to users in the United States since July of 2013, and Blizzard projects that at a minimum, 36% of these sales were cheats for their games.

This translates to 42,818 infringements, for a total of well over $8 million in statutory damages.

“In this case, Blizzard is only seeking the minimum statutory damages of $200 per infringement, for a total of $8,563,600.00. While Blizzard would surely be entitled to seek a larger amount, Blizzard seeks only minimum statutory damages.

“Blizzard does not seek such damages as a “punitive” measure against Bossland or to obtain an unjustified windfall,” the game developer adds (pdf).
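
The arithmetic in Blizzard's motion can be checked directly from the figures reported above:

```python
# Figures from Blizzard's motion for default judgment
us_sales = 118_939        # products Bossland testified it sold in the US since July 2013
blizzard_share = 0.36     # Blizzard's minimum estimate of sales that were cheats for its games
per_infringement = 200    # the minimum statutory damages Blizzard is seeking per infringement

infringements = int(us_sales * blizzard_share)   # rounded down to whole infringements
damages = infringements * per_infringement
print(infringements, damages)  # 42818 8563600
```

This reproduces both numbers in the filing: 42,818 infringements and $8,563,600 in statutory damages.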

According to Blizzard, it is a “calculated and bad-faith tactic” of the German cheat manufacturer to go for a default judgment. In doing so, the company tries to shield its alleged unlawful conduct from the reach of the United States.

Adding to that, the game developer believes that Bossland’s revenue from the cheats may have been even higher than the damages they are asking for.

“Notably, $200 approximates the cost of a one-year license for the Bossland Hacks. So, it is very likely that Bossland actually received far more than $8 million in connection with its sale of the Bossland Hacks.”

Since Bossland failed to defend itself, it is likely that Blizzard will get a substantial damages award. However, whether they will ever see a penny from the cheat maker is less certain.

Source: TF, for the latest info on copyright, file-sharing, torrent sites and ANONYMOUS VPN services.

Court: Hosting A Pirate Site Doesn’t Equal Copyright Infringement

Post Syndicated from Ernesto original https://torrentfreak.com/court-hosting-a-pirate-site-doesnt-equal-copyright-infringement-170221/

Last year, adult entertainment publisher ALS Scan took things up a notch by dragging several third-party intermediaries to court.

The company targeted CDN provider CloudFlare, advertising network JuicyAds, and several hosting providers, including Chicago-based Steadfast.

Steadfast was not happy with the allegations and has recently asked the court to dismiss the case. Among other things, the company argued that it’s protected by the DMCA’s safe harbor provisions.

“Steadfast does not operate or manage the Imagebam website. Steadfast does not in any way communicate with or interact with Imagebam’s individual users. Steadfast only provides computer storage,” the company wrote in its motion to dismiss.

In a tentative ruling issued this week, the California District Court agrees that the allegations in the second amended complaint (SAC) are not sufficient to hold the hosting company liable.

Merely hosting a pirate website is not enough to argue that the host contributes to the alleged copyright infringement on the image sharing site, Judge George Wu argues (pdf).

“In short, the Court is unaware of any authority holding that merely alleging that a defendant provides some form of ‘hosting’ service to an infringing website is sufficient to establish contributory copyright infringement.

“The Court would therefore find that the SAC fails to allege facts establishing that Steadfast materially contributed to the infringement,” Wu adds.

Among other things, the Court notes that ALS Scan fails to allege that Steadfast provides its hosting services with the goal to promote copyright infringement, or that it directly encouraged Imagebam to show pirated content on its website.

In addition, the vicarious liability allegation is insufficient too. This requires the copyright holder to show that the host has control over the infringing actions and that it financially benefits from them, which is not the case here.

“Here, the SAC contains no allegations that Steadfast has a direct financial interest in the infringing activity or has the right and ability to stop the infringing conduct,” Judge Wu writes.

As a result of the lacking evidence and allegations to support a secondary liability claim, the Court tentatively granted Steadfast’s motion to dismiss.

The ruling does keep the door open for ALS Scan to file an improved complaint, but for now, the victory goes to the hosting provider.

Source: TF, for the latest info on copyright, file-sharing, torrent sites and ANONYMOUS VPN services.

Judge Splits $750 Piracy Penalty Between BitTorrent Peers

Post Syndicated from Ernesto original https://torrentfreak.com/judge-splits-750-piracy-penalty-between-bittorrent-peers-170217/

Many Hollywood insiders see online piracy as a major threat, but only very few are willing to target alleged file-sharers with lawsuits.

LHF Productions, one of the companies behind the blockbuster “London Has Fallen,” has no problem crossing this line. Since the first pirated copies of the film appeared online last year, the company has been suing alleged downloaders in multiple courts.

As is usual in these cases, defendants get the option to sign a quick settlement to resolve the matter or defend their case in court. Those who ignore the lawsuits completely face a default judgment, which can turn out to be quite expensive depending on the judge.

This week, Judge Ricardo Martinez ruled on a series of LHF cases at the Seattle District Court. The movie company requested default judgments against 28 defendants in five cases, demanding $2,500 from each defendant.

When the accused downloaders don’t defend themselves, judges nearly always rule in the plaintiff’s favor, which is also true for these cases. However, Judge Martinez decided not to award the requested penalties in full.

The filmmaker had argued that $2,500, and even more in attorney’s fees and costs, is a rather modest request. However, in his order this week the Judge sees things differently.

“The Court also acknowledges that the amount at stake is not, as LHF contends, modest – LHF seeks enhanced statutory damages in the amount of $2,500 along with $2,605.50 in attorneys’ fees, and amounts ranging between $90 and $150 in costs, for each named Defendant in this matter,” he writes (pdf).

Instead, the Judge places the damages amount at the statutory minimum, which is $750.

Even more interesting, and the first time we’ve seen this happening, is that the penalty will be split among the swarm members in each case. The filmmakers alleged that the defendants were part of the same swarm, so they are all liable for the same infringement, Judge Martinez argues.

“Because the named Defendants in this action were alleged to have conspired with one another to infringe the same digital copy of LHF’s motion picture, the Court will award the sum of $750 for Defendants’ infringement of the same digital copy of London Has Fallen.”

“Each of the Defendants is jointly and severally liable for this amount,” Judge Martinez adds in his order.

This means that in one of the cases, where there are eight defaulted defendants, each has to pay just over $93 in damages.

As for the lowered damages amount itself, the Judge clarifies that these types of cases are not intended to result in large profits, especially not when the rightsholders have made little effort to prove actual damage or to track down the original sharer.

“The Court is not persuaded. Statutory damages are not intended to serve as a windfall to plaintiffs, and enhanced statutory damages are not warranted where plaintiffs do not even try to demonstrate actual damages.”

In addition to limiting the penalty, the Judge also reduced the requested attorney’s fees. Since the case was mostly based on identical complaints and motions, the court had trouble believing that the law firm spent hundreds of hours in preparation.

Instead, the court granted only $550 in attorney’s fees per defendant. This means that the default defendants will have to pay a few hundred dollars each, instead of the $5,000 plus the filmmakers wanted.
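
The arithmetic behind the order, using the figures reported above for the case with eight defaulted defendants:

```python
# Figures from Judge Martinez's order
statutory_minimum = 750    # single award for the whole swarm, jointly and severally
defendants = 8             # defaulted defendants in one of the five cases
fees_per_defendant = 550   # attorney's fees granted per defendant

damages_each = statutory_minimum / defendants   # the "just over $93" in damages
total_each = damages_each + fees_per_defendant  # a few hundred dollars per defendant
print(damages_each, total_each)  # 93.75 643.75
```

Roughly $644 per defendant, against the $5,000-plus in damages, fees, and costs the filmmakers had requested from each.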

According to the Fight Copyright Trolls blog, which first published details of the unusual order, splitting the awards between the defendants in the same swarm could turn out to be a “fatal blow” to these types of lawsuits.

Source: TF, for the latest info on copyright, file-sharing, torrent sites and ANONYMOUS VPN services.

Hosting Provider Steadfast Denies Liability for ‘Pirate’ Site

Post Syndicated from Ernesto original https://torrentfreak.com/hosting-provider-steadfast-denies-liability-for-pirate-site-170205/

Copyright holders are increasingly urging third-party Internet services to cut their ties with pirate sites.

Hosting providers, search engines, ISPs, domain name registrars, and advertisers should all do more to counter online piracy, the argument goes.

Last year, adult entertainment publisher ALS Scan took things up a notch by dragging several third-party intermediaries to court. The company targeted CDN provider CloudFlare, advertising network JuicyAds, and several hosting providers, including Chicago-based Steadfast.

Steadfast is not happy with the allegations and has asked the court to dismiss the case. Among other things, the company argues that it’s protected by the DMCA’s safe harbor provisions.

“Steadfast does not operate or manage the Imagebam website. Steadfast does not in any way communicate with or interact with Imagebam’s individual users. Steadfast only provides computer storage,” the company informed the court in its motion to dismiss.

ALS Scan clearly disagrees with this reasoning. According to the adult company, Steadfast should have stopped the infringements on the website of its client.

In addition, the company says that the hosting provider can’t hide behind “safe harbor” protection as it failed to implement a repeat infringer policy, branding ImageBam a frequent offender.

“Steadfast could remove the infringements on imagebam.com, or the site itself, from the Internet. Steadfast financially benefited from the draw of infringement on imagebam.com,” ALS Scan wrote in its opposition brief (pdf) last week.

“Steadfast’s safe harbor defenses are intensely factual, not susceptible of resolution on demurrer. Steadfast failed to reasonably implement a policy of terminating account holders who are repeat infringers, and thus cannot claim DMCA safe harbors,” they add.

Earlier this week Steadfast responded to these and other claims by the adult publisher, arguing that the company is misrepresenting case-law.

The hosting provider maintains that the DMCA shields it from liability. The repeat infringer argument doesn’t apply here because, among other things, the company doesn’t have the ability to control the actions of ImageBam users.

“In its Opposition, ALS states that in order to avoid liability for contributory infringement, a service provider must terminate services to repeat infringers. This is simply not the law. The service provider must have more power to influence the activity,” Steadfast argues in its reply (pdf).

It is now up to the California District Court to decide which side is right. In addition to Steadfast, several other defendants including CloudFlare are still trying to turn the case in their favor as well.

While ALS Scan is not an internationally known rightsholder, the case may prove to be vital for many Internet-based services in the United States. As we’ve seen with the case between Cox Communications and BMG, an entire industry is put at risk when a service provider loses its safe harbor protection.

Source: TF, for the latest info on copyright, file-sharing, torrent sites and ANONYMOUS VPN services.

Hello World – a new magazine for educators

Post Syndicated from Philip Colligan original https://www.raspberrypi.org/blog/hello-world-new-magazine-for-educators/

Today, the Raspberry Pi Foundation is launching a new, free resource for educators.

Hello World is a magazine about computing and digital making written by educators, for educators. With three issues each year, it contains 100 pages filled with news, features, teaching resources, reviews, research and much more.

It is designed to be cross-curricular and useful to all kinds of educators, from classroom teachers to librarians. While it includes lots of great examples of how educators are using Raspberry Pi computers in education, it is device- and platform-neutral.

Community building

As with everything we do at the Raspberry Pi Foundation, Hello World is about community building. Our goal is to provide a resource that will help educators connect, share great practice, and learn from each other.

Hello World is a collaboration between the Raspberry Pi Foundation and Computing at School, the grass-roots organisation of computing teachers that’s part of the British Computer Society. The magazine builds on the fantastic legacy of Switched On, which it replaces as the official magazine for the Computing at School community.

We’re thrilled that many of the contributors to Switched On have agreed to continue writing for Hello World. They’re joined by educators and researchers from across the globe, as well as the team behind the amazing MagPi, the official Raspberry Pi magazine, who are producing Hello World.

print("Hello, World!")

Hello World is available free, forever, for everyone online as a downloadable pdf.  The content is written to be internationally relevant, and includes features on the most interesting developments and best practices from around the world.

The very first issue of Hello World, the magazine about computing and digital making for educators

Thanks to the very generous support of our sponsors BT, we are also offering the magazine in a beautiful print version, delivered for free to the homes of serving educators in the UK.

Papert’s legacy 

This first issue is dedicated to Seymour Papert, in many ways the godfather of computing education. Papert was the creator of the Logo programming language and the author of some of the most important research on the role of computers in education. It will come as no surprise that his legacy has a big influence on our work at the Raspberry Pi Foundation, not least because one of our co-founders, Jack Lang, did a summer internship with Papert.

Seymour Papert

Seymour Papert with one of his computer games at the MIT Media Lab
Credit: Steve Liss/The Life Images Collection/Getty Images

Inside you’ll find articles exploring Papert’s influence on how we think about learning, on the rise of the maker movement, and on the software that is used to teach computing today from Scratch to Greenfoot.

Get involved

We will publish three issues of Hello World a year, timed to coincide with the start of the school terms here in the UK. We’d love to hear your feedback on this first issue, and please let us know what you’d like to see covered in future issues too.

The magazine is by educators, for educators. So if you have experience, insights or practical examples that you can share, get in touch: [email protected].

The post Hello World – a new magazine for educators appeared first on Raspberry Pi.

Harry Potter and the Real-life Weasley Clock

Post Syndicated from Alex Bate original https://www.raspberrypi.org/blog/harry-potter-real-life-weasley-clock/

Pat Peters (such a wonderful Marvel-sounding name) recently shared his take on the Weasley Clock, a device that hangs on the wall of The Burrow, the rickety home inhabited by the Weasley family in the Harry Potter series.

Mrs. Weasley glanced at the grandfather clock in the corner. Harry liked this clock. It was completely useless if you wanted to know the time, but otherwise very informative. It had nine golden hands, and each of them was engraved with one of the Weasley family’s names. There were no numerals around the face, but descriptions of where each family member might be. “Home,” “school,” and “work” were there, but there was also “traveling,” “lost,” “hospital,” “prison,” and, in the position where the number twelve would be on a normal clock, “mortal peril.”

The clock in the movie has misplaced “mortal peril”, but aside from that it looks a lot like what we’d imagined from the books.

There’s a reason why more and more Harry Potter-themed builds are appearing online. The small size of devices such as the Raspberry Pi and Arduino allows a digital ‘brain’ to live within an ordinary object, giving you a level of control over it that you could easily confuse with magic…if you allow yourself to believe in such things.

So with last week’s Real-life Daily Prophet doing so well, it’s only right to share another Harry Potter-inspired project.

Harry Potter Weasley Clock

The clock serves not to tell the time but, rather, to indicate the location of Molly, Arthur and the horde of Weasley children. And using the OwnTracks GPS app for smartphones, Pat’s clock does exactly the same thing.

Pat Peters Weasley Clock Raspberry Pi

Pat has posted the entire build on instructables, allowing every budding witch and wizard (and possibly a curious Muggle or two) the chance to build their own Weasley Clock.

This location clock works through a Raspberry Pi that subscribes to an MQTT broker that our phones publish events to. Our phones (running the OwnTracks GPS app) send a message to the broker anytime we cross into or out of one of the waypoints that we have set up in OwnTracks, which then triggers the Raspberry Pi to run a servo that moves the clock hand to show our location.
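The event-to-clock-hand flow Pat describes can be sketched in a few lines of Python. This is a hypothetical sketch only: the waypoint names, servo angles, and message handling are assumptions rather than Pat’s actual code, and the MQTT subscription itself (for example via the paho-mqtt client) is reduced to a comment.

```python
import json

# Hypothetical mapping from OwnTracks waypoint names to servo angles
# (positions on the clock face); the real names depend on your setup.
CLOCK_POSITIONS = {
    "home": 0,
    "work": 45,
    "school": 90,
    "traveling": 135,
    "lost": 180,
}

def angle_for_event(payload: str) -> int:
    """Given an OwnTracks 'transition' message (JSON), return the servo
    angle for the waypoint entered, or the 'traveling' position on leave."""
    event = json.loads(payload)
    if event.get("_type") != "transition":
        raise ValueError("not a transition event")
    if event.get("event") == "enter":
        return CLOCK_POSITIONS.get(event.get("desc", ""), CLOCK_POSITIONS["lost"])
    return CLOCK_POSITIONS["traveling"]

# In the actual clock, a paho-mqtt client would subscribe to the
# OwnTracks event topic and drive the servo with the returned angle.
print(angle_for_event('{"_type": "transition", "event": "enter", "desc": "home"}'))  # prints 0
```

Leaving any known waypoint moves the hand to “traveling”, while entering an unrecognised one falls through to “lost”, which mirrors the spirit of the original clock.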

There are no words for how much we love this. Here at Pi Towers we definitely have a soft spot for Harry Potter-themed builds, so make sure to share your own with us in the comments below, or across our social media channels on Facebook, Twitter, Instagram, YouTube and G+.

The post Harry Potter and the Real-life Weasley Clock appeared first on Raspberry Pi.

ISP Says it Won’t Send BREIN’s Anti-Piracy Warnings

Post Syndicated from Andy original https://torrentfreak.com/isp-says-it-wont-send-breins-anti-piracy-warnings-170118/

As one of Europe’s most prominent anti-piracy groups, BREIN is at the forefront of copyright enforcement in the Netherlands. In early January the outfit revealed some of its achievements over the past year, including enforcement actions against hundreds of sites and prolific uploaders of pirate content.

While tackling those closer to the top of the tree, BREIN has had a tendency to leave regular ‘pirate’ users alone. However, in recent times it has been developing plans to target Internet subscribers with ‘educational’ warning notices.

This past weekend, BREIN chief Tim Kuik said that his group hopes to bring about behavioral change among downloaders by contacting them via their ISPs.

“The ISPs can then send the account holder a warning which informs them that their account has been used to infringe copyright. The message is that they are bringing you up to date with illegal activities,” Kuik said.

Last year, the Dutch Data Protection Authority (Autoriteit Persoonsgegevens) gave BREIN permission to collect the IP-addresses of pirating BitTorrent users, allowing the group to target uploaders on a broader scale. But the group still needs help from service providers, since it needs to tie those addresses to individual accounts.

“The Data Protection Authority recommended we make arrangements with the ISPs on the processing of the personal data. Because we do not have the identity of the user,” Kuik said on Sunday.

However, unlike in the US and UK where similar programs are already underway, Dutch ISPs are giving the plan a less than warm welcome. In comments yesterday, leading cable provider Ziggo confirmed it will not participate in BREIN’s program.

“As an ISP we are a neutral access provider. This does not include the role of active enforcement of rights or interests of third parties, including BREIN,” said spokesman Erik van Doeselaar, as quoted by Tweakers.

Other providers aren’t excited by BREIN’s plans either. KPN, based in The Hague, said that there are many unknowns when it comes to privacy.

“As an ISP we can not pass judgment on the legality and proportionality of the plan,” said spokesman Stijn Wesselink.

A third ISP, XS4All, said the anti-piracy outfit’s plans haven’t yet been made clear.

“I won’t slam the door before I’ve seen [BREIN’s] plans, but it seems highly unlikely that ISPs will act as enforcers,” said spokesman Niels Huijbregts.

BREIN, on the other hand, believes that ISPs should cooperate, since customers who download and share copyrighted content without permission breach their providers’ Terms of Service.

The anti-piracy outfit hopes to introduce a scheme similar to the one now underway in the UK, which has received cooperation from four major ISPs.

BREIN says it wishes to mirror the UK effort by having ISPs send educational notices to encourage users towards legal services. However, the anti-piracy outfit is not on the best of terms with local providers and hasn’t been for many years.

Both Ziggo and XS4All are currently embroiled in a prolonged legal battle with BREIN, which wants the providers to block subscriber access to The Pirate Bay.

Thus far the ISPs have refused, steadfastly sticking to their position that, as a service provider, the copyright wars are not their battle. It now seems likely that the same stance will carry over to the proposed warning notice scheme.

Source: TF, for the latest info on copyright, file-sharing, torrent sites and ANONYMOUS VPN services.

Battle of The Bots! ‘Dubious’ Pirate Sites Trigger ‘Bogus’ Takedowns

Post Syndicated from Ernesto original https://torrentfreak.com/battle-of-the-bots-dubious-pirate-sites-trigger-bogus-takedowns-170107/

Pirate sites come in all shapes and sizes, ranging from torrent indexes, through streaming portals, to MP3 download sites.

Since it’s not always easy to attract visitors, some of these sites employ dubious tricks to draw an audience, such as machine-generated pages filled with complete nonsense.

Let’s take this “ShareMP3.link” page for example. At first sight, it appears to be a regular MP3 download portal, offering music from popular artists. However, it does more than that.

In fact, the site offers a result for every search term, generating pages on the fly. Whether it’s for “TorrentFreak,” “sdfasgf56u” or “Pizzagate,” there’s always an MP3 available.

This results in interesting pages such as the following, offering the latest TorrentFreak music.


Or what about this page, with some of the latest Twittergate and Pizzagate tunes ready for download?


These and many other similar sites appear to grab content from external sources such as YouTube, regardless of whether it’s actual music. In addition to confusing the public, they are also triggering bots at some takedown companies.

The Twittergate / Pizzagate page mentioned earlier was actually targeted in a recent notice sent by AudioLock for a completely unrelated artist named “靳松”.

While the odd name is likely an encoding issue, the link and many others listed in the complaint have little to do with it. Unfortunately, this isn’t an isolated incident either as we can easily spot several other mistakes for “pizzagate” or “trump putin,” for example.

So why are these pages flagged as ‘pirate’ then? Well, these types of sites generally list random links to other keyword searches on their download pages, which may at one point have linked to an infringing term. Still, the identified page itself is something entirely different.

In addition, takedowns may also be triggered because a keyword is similar to one used by the artist in question, such as the band “New Order” in the takedown notice below.


Making matters worse, the whole situation might be self-reinforcing at times. That is, when the DMCA bots search for something that is then automatically generated, this likely creates more pages with the same results and more takedowns.

Luckily, the public is largely kept away from this battle. They are just machines fighting each other in a perpetual and utterly useless war.

Copyright holders, however, might want to reconsider whether this is how they want to target piracy on the Internet. After all, they are the ones paying the bills for these dubious practices.

Source: TF, for the latest info on copyright, file-sharing, torrent sites and ANONYMOUS VPN services.

Australian Govt Advisory Body Digs in Over Fair Use & Geo-Unblocking

Post Syndicated from Andy original https://torrentfreak.com/australian-govt-agency-digs-in-over-fair-use-geo-unblocking-161222/

Earlier this year, Australia’s Productivity Commission released a draft report covering various aspects of the country’s intellectual property system.

Among the Commission’s recommendations was advice to the government that it should allow citizens to access geo-blocked content in order for them to obtain the best deals on international content.

“Geoblocking results in Australians paying higher prices (often for a lesser or later service) than consumers overseas,” the draft read.

The report also urged the introduction of fair use provisions into local copyright law instead of the current “fair dealing” arrangement.

“Australia’s copyright system has expanded over time, often with no transparent, evidence-based policy analysis demonstrating the need for, or quantum of, new rights. A new system of user rights, including the introduction of a broad, principles-based fair use exception, is needed to help address this imbalance,” the report said.

During the summer, copyright holders fought back, claiming that fair use would have a negative effect on creation. Music group IFPI, for example, warned that fair use would threaten innovation and disadvantage creators while creating legal uncertainty.

“Licensing, not exceptions to copyright, drives innovation. Innovation is best achieved through licensing agreements between content owners and users, including technological innovators,” IFPI said. In December, similar arguments were presented in a new campaign championed by local celebrities.

But in a final inquiry report sent to the government in September and published this week, the Commission’s position remains unmoved.

“Rights holders have argued against the adoption of fair use in Australia. They claim that by design, fair use is imprecise and would create significant legal uncertainty for both rightsholders and users. Initial uncertainty is not a compelling reason to eschew a fair use exception, especially if it serves to preserve poor policy outcomes,” the Commission writes.

“Australia’s current exceptions are themselves subject to legal uncertainty, and evidence suggests that fair use cases, as shown in the US, are more predictable than rights holders argue. Moreover, courts routinely apply principles-based law to new cases, such as in consumer and employment law, updating case law when the circumstances warrant doing so.”

The Commission says that over time, both rightsholders and users will become “increasingly comfortable” when making judgments over what is and is not fair use. In the event that Courts are called on to decide, four factors should be considered.

• the purpose and character of the use
• the nature of the copyright material
• the amount and substantiality of the part used
• the effect of the use upon the potential market for, or value of, the copyright material.

“Rights holders also argued fair use would significantly reduce their incentives to create and invest in new works, holding up Canada as an example. Some have proclaimed that fair use will equate with ‘free use’, particularly by the education sector. But these concerns are ill-founded and premised on flawed (and self-interested) assumptions,” the Commission writes.

“Indeed, rather than ignore the interests of rights holders, under fair use the effect on the rights holder is one of the factors to be considered. Where a use of copyright material harms a rights holder, the use is less likely to be considered fair. In the US, where fair use is long established, creative industries thrive.”

Fair Use recommendation from the Commission

And when it comes to allowing Australians unfettered access to legitimate content, the Commission remains equally unmoved. It notes that prompt access to reasonably priced content is vital in the fight against piracy and the government should change the law to make it clear to consumers that they have the right to obtain content from overseas, should that mean getting a better deal.

“Research consistently demonstrates that timely and cost effective access to copyright-protected works is the best way for industry to reduce online copyright infringement. Therefore, in addition to implementing a new exception for fair use, the Commission is recommending making it easier for users to access legitimate copyright-protected content,” the inquiry report reads.

“Studies show Australian consumers systematically pay higher prices for professional software, music, games and e-books than consumers in comparable overseas markets. While some digital savvy consumers are able to avoid these costs (such as through the use of proxy servers and Virtual Private Networks), most pay inflated prices for lower standard services and some will ultimately infringe.

“The Australian Government should make clear that it is not an infringement of Australia’s copyright system for consumers to circumvent geoblocking technology and should avoid international obligations that would preclude such practices,” it adds.

Anti-Geoblocking recommendation from the Commission

The Intellectual Property Arrangements final inquiry report is available here.

Note: An earlier version of this article referred to the Productivity Commission as an “agency”. That has been corrected to “advisory body”.

Source: TF, for the latest info on copyright, file-sharing, torrent sites and ANONYMOUS VPN services.

Court Protects BitTorrent Pirate From Overaggressive Filmmakers

Post Syndicated from Ernesto original https://torrentfreak.com/court-protects-bittorrent-pirate-from-overaggressive-filmmakers-161214/

In recent years, file-sharers around the world have been pressured to pay significant settlement fees, or face legal repercussions.

These so-called “copyright trolling” efforts have been a common occurrence in the United States for more than half a decade.

The makers of the Adam Sandler movie The Cobbler are one of the parties actively involved in these practices. In one of their Oregon cases they recently settled with local resident Santos Cerritos, after a lengthy legal back-and-forth.

Cerritos eventually agreed to pay the statutory minimum damages of $750 and reasonable attorney fees. A substantial amount, but better than the $150,000 maximum damages rightsholders often want.

However, when the filmmakers announced their fees demand things took a turn for the worse. They wanted Cerritos to pay for their entire legal bill of $17,348, which is many times more than the damage award itself.

The accused pirate protested this request in court and in a recent ruling Oregon Magistrate Judge Stacie Beckerman agreed that the “fee-shifting” request is unreasonable.

The Judge notes that the damages amount in the settlement is already substantial and that it acts as a proper deterrent. That is enough. The defendant should not be required to fund the filmmakers’ copyright enforcement actions.

“In light of the substantial financial penalty already imposed, an attorney fee award is not necessary to deter further infringement, nor is a fee award necessary to encourage Plaintiff to continue to protect its rights, where Plaintiff has been vigilant to date and has the resources to police their copyright,” Judge Beckerman writes.

In a critical note, the Judge adds that these BitTorrent cases are creating results that are not in line with the goals of the Copyright Act. Instead, the threat of unreasonably high damages creates an unequal and unfair bargaining position.

“For this Court to award Plaintiff its attorney’s fees in this case would only contribute to the continued overaggressive assertion and negotiation of these Copyright Act claims,” she notes.

As a result of such overaggressive actions, several defendants have chosen not to defend themselves at all, opting for a default judgment instead. This isn’t purely in the interest of justice, but rather to exploit copyright law for commercial gain, the Judge suggests.

“A startling number of subscribers are failing to show up for Rule 45 depositions, and alleged infringers are more often than not choosing default judgments over litigation,” Judge Beckerman writes.

“By allowing this scenario to occur for several years now, the federal courts are not assisting in the administration of justice, but are instead enabling plaintiffs’ counsel and their LLC clients to receive a financial windfall by exploiting copyright law.”

Another argument against the high demand for attorney fees is the fact that the filmmakers unnecessarily prolonged the case. The case could have been settled early, but the rightsholder refused to do so, likely for financial reasons.

Keeping Cerritos’ financial position in mind, Judge Beckerman doesn’t see it as appropriate to leave the defendant with more than $17,000 in debt that could have been avoided with an early settlement.

“If the Court were to force Cerritos to pay Plaintiff’s counsel his fee of $17,348.60, it would take Cerritos and his family years and years to satisfy that debt. It is a debt that was avoidable had counsel worked together cooperatively to resolve this case.”

The Judge therefore denies the motion for attorney fees. While admitting that piracy is a problem, she doesn’t believe that these high costs are a burden an individual downloader should carry.

“Online piracy is a serious problem that demands meaningful solutions. Plaintiff has every right to enforce the copyright it holds, but not to demand that individual consumers who downloaded a single movie pay more than their share of the problem,” Judge Beckerman concludes.

Although Cerritos still lost the case and still owes the $750 in damages and $525 in other costs, he will be pleased with this outcome. Others who are in the same position will be glad too. It presents another hurdle to the ‘copyright trolls’ and makes it a little easier for their targets to fight similar demands.

Source: TF, for the latest info on copyright, file-sharing, torrent sites and ANONYMOUS VPN services.

Hosting Companies Dragged into Piracy Lawsuit Alongside Cloudflare

Post Syndicated from Andy original https://torrentfreak.com/hosting-companies-dragged-into-piracy-lawsuit-alongside-cloudflare-161126/

Faced with non-cooperative ‘pirate’ sites, copyright holders have begun targeting web services with demands for them to stop serving errant platforms.

A lot of attention has focused on search engines, domain name registrars, and advertisers, who are frequently asked to do more to counter online piracy.

This summer, adult entertainment publisher ALS Scan took it up a notch, taking legal steps to hold several third-party services accountable for the actions of several pirate sites “with no apparent function other than to display infringing [ALS] adult content.”

In a complaint filed at a California federal court, ALS Scan targeted CloudFlare and the advertising network JuicyAds over image copyright infringement carried out by the users of pirate sites (full list below) they service.

“The pirate sites would not be able to thrive were it not for third party service providers who provide valuable services to these sites,” ALS wrote.

Last month, JuicyAds was cleared of any wrongdoing and the case against it was dismissed. However, Cloudflare is still a defendant and in an amended complaint filed earlier this month, other companies have now been dragged into the dispute.

First up is well-known hosting provider OVH, which made the headlines earlier this month when it was targeted by police seeking to shut down private tracker What.cd. ALS Scan says that OVH (based in France and Canada) is responsible for providing hosting and related services to pirate sites.

Also under fire is United States hosting provider Steadfast Networks. According to ALS, like OVH this Chicago-based company also hosts illegal sites, including “pirate” image hosting platform Imagebam.com. This is a very popular site indeed, currently ranked #680 in the world by SimilarWeb with more than 40m visits per month.

According to ALS, Dolphin Media Ltd is the Hong Kong-based company behind an image hosting site operating from Imgchilli.net. Again, ALS characterizes this as a pirate platform but instead of Dolphin merely being the host, it’s claimed the company also owns and operates the service.

Finally, ALS names Hivelocity Ventures as a new defendant. According to the adult outfit, Hivelocity hosts ‘pirate’ sites including namethatpornstar.com.

“The pirate sites would not be able to thrive were it not for third party service providers who provide valuable services to these sites. These third party providers include hosts and content delivery networks,” the amended complaint reads.

According to ALS, when Cloudflare learned of this lawsuit its lawyers contacted ALS offering to hand over the information it holds on the pirate sites in question, but only in exchange for a release of liability. While that doesn’t appear to have been granted, Cloudflare did begin to play ball.

“Eventually Cloudflare identified the OVH Companies as the primary host of some of the sites in question,” the company adds, noting that despite “numerous notifications of infringement”, OVH has continued to provide hosting services to pirate sites.

“On information and belief, the OVH Companies have failed to implement and enforce a repeat infringer policy,” ALS adds.

US-based host Steadfast Networks is subjected to the same criticism. The company allegedly received numerous infringement notifications on which it failed to act, and has failed to “implement or enforce a repeat infringer policy by removing Imagebam.com from its servers.”

In respect of ImgChilli and owner Dolphin, ALS has nothing good to say either.

“This is no site like dropbox.com, however, which caters to consumers who want to share family pictures or personal oversize files. Instead, Dolphin offers to pay imgchili.net members $4.50 per thousand views of images uploaded to imgchili.net,” the complaint reads.

“Dolphin is not offering to pay members money for page views of uploaded materials to encourage consumers to share pictures of their vacations. On information and belief, Dolphin provides monetary incentives to induce members to steal and upload massive galleries of infringing adult content.”

In summary, ALS says that while some of the defendants may claim safe harbor under the DMCA, they do not qualify for its protections.

“ALS denies that any would apply, but if they do, such safe harbors have been lost through ignoring red flags of infringement, ignoring actual notifications of infringement, failure to adopt and reasonably implement a repeat infringer policy and failure to accommodate, and interference with, standard technical measures,” the amended complaint reads.

If successful, ALS is demanding actual damages of no less than $10m, statutory damages, disgorgement of defendants’ profits, trebling of damages, costs and attorneys’ fees, plus preliminary and permanent injunctive relief.

The full list of the pirate sites in the complaint:

a. imgchili.net (Dolphin, Cloudflare, OVH)
b. namethatpornstar.com (Hivelocity)
c. slimpics.com (Cloudflare)
d. cumonmy.com (Cloudflare)
e. bestofsexpics.com (Cloudflare)
f. stooorage.com (Cloudflare, OVH)
g. greenpiccs.com (Cloudflare)
h. imagebam.com (Steadfast)
i. imgsen.se (Cloudflare)
j. imgspice.com (Cloudflare)
k. imgspot.org (Cloudflare)
l. img.yt (Cloudflare)
m. vipergirls.to (Cloudflare)
n. pornwire.net (Cloudflare)
o. fboom.me (Cloudflare)
p. imgflash.net (Cloudflare)
q. imgtrex.com (Cloudflare)

Source: TF, for the latest info on copyright, file-sharing, torrent sites and ANONYMOUS VPN services.

Monitor Cluster State with Amazon ECS Event Stream

Post Syndicated from Chris Barclay original https://aws.amazon.com/blogs/compute/monitor-cluster-state-with-amazon-ecs-event-stream/

Thanks to my colleague Jay Allen for this great blog on how to use the ECS Event stream for operational tasks.


In the past, in order to obtain updates on the state of a running Amazon ECS cluster, customers have had to rely on periodically polling the state of container instances and tasks using the AWS CLI or an SDK. With the new Amazon ECS event stream feature, it is now possible to retrieve near real-time, event-driven updates on the state of your Amazon ECS tasks and container instances. Events are delivered through Amazon CloudWatch Events, and can be routed to any valid CloudWatch Events target, such as an AWS Lambda function or an Amazon SNS topic.

In this post, I show you how to create a simple serverless architecture that captures, processes, and stores event stream updates. You first create a Lambda function that scans all incoming events to determine if there is an error related to any running tasks (for example, if a scheduled task failed to start); if so, the function immediately sends an SNS notification. Your function then stores the entire message as a document inside of an Elasticsearch cluster using Amazon Elasticsearch Service, where you and your development team can use the Kibana interface to monitor the state of your cluster and search for diagnostic information in response to issues reported by users.
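The error-scanning step that Lambda function performs can be sketched as a simple filter over incoming events. This is a sketch under stated assumptions: the alert criterion (a stopped task with a non-zero container exit code) is one plausible choice rather than the only one, and the SNS publish and Elasticsearch indexing calls are elided as comments.

```python
# Sketch of the error-detection logic a Lambda function might apply to
# ECS event stream events delivered via CloudWatch Events.
def is_task_error(event: dict) -> bool:
    """Return True if a task state change event looks like a failure,
    e.g. a stopped task whose container exited with a non-zero code."""
    if event.get("detail-type") != "ECS Task State Change":
        return False
    detail = event.get("detail", {})
    if detail.get("lastStatus") != "STOPPED":
        return False
    return any(c.get("exitCode", 0) != 0 for c in detail.get("containers", []))

def handler(event, context):
    if is_task_error(event):
        # Publish an alert here, e.g. with boto3:
        # boto3.client("sns").publish(TopicArn=..., Message=json.dumps(event))
        pass
    # Index the full event into Elasticsearch here so it is searchable in Kibana.
    return {"alerted": is_task_error(event)}
```

Keeping the classification in a small pure function like `is_task_error` makes the alert criteria easy to unit test independently of the AWS service calls.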

Understanding the structure of event stream events

An ECS event stream sends two types of event notifications:

  • Task state change notifications, which ECS fires when a task starts or stops
  • Container instance state change notifications, which ECS fires when the resource utilization or reservation for an instance changes

A single event may result in ECS sending multiple notifications of both types. For example, if a new task starts, ECS first sends a task state change notification to signal that the task is starting, followed by a notification when the task has started (or has failed to start); additionally, ECS also fires container instance state change notifications when the utilization of the instance on which ECS launches the task changes.

Event stream events are sent using CloudWatch Events, which structures events as JSON messages divided into two sections: the envelope and the payload. The detail section of each event contains the payload data, and the structure of the payload is specific to the event being fired. The following example shows the JSON representation of a container state change event. Notice that the properties at the top level of the JSON document describe event properties, such as the event name and the time the event occurred, while the detail section contains the information about the task and container instance that triggered the event.

The following JSON depicts an ECS task state change event signifying that the essential container for a task running on an ECS cluster has exited, and thus the task has been stopped on the ECS cluster:

{
    "version": "0",
    "id": "8f07966c-b005-4a0f-9ee9-63d2c41448b3",
    "detail-type": "ECS Task State Change",
    "source": "aws.ecs",
    "account": "244698725403",
    "time": "2016-10-17T20:29:14Z",
    "region": "us-east-1",
    "resources": [
        "arn:aws:ecs:us-east-1:123456789012:task/cdf83842-a918-482b-908b-857e667ce328"
    ],
    "detail": {
        "clusterArn": "arn:aws:ecs:us-east-1:123456789012:cluster/eventStreamTestCluster",
        "containerInstanceArn": "arn:aws:ecs:us-east-1:123456789012:container-instance/f813de39-e42c-4a27-be3c-f32ebb79a5dd",
        "containers": [
            {
                "containerArn": "arn:aws:ecs:us-east-1:123456789012:container/4b5f2b75-7d74-4625-8dc8-f14230a6ae7e",
                "exitCode": 1,
                "lastStatus": "STOPPED",
                "name": "web",
                "networkBindings": [
                    {
                        "bindIP": "",
                        "containerPort": 80,
                        "hostPort": 80,
                        "protocol": "tcp"
                    }
                ],
                "taskArn": "arn:aws:ecs:us-east-1:123456789012:task/cdf83842-a918-482b-908b-857e667ce328"
            }
        ],
        "createdAt": "2016-10-17T20:28:53.671Z",
        "desiredStatus": "STOPPED",
        "lastStatus": "STOPPED",
        "overrides": {
            "containerOverrides": [
                {
                    "name": "web"
                }
            ]
        },
        "startedAt": "2016-10-17T20:29:14.179Z",
        "stoppedAt": "2016-10-17T20:29:14.332Z",
        "stoppedReason": "Essential container in task exited",
        "updatedAt": "2016-10-17T20:29:14.332Z",
        "taskArn": "arn:aws:ecs:us-east-1:123456789012:task/cdf83842-a918-482b-908b-857e667ce328",
        "taskDefinitionArn": "arn:aws:ecs:us-east-1:123456789012:task-definition/wpunconfiguredfail:1",
        "version": 3
    }
}

Setting up an Elasticsearch cluster

Before you dive into the code for handling events, set up your Elasticsearch cluster. On the console, choose Elasticsearch Service, Create a New Domain. In Elasticsearch domain name, type elasticsearch-ecs-events, then choose Next.

For Step 2: Configure cluster, accept all of the defaults by choosing Next.

For Step 3: Set up access policy, choose Next. This page lets you establish a resource-based policy for accessing your cluster; to allow access to the cluster’s actions, use an identity-based policy associated with your Lambda function.

Finally, on the Review page, choose Confirm and create. This starts spinning up your cluster.

While your cluster is being created, set up the SNS topic and Lambda function you need to start capturing and issuing notifications about events.

Create an SNS topic

Because your Lambda function emails you when a task fails unexpectedly due to an error condition, you need to set up an Amazon SNS topic to which your Lambda function can write.

In the console, choose SNS, Create Topic. For Topic name, type ECSTaskErrorNotification, and then choose Create topic.

When you’re done, copy the Topic ARN value, and save it to a text editor on your local desktop; you need it to configure permissions for your Lambda function in the next step. Finally, choose Create subscription to subscribe to an email address for which you have access, so that you receive these event notifications. Remember to click the link in the confirmation email, or you won’t receive any events.

The eagle-eyed among you may realize that you haven’t given your future Lambda function permission to call your SNS topic. You grant this permission to the Lambda execution role when you create your Lambda function in the following steps.

Handling event stream events in a Lambda function

For the next step, create your Lambda function to capture events. Here’s the code for your function (written in Python 2.7):

import json

from requests_aws_sign import AWSV4Sign
from boto3 import session, client
from elasticsearch import Elasticsearch, RequestsHttpConnection

es_host = '<insert your own Amazon Elasticsearch endpoint here>'
sns_topic = '<insert your own SNS topic ARN here>'

def lambda_handler(event, context):
    # Establish credentials
    session_var = session.Session()
    credentials = session_var.get_credentials()
    region = session_var.region_name or 'us-east-1'

    # Check to see if this event is a task event and, if so, if it contains
    # information about a task failure. If so, send an SNS notification.
    if "detail-type" not in event:
        raise ValueError("ERROR: event object is not a valid CloudWatch Events event")
    if event["detail-type"] == "ECS Task State Change":
        detail = event["detail"]
        if detail["lastStatus"] == "STOPPED":
            if detail["stoppedReason"] == "Essential container in task exited":
                # Send an error status message.
                sns_client = client('sns')
                sns_client.publish(
                    TopicArn=sns_topic,
                    Subject="ECS task failure detected for container",
                    Message=json.dumps(detail)
                )

    # Elasticsearch connection. Note that you must sign your requests in order
    # to call the Amazon ES API. Use the requests_aws_sign package for this.
    service = 'es'
    auth = AWSV4Sign(credentials, region, service)
    es_client = Elasticsearch(host=es_host,
                              port=443,
                              connection_class=RequestsHttpConnection,
                              http_auth=auth,
                              use_ssl=True,
                              verify_certs=True)

    es_client.index(index="ecs-index", doc_type="eventstream", body=event)

Break this down: First, the function inspects the event to see if it is a task change event. If so, it further looks to see if the event is reporting a stopped task, and whether that task stopped because one of its essential containers terminated. If these conditions are true, it sends a notification to the SNS topic that you created earlier.

Second, the function creates an Elasticsearch connection to your Amazon ES instance. The function uses the requests_aws_sign library to implement Sig4 signing because, in order to call Amazon ES, you need to sign all requests with the Sig4 signing process. After the Sig4 signature is generated, the function calls Amazon ES and adds the event to an index for later retrieval and inspection.

To get this code to work, your Lambda function must have permission to perform HTTP POST requests against your Amazon ES instance, and to publish messages to your SNS topic. Configure this by setting up your Lambda function with an execution role that grants the appropriate permission to these resources in your account.

To get started, you need to prepare a ZIP file for the above code that contains both the code and its prerequisites. Create a directory named lambda_eventstream, and save the code above to a file named lambda_function.py. In your favorite text editor, replace the es_host and sns_topic variables with your own Amazon ES endpoint and SNS topic ARN, respectively.

Next, on the command line (Linux, Windows or Mac), change to the directory that you just created, and run the following command for pip (the de facto standard Python installation utility) to download all of the required prerequisites for this code into the directory. You need to ship these dependencies with your code, as they are not pre-installed on the instance that runs your Lambda function.

NOTE: You need to be on a machine with Python and pip already installed. If you are using Python 2.7.9 or greater, pip is installed as part of your standard Python installation. If you are not using Python 2.7.9 or greater, consult the pip page for installation instructions.

pip install requests_aws_sign elasticsearch -t .

Finally, zip all of the contents of this directory into a single zip file. Make sure that the lambda_function.py file is at the top of the file hierarchy within the zip file, and that it is not contained within another directory. From within the lambda_eventstream directory, you can use the following command on Linux and MacOS systems:

zip lambda-eventstream.zip *

On Windows clients with the 7-Zip utility installed, you can run the following command from PowerShell or, if you’re really so inclined, a command prompt:

7z a -tzip lambda-eventstream.zip *

Now that your function and its dependencies are properly packaged, install and test it. Navigate to the Lambda console, choose Create a Lambda Function, and then on the Select Blueprint page, choose Blank function. Choose Next on the Configure triggers screen; you wire up your function to your ECS event stream in the next section.

On the Configure function page, for Name, enter lambda-eventstream. For Runtime, choose Python 2.7. Under Lambda function code, for Code entry type, choose Upload a .ZIP file, and choose Upload to select the ZIP file that you just created.

Under Lambda function handler and role, for Role, choose Create a custom role. This opens a new window for configuring your policy. For IAM Role, choose Create a New IAM Role, and type a name. Then choose View Policy Document, Edit. Paste in the IAM policy below, making sure to replace every instance of AWSAccountID with your own AWS account ID.

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "logs:CreateLogGroup",
                "logs:CreateLogStream",
                "logs:PutLogEvents"
            ],
            "Resource": "arn:aws:logs:*:*:*"
        },
        {
            "Effect": "Allow",
            "Action": [
                "es:ESHttpPost",
                "es:ESHttpPut"
            ],
            "Resource": "arn:aws:es:us-east-1:<AWSAccountID>:domain/ecs-events-cluster/*"
        },
        {
            "Effect": "Allow",
            "Action": [
                "sns:Publish"
            ],
            "Resource": "arn:aws:sns:us-east-1:<AWSAccountID>:ECSTaskErrorNotification"
        }
    ]
}

This policy establishes every permission that your Lambda function requires for execution, including permission to:

  • Create a new CloudWatch Logs log group, and save all outputs from your Lambda function to this group
  • Perform HTTP PUT commands on your Elasticsearch cluster
  • Publish messages to your SNS topic

When you’re done, you can test your configuration by scrolling up to the sample event stream message provided earlier in this post, and using it to test your Lambda function in the console. On the dashboard page for your new function, choose Test, and in the Input test event window, enter the JSON-formatted event from earlier.

Note that, if you haven’t correctly input your account ID in the correct places in your IAM policy file, you may receive a message along the lines of:

User: arn:aws:sts::123456789012:assumed-role/LambdaEventStreamTake2/awslambda_421_20161017203411268 is not authorized to perform: es:ESHttpPost on resource: ecs-events-cluster.

Edit the policy associated with your Lambda execution role in the IAM console and try again.

Send event stream events to your Lambda function

Almost there! Now with your SNS topic, Elasticsearch cluster, and Lambda function all in place, the only remaining element is to wire up your ECS event stream events and route them to your Lambda function. The CloudWatch Events console offers everything you need to set this up quickly and easily.

From the console, choose CloudWatch, Events. On Step 1: Create Rule, under Event selector, choose Amazon EC2 Container Service. CloudWatch Events enables you to filter by the type of message (task state change or container instance state change), as well as to select a specific cluster from which to receive events. For the purposes of this post, keep the default settings of Any detail type and Any cluster.
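With those defaults, the resulting rule matches on nothing but the event source; the event pattern behind it is essentially a one-liner (a sketch of the pattern the console generates under these selections):

```json
{
  "source": ["aws.ecs"]
}
```

Choosing a specific detail type or cluster in the console simply adds "detail-type" or cluster-ARN constraints to this pattern.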

Under Targets, choose Lambda function. For Function, choose lambda-eventstream. Behind the scenes, this sends events from your ECS clusters to your Lambda function and also creates the service role required for CloudWatch Events to call your Lambda function.

Verify your work

Now it’s time to verify that messages sent from your ECS cluster flow through your Lambda function, trigger an SNS message for failed tasks, and are stored in your Elasticsearch cluster for future retrieval. To test this workflow, you can use the following ECS task definition, which attempts to start the official WordPress image without configuring an SQL database for storage:

{
    "taskDefinition": {
        "status": "ACTIVE",
        "family": "wpunconfiguredfail",
        "volumes": [],
        "taskDefinitionArn": "arn:aws:ecs:us-east-1:244698725403:task-definition/wpunconfiguredfail:1",
        "containerDefinitions": [
            {
                "environment": [],
                "name": "web",
                "mountPoints": [],
                "image": "wordpress",
                "cpu": 99,
                "portMappings": [
                    {
                        "protocol": "tcp",
                        "containerPort": 80,
                        "hostPort": 80
                    }
                ],
                "memory": 100,
                "essential": true,
                "volumesFrom": []
            }
        ],
        "revision": 1
    }
}

Create this task definition using either the AWS Management Console or the AWS CLI, and then start a task from this task definition. For more detailed instructions, see Launching a Container Instance.

A few minutes after launching this task definition, you should receive an SNS message with the contents of the task state change JSON indicating that the task failed. You can also examine your Elasticsearch cluster in the console by selecting the name of your cluster and choosing Indices, ecs-index. For Count, you should see that you have multiple records stored.

You can also search the messages that have been stored by opening up access to your Kibana endpoint. Kibana provides a host of visualization and search capabilities for data stored in Amazon ES. To open up access to Kibana to your computer, find your computer’s IP address, and then choose Modify access policy for your Elasticsearch cluster. For Set the domain access policy to, choose Allow access to the domain from specific IP(s) and enter your IP address.

(A more robust and scalable solution for securing Kibana is to front it with a proxy. Details on this approach can be found in Karthi Thyagarajan’s post How to Control Access to Your Amazon Elasticsearch Service Domain.)

You should now be able to open the Kibana endpoint for your cluster, and search for messages stored in your cluster’s indexes.


After you have this basic, serverless architecture set up for consuming ECS cluster-related event notifications, the possibilities are limitless. For example, instead of storing the events in Amazon ES, you could store them in Amazon DynamoDB, and use the resulting tables to build a UI that materializes the current state of your clusters.

You could also use this information to drive container placement and scaling automatically, allowing you to “right-size” your clusters to a very granular level. By delivering cluster state information in near-real time using an event-driven model as opposed to a pull model, the new ECS event stream feature opens up a much wider array of possibilities for monitoring and scaling your container infrastructure.

If you have questions or suggestions, please comment below.

WTF Yahoo/FISA search in kernel?

Post Syndicated from Robert Graham original http://blog.erratasec.com/2016/10/wtf-yahoofisa-search-in-kernel.html

A surprising detail in the Yahoo/FISA email search scandal is that they do it with a kernel module. I thought I’d write up some (rambling) notes.

What the government was searching for

As described in the previous blog post, we’ll assume the government is searching for the following string, and possibly other strings like it within emails:

### Begin ASRAR El Mojahedeen v2.0 Encrypted Message ###

I point this out because it’s a simple search for identifying strings. It’s not natural language processing. It’s not searching for phrases like “bomb president”.

Also, it’s not AV/spam/childporn processing. Those look at different things. For example, filtering message containing childporn involves calculating a SHA2 hash of email attachments and looking up the hashes in a table of known bad content (or even more in-depth analysis). This is quite different from searching.

The Kernel vs. User Space

Operating systems have two parts, the kernel and user space. The kernel is the operating system proper (e.g. the “Linux kernel”). The software we run is in user space, such as browsers, word processors, games, web servers, databases, GNU utilities [sic], and so on.

The kernel has raw access to the machine, memory, network devices, graphics cards, and so on. User space has virtual access to these things. The user space is the original “virtual machines”, before kernels got so bloated that we needed a third layer to virtualize them too.

This separation between kernel and user has two main benefits. The first is security, controlling which bit of software has access to what. It means, for example, that one user on the machine can’t access another’s files. The second benefit is stability: if one program crashes, the others continue to run unaffected.

Downside of a Kernel Module

Writing a search program as a kernel module (instead of a user space module) defeats the benefits of user space programs, making the machine less stable and less secure.

Moreover, the sort of thing this module does (parsing emails) has a history of big gaping security flaws. Parsing stuff in the kernel makes cybersecurity experts run away screaming in terror.

On the other hand, people have been doing security stuff (SSL implementations and anti-virus scanning) in the kernel in other situations, so it’s not unprecedented. I mean, it’s still wrong, but it’s been done before.

Upside of a Kernel Module

If doing this as a kernel module (instead of in user space) is so bad, then why does Yahoo do it? It’s probably due to the widely held, but false, belief that putting stuff in the kernel makes it faster.

Everybody knows that kernels are faster, for two reasons. First is that as a program runs, making a system call switches context, from running in user space to running in kernel space. This step is expensive/slow. Kernel modules don’t incur this expense, because code just jumps from one location in the kernel to another. The second performance issue is virtual memory, where reading memory requires an extra step in user space, to translate the virtual memory address to a physical one. Kernel modules access physical memory directly, without this extra step.

But everyone is wrong. Using features like hugepages gets rid of the virtual-memory translation cost. There are ways to mitigate the cost of user/kernel transitions, such as moving data in bulk instead of a little bit at a time. Also, CPUs have improved in recent years, dramatically reducing the cost of a kernel/user transition.
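The bulk-transfer point is easy to demonstrate from user space: each os.read() below is one read(2) system call, so moving the same data in larger buffers means proportionally fewer kernel/user transitions (a sketch; the temp file and sizes are arbitrary):

```python
import os
import tempfile

def read_counting_syscalls(path, chunk_size):
    """Read a whole file; each os.read() here is one read(2) system call."""
    syscalls = 0
    data = b""
    fd = os.open(path, os.O_RDONLY)
    try:
        while True:
            chunk = os.read(fd, chunk_size)
            syscalls += 1
            if not chunk:  # empty read signals EOF
                break
            data += chunk
    finally:
        os.close(fd)
    return data, syscalls

# 64 KiB of data: 512-byte reads cost 129 syscalls, one bulk read costs 2
# (the extra call is the empty read that signals EOF).
with tempfile.NamedTemporaryFile(delete=False) as f:
    f.write(b"x" * 65536)
    path = f.name

_, small = read_counting_syscalls(path, 512)
_, bulk = read_counting_syscalls(path, 65536)
print(small, bulk)  # 129 2
os.unlink(path)
```

The same data crosses the kernel/user boundary either way; only the number of transitions changes, which is exactly the cost the bulk-transfer trick amortizes.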

The problem we face, though, is inertia. Everyone knows moving modules into the kernel makes things faster. It’s hard getting them to un-learn what they’ve been taught.

Also, following this logic, Yahoo may already have many email handling functions in the kernel. If they’ve already gone down the route of bad design, then they’d have to do this email search as a kernel module as well, to avoid the user/kernel transition cost.

Another possible reason for the kernel-module is that it’s what the programmers knew how to do. That’s especially true if the contractor has experience with other kernel software, such as NSA implants. They might’ve read Phrack magazine on the topic, which might have been their sole education on the subject. [http://phrack.org/issues/61/13.html]

How it was probably done

I don’t know Yahoo’s infrastructure. Presumably they have front-end systems designed to balance the load (and accelerate SSL processing), and back-end systems that do the heavy processing, such as spam and virus checking.

The typical way to do this sort of thing (search) is simply to tap into the network traffic, either as a separate computer sniffing (eavesdropping on) the network, or something within the system that taps into the network traffic, such as a netfilter module. Netfilter is the Linux firewall mechanism, and has ways to easily “hook” into specific traffic, either from user space or from a kernel module. There is also a related user space mechanism of hooking network APIs like recv() with a preload shared library.

This traditional mechanism doesn’t work as well anymore. For one thing, incoming email traffic is likely encrypted using SSL (using STARTTLS, for example). For another thing, companies are increasingly encrypting intra-data-center traffic, either with SSL or with hard-coded keys.

Therefore, instead of tapping into network traffic, the code might tap directly into the mail handling software. A good example of this is Sendmail’s milter interface, that allows the easy creation of third-party mail filtering applications, specifically for spam and anti-virus.

But it would be insane to write a milter as a kernel module, since mail handling is done in user space, thus adding unnecessary user/kernel transitions. Consequently, we make the assumption that Yahoo’s intra-data-center traffic is unencrypted, and that for the FISA search, they wrote something like a kernel module with netfilter hooks.

How it should’ve been done

Assuming the above guess is correct, that they used kernel netfilter hooks, there are a few alternatives.

They could do user space netfilter hooks instead, but they do have a performance impact. They require a transition from the kernel to user, then a second transition back into the kernel. If the system is designed for high performance, this might be a noticeable performance impact. I doubt it, as it’s still small compared to the rest of the computations involved, but it’s the sort of thing that engineers are prejudiced against, even before they measure the performance impact.

A better way of doing it is hooking the libraries. These days, most software uses shared libraries (.so) to make system calls like recv(). You can write your own shared library, and preload it. When the library function is called, you do your own processing, then call the original function.

Hooking the libraries then lets you tap into the network traffic, but without any additional kernel/user transition.

Yet another way is simple changes in the mail handling software that allows custom hooks to be written.

Third party contractors

We’ve been thinking in terms of technical solutions. There is also the problem of politics.

Almost certainly, the solution was developed by outsiders, by defense contractors like Booz-Allen. (I point them out because of the whole Snowden/Martin thing). This restricts your technical options.

You don’t want to give contractors access to your source code. Nor do you want to the contractors to be making custom changes to your source code, such as adding hooks. Therefore, you are looking at external changes, such as hooking the network stack.

The advantage of a netfilter hook in the kernel is that it has the least additional impact on the system. It can be developed and thoroughly tested by Booz-Allen, then delivered to Yahoo!, who can then install it with little effort.

This is my #1 guess why this was a kernel module – it allowed the most separation between Yahoo! and a defense contractor who wrote it. In other words, there is no technical reason for it — but a political reason.

Let’s talk search

There are two ways to search things: using an NFA and using a DFA.

An NFA is the normal way of using regex, or grep. It allows complex patterns to be written, but it requires a potentially large amount of CPU power (i.e. it’s slow). It also requires backtracking within a message, thus meaning the entire email must be reassembled before searching can begin.

The DFA alternative instead creates a large table in memory, then does a single pass over a message to search. Because it does only a single pass, without backtracking, the message can be streamed through the search module, without needing to reassemble the message. In theory, anything searched by an NFA can be searched by a DFA, though in practice some unbounded regex expressions require too much memory, so DFAs usually require simpler patterns.
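To make the single-pass, no-backtracking property concrete, here's a minimal Python sketch: a classic KMP-style DFA for one fixed pattern, fed a stream of chunks. The automaton's state carries across chunk boundaries, so the message never has to be reassembled (the pattern and chunk splits are illustrative):

```python
def build_dfa(pattern):
    """Build a KMP-style DFA for one fixed pattern.

    dfa[state] maps a character to the next state; the state number is how
    many pattern characters have been matched so far, and reaching
    len(pattern) is a hit. Missing entries mean "go back to state 0".
    """
    m = len(pattern)
    dfa = [dict() for _ in range(m)]
    dfa[0][pattern[0]] = 1
    restart = 0  # state after dropping the first matched character
    for state in range(1, m):
        dfa[state].update(dfa[restart])         # mismatch: fall back
        dfa[state][pattern[state]] = state + 1  # match: advance
        restart = dfa[restart].get(pattern[state], 0)
    return dfa

def stream_search(dfa, chunks):
    """Single pass over streamed chunks; no reassembly, no backtracking."""
    state, m = 0, len(dfa)
    for chunk in chunks:
        for ch in chunk:
            state = dfa[state].get(ch, 0)
            if state == m:
                return True
    return False

dfa = build_dfa("### Begin ASRAR")
# The pattern straddles a chunk boundary, as it would in streamed traffic.
print(stream_search(dfa, ["...body text ### Beg", "in ASRAR El Moj..."]))  # True
print(stream_search(dfa, ["nothing to ", "see here"]))                     # False
```

Each input character is examined exactly once and only the current state (one integer) survives between chunks, which is why the DFA approach can sit directly on a packet stream while the NFA approach has to buffer the whole message first.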

The DFA approach, by the way, is about 4-gbps per 2.x-GHz Intel x86 server CPU. Because no reassembly is required, it can tap directly into anything above the TCP stack, like netfilter. Or, it can tap below the TCP stack (like libpcap), but would require some logic to re-order/de-duplicate TCP packets, to present the same ordered stream as TCP.

Beyond the pattern table, DFAs therefore require little or no per-message memory. In contrast, the NFA approach will require more CPU and memory just to reassemble email messages, and the search itself would also be slower.

The naïve approach to searching is to use NFAs. It’s what most people start out with. The smart approach is to use DFAs. You see that in the evolution of the Snort intrusion detection engine, where they started out using complex NFAs and then over the years switched to the faster DFAs.

You also see it in the network processor market. These are specialized CPUs designed for things like firewalls. They advertise fast regex acceleration, but what they really do is just convert NFAs into something that is mostly a DFA, which you can do on any processor anyway. I have a low opinion of network processors, since what they accelerate are bad decisions. Correctly designed network applications don’t need any special acceleration, except maybe SSL public-key crypto.

So, what the government’s code needs to do is a very lightweight parse of the SMTP protocol in order to extract the from/to email addresses, then a very lightweight search of the message’s content in order to detect if any of the offending strings have been found. When the pattern is found, it then reports the addresses it found.


I don’t know Yahoo’s system for processing incoming emails. I don’t know the contents of the court order forcing them to do a search, and what needs to be secret. Therefore, I’m only making guesses here.

But they are educated guesses. Nine times out of 10, in situations similar to Yahoo’s, a “kernel module” would be the most natural solution. It’s how engineers are trained to think, and it would likely be the best fit organizationally. Sure, it really REALLY annoys cybersecurity experts, but nobody cares what we think, so that doesn’t matter.