Tag Archives: hosting

Three Men Sentenced Following £2.5m Internet Piracy Case

Post Syndicated from Andy original https://torrentfreak.com/three-men-sentenced-following-2-5m-internet-piracy-case-170622/

While legal action against low-level individual file-sharers is extremely rare in the UK, the country continues to pose a risk for those engaged in larger-scale infringement.

That is largely due to the activities of the Police Intellectual Property Crime Unit and private anti-piracy outfits such as the Federation Against Copyright Theft (FACT). Investigations are often a joint effort that can take many years to complete, and the outcomes frequently involve criminal sentences.

That was the profile of another Internet piracy case that concluded in London this week. It involved three men from the UK: Eric Brooks, 43, from Bolton; Mark Valentine, 44, from Manchester; and Craig Lloyd, 33, from Wolverhampton.

The case began when FACT became aware of potentially infringing activity back in February 2011. The anti-piracy group then investigated for more than a year before handing the case to police in March 2012.

On July 4, 2012, officers from City of London Police arrested Eric Brooks at his home in Bolton following a joint raid with FACT. Computer equipment was seized containing evidence that Brooks had been running a Netherlands-based server hosting more than £100,000 worth of pirated films, music, games, software and ebooks.

According to police, a spreadsheet on Brooks’ computer revealed he had hundreds of paying customers, all recruited from online forums. Each paid via PayPal or bank transfer for access to the server. Police mentioned no group or site names in the information released this week.

“Enquiries with PayPal later revealed that [Brooks] had made in excess of £500,000 in the last eight years from his criminal business and had in turn defrauded the film and TV industry alone of more than £2.5 million,” police said.

“As his criminal enterprise affected not only the film and TV but the wider entertainment industry including music, games, books and software it is thought that he cost the wider industry an amount much higher than £2.5 million.”

On the same day police arrested Brooks, Mark Valentine’s home in Manchester had a similar unwelcome visit. A day later, Craig Lloyd’s home in Wolverhampton became the third target for police.

Computer equipment was seized from both addresses which revealed that the pair had been paying for access to Brooks’ servers in order to service their own customers.

“They too had used PayPal as a means of taking payment and had earned thousands of pounds from their criminal actions; Valentine gaining £34,000 and Lloyd making over £70,000,” police revealed.

But after raiding the trio in 2012, police took more than four years to charge the men. In a feature common to many FACT cases, all three were charged with Conspiracy to Defraud rather than copyright infringement offenses. All three men pleaded guilty before trial.

On Monday, the men were sentenced at Inner London Crown Court. Brooks was sentenced to 24 months in prison, suspended for 12 months and ordered to complete 140 hours of unpaid work.

Valentine and Lloyd were each given 18 months in prison, suspended for 12 months. Each was ordered to complete 80 hours of unpaid work.

Detective Constable Chris Glover, who led the investigation for the City of London Police, welcomed the sentencing.

“The success of this investigation is a result of co-ordinated joint working between the City of London Police and FACT. Brooks, Valentine and Lloyd all thought that they were operating under the radar and doing something which they thought was beyond the controls of law enforcement,” Glover said.

“Brooks, Valentine and Lloyd will now have time in prison to reflect on their actions and the result should act as deterrent for anyone else who is enticed by abusing the internet to the detriment of the entertainment industry.”

While even suspended sentences are a serious matter, none of the men will see the inside of a cell if they meet the conditions of their sentence for the next 12 months. For a case lasting four years involving such large sums of money, that is probably a disappointing result for FACT and the police.

Nevertheless, the men won’t be allowed to enjoy the financial proceeds of their piracy, if indeed any money is left. City of London Police say the trio will be subject to a future confiscation hearing to seize any proceeds of crime.

Source: TF, for the latest info on copyright, file-sharing, torrent sites and ANONYMOUS VPN services.

Court Grants Subpoenas to Unmask ‘TVAddons’ and ‘ZemTV’ Operators

Post Syndicated from Ernesto original https://torrentfreak.com/court-grants-subpoenas-to-unmask-tvaddons-and-zemtv-operators-170621/

Earlier this month we broke the news that third-party Kodi add-on ZemTV and the TVAddons library were being sued in a federal court in Texas.

In a complaint filed by American satellite and broadcast provider Dish Network, both stand accused of copyright infringement, facing up to $150,000 for each offense.

While the allegations are serious, Dish doesn’t know the full identities of the defendants.

To find out more, the company requested a broad range of subpoenas from the court, targeting Amazon, Github, Google, Twitter, Facebook, PayPal, and several hosting providers.

From Dish’s request

This week the court granted the subpoenas, which means that they can be forwarded to the companies in question. Whether that will be enough to identify the people behind ‘TVAddons’ and ‘ZemTV’ remains to be seen, but Dish has cast its net wide.

For example, the subpoena directed at Google covers any type of information that can be used to identify the account holder of taacc14@gmail.com, which is believed to be tied to ZemTV.

The information requested from Google includes IP address logs with session date and timestamps, but also covers “all communications,” including GChat messages from 2014 onwards.

Similarly, Twitter is required to hand over information tied to the accounts of the users “TV Addons” and “shani_08_kodi” as well as other accounts linked to tvaddons.ag and streamingboxes.com. This also applies to the various tweets that were sent through those accounts.

The subpoena specifically mentions “all communications, including ‘tweets’, Twitter sent to or received from each Twitter Account during the time period of February 1, 2014 to present.”

From the Twitter subpoena

Similar subpoenas were granted for the other services, tailored towards the information Dish hopes to find there. For example, the broadcast provider also requests details of each transaction from PayPal, as well as all debits and credits to the accounts.

In some parts, the subpoenas appear to be quite broad. PayPal is asked to reveal information on any account with the credit card statement “Shani,” for example. Similarly, Github is required to hand over information on accounts that are ‘associated’ with the tvaddons.ag domain, which is referenced by many people who are not directly connected to the site.

The service providers in question still have the option to challenge the subpoenas or ask the court for further clarification. A full overview of all the subpoena requests is available here (Exhibit 2 and onwards), including all the relevant details. This also includes several letters to foreign hosting providers.

While Dish still appears to be keen to find out who is behind ‘TVAddons’ and ‘ZemTV,’ not much has been heard from the defendants in question.

ZemTV developer “Shani” shut down his addon soon after the lawsuit was announced, without mentioning it specifically. TVAddons, meanwhile, has been offline for well over a week, without any notice in public about the reason for the prolonged downtime.

The court’s order granting the subpoenas and letters of request is available here (pdf).

Source: TF, for the latest info on copyright, file-sharing, torrent sites and ANONYMOUS VPN services.

BPI Breaks Record After Sending 310 Million Google Takedowns

Post Syndicated from Andy original https://torrentfreak.com/bpi-breaks-record-after-sending-310-million-google-takedowns-170619/

A little over a year ago, in March 2016, music industry group BPI reached an important milestone. After years of sending takedown notices to Google, the group burst through the 200 million URL barrier.

That it took BPI several years to reach 200 million made passing the quarter-billion mark just a few months later all the more remarkable. In October 2016, the group sent its 250 millionth takedown to Google, a figure that nearly doubled when accounting for notices sent to Microsoft’s Bing.

But despite the volumes, the battle hadn’t been won, let alone the war. The BPI’s takedown machine continued to run at a remarkable rate, churning out millions more notices per week.

As a result, yet another new milestone was reached this month when the BPI smashed through the 300 million URL barrier. Then, days later, a further 10 million were added, the last couple of million arriving in the time it took to put this piece together.

BPI takedown notices, as reported by Google

While demanding that Google places greater emphasis on its de-ranking of ‘pirate’ sites, the BPI has called again and again for a “notice and stay down” regime, to ensure that content taken down by the search engine doesn’t simply reappear under a new URL. It’s a position BPI maintains today.

“The battle would be a whole lot easier if intermediaries played fair,” a BPI spokesperson informs TF.

“They need to take more proactive responsibility to reduce infringing content that appears on their platform, and, where we expressly notify infringing content to them, to ensure that they do not only take it down, but also keep it down.”

The long-standing suggestion is that the volume of takedown notices sent would fall if a “take down, stay down” regime were implemented. The BPI says it’s difficult to present a precise figure, but infringing content has a tendency to reappear, both in search engines and on hosting sites.

“Google rejects repeat notices for the same URL. But illegal content reappears as it is re-indexed by Google. As to the sites that actually host the content, the vast majority of notices sent to them could be avoided if they implemented take-down & stay-down,” BPI says.

The fact that the BPI has added 60 million more takedowns since the quarter billion milestone a few months ago is quite remarkable, particularly since there appears to be little slowdown from month to month. However, the numbers have grown so huge that 310 million now feels a lot like 250 million, with just a few added on top for good measure.

That an extra 60 million takedowns can almost be dismissed as a handful is an indication of just how massive the issue is online. While pirates always welcome an abundance of links to juicy content, it’s no surprise that groups like the BPI are seeking more comprehensive and sustainable solutions.

Previously, it was hoped that the Digital Economy Bill would provide some relief, hopefully via government intervention and the imposition of a search engine Code of Practice. In the event, however, all pressure on search engines was removed from the legislation after a separate voluntary agreement was reached.

All parties agreed that the voluntary code would come into effect on June 1, two weeks ago, so some effects should be noticeable in the near future. But the BPI says it’s still early days and there’s more work to be done.

“BPI has been working productively with search engines since the voluntary code was agreed to understand how search engines approach the problem, but also what changes can and have been made and how results can be improved,” the group explains.

“The first stage is to benchmark where we are and to assess the impact of the changes search engines have made so far. This will hopefully be completed soon, then we will have better information of the current picture and from that we hope to work together to continue to improve search for rights owners and consumers.”

With more takedown notices in the pipeline not yet publicly reported by Google, the BPI informs TF that it has now notified the search giant of 315 million links to illegal content.

“That’s an astonishing number. More than 1 in 10 of the entire world’s notices to Google come from BPI. This year alone, one in every three notices sent to Google from BPI is for independent record label repertoire,” BPI concludes.

While groups like BPI have clearly developed systems to cope with the huge numbers of takedown notices required in today’s environment, few rightsholders are happy with the status quo. With that in mind, the fight will continue until search engines are forced into a compromise, and considering the implications, that could sit on a very distant horizon.

Source: TF, for the latest info on copyright, file-sharing, torrent sites and ANONYMOUS VPN services.

ACME v2 API Endpoint Coming January 2018

Post Syndicated from Let's Encrypt - Free SSL/TLS Certificates original https://letsencrypt.org//2017/06/14/acme-v2-api.html

Let’s Encrypt will add support for the IETF-standardized ACME v2 protocol in January of 2018. We will be adding a new ACME v2 API endpoint alongside our existing ACME v1 protocol API endpoint. We are not setting an end-of-life date for our ACME v1 API at this time, though we recommend that people move to the ACME v2 endpoint as soon as possible once it’s available. For most subscribers, this will happen automatically via a hosting provider or normal ACME client software update.
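
For ACME client developers, the change largely boils down to which directory URL the client bootstraps from, since every other endpoint is discovered from the directory object. As a minimal sketch in Python (the v2 URL shown is an assumption extrapolated from the v1 naming convention; this post does not announce the final address):

```python
import json
import urllib.request

# ACME clients bootstrap by fetching a "directory" that maps operation
# names to endpoint URLs. The v1 URL below is Let's Encrypt's current
# production endpoint; the v2 URL is an assumption modeled on the v1
# naming scheme, purely for illustration.
ACME_V1_DIRECTORY = "https://acme-v01.api.letsencrypt.org/directory"
ACME_V2_DIRECTORY = "https://acme-v02.api.letsencrypt.org/directory"  # assumed

def fetch_directory(url):
    """Fetch and parse the ACME directory object served at `url`."""
    with urllib.request.urlopen(url) as response:
        return json.load(response)

if __name__ == "__main__":
    directory = fetch_directory(ACME_V1_DIRECTORY)
    for name, endpoint in sorted(directory.items()):
        print(name, endpoint)
```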

The ACME protocol, initially developed by the team behind Let’s Encrypt, is at the very heart of the CA service we provide. It’s the primary way in which we interact with our subscribers so that they can get and manage certificates. The ACME v1 protocol we use today was designed to ensure that our validation, issuance, and management methods are fully automated, consistent, compliant, and secure. In these respects, the current ACME v1 protocol has served us well.

There are three primary reasons why we’re starting a transition to ACME v2.

First, ACME v2 will be an IETF standard, and it’s important to us that we support true standards. While ACME v1 is a well-documented public specification, developed in a relatively open manner by individuals from a number of different organizations (including Mozilla, the Electronic Frontier Foundation, and the University of Michigan), it did not benefit from having been developed within a standards body with a greater diversity of inputs and procedures based on years of experience. It was always our intent for ACME v1 to form the basis for an IETF standardization process.

Second, ACME v2 was designed with additional input from other CAs besides Let’s Encrypt, so it should be easier for other CAs to use. We want a standardized ACME to work for many CAs, and ACME v1, while usable by other CAs, was designed with Let’s Encrypt in particular in mind. ACME v2 should meet more needs.

Third, ACME v2 brings some technical improvements that will allow us to better serve our subscribers going forward.

We are not setting an end-of-life date for the ACME v1 protocol because we don’t yet have enough data to determine when would be an appropriate date. Once we’re confident that we can predict an appropriate end-of-life date for our ACME v1 API endpoint we’ll announce one.

ACME v2 is the result of great work by the ACME IETF working group. In particular, we were happy to see the ACME working group take into account the needs of other organizations that may use ACME in the future. Certificate issuance and management protocols are a critical component of the Web’s trust model, and the Web will be better off if CAs can use a standardized public protocol that has been thoroughly vetted.

We’d like to thank our community, including our sponsors, for making everything we did this past year possible. Please consider getting involved or making a donation. If your company or organization would like to sponsor Let’s Encrypt please email us at sponsor@letsencrypt.org.

Copyright Holders Keep Targeting Dead Torrent Sites

Post Syndicated from Ernesto original https://torrentfreak.com/copyright-holders-keep-targeting-dead-torrent-sites-170611/

Over the past year several major torrent sites have shut down, causing quite an uproar among file-sharers.

Interestingly, however, several copyright holders still appear to think that these sites are alive and kicking. That is, judging from the takedown notices they send to Google.

Publisher Penguin Random House is particularly forgetful. Through its anti-piracy partner Digimarc, the company has reported hundreds of ‘infringing’ KickassTorrents URLs. Not only was KAT shut down last summer, the reported URLs are no longer listed in Google’s search results either.

Penguin is not alone though. Other rightsholders such as Sony Music, Dreamroom Productions, Taylor & Francis Group, The University of Chicago Press and many others have made the same mistakes recently.

Over the past month alone Google has received 1,340 takedown notices for Kat.cr URLs and an additional 775 for the Kat.ph domain name.

The problem is not limited to KAT either. Torrentz.eu, another major torrent site that went offline last summer, is still being targeted as well.

For example, earlier this week Sony Pictures asked Google to remove a Torrentz.eu URL that linked to the series Community, even though it is no longer indexed. In just one month copyright holders sent Google 4,960 takedown requests for “dead” Torrentz URLs.

Recent takedown requests for Torrentz.eu

Apparently, the reporting outfits have failed to adjust their piracy monitoring bots for the changing torrent landscape.

The mistakes are likely due to automated keyword filters that scour sites and forums for links to hosting services. These bots don’t bother to check whether Google actually indexes the content, nor do they remove dead sites from their system.

While targeting dead KAT and Torrentz links is bad enough, things can get worse.

The iconic torrent search engine isoHunt.com shut down following an MPAA lawsuit in 2013, well over three years ago. Nonetheless, rightsholders still send Google takedown notices for the site, more than a dozen a month actually.

Or what about BTJunkie? This torrent indexer closed its doors voluntarily more than half a decade ago. Dead or not, some copyright holders still manage to find infringing links in some of the darkest corners of the Internet.

Apparently, torrent users are far quicker to adapt to the changing landscape than the monitoring outfits of some copyright holders…

Source: TF, for the latest info on copyright, file-sharing, torrent sites and ANONYMOUS VPN services.

Cloudflare Fails to Limit Scope of Piracy Lawsuit

Post Syndicated from Ernesto original https://torrentfreak.com/cloudflare-fails-to-limit-scope-of-piracy-lawsuit-170610/

As one of the leading CDN and DDoS protection services, Cloudflare is used by millions of websites across the globe.

These include thousands of “pirate” sites, among them the likes of The Pirate Bay and ExtraTorrent, which rely on the U.S.-based company to keep server loads down.

Many rightsholders have complained about Cloudflare’s involvement with these sites and last year adult entertainment publisher ALS Scan took things up a notch by dragging the company to court.

ALS Scan accused the CDN service of various counts of copyright and trademark infringement and listed 15 customers that used Cloudflare’s servers to distribute infringing material.

Through an early motion, Cloudflare managed to have several counts dismissed, but the accusation of contributory copyright infringement remained.

Hoping to further limit the scope of the lawsuit, Cloudflare asked the California federal court for partial summary judgment to exclude 14 of the 15 listed ‘pirate’ sites from the case, as the original sites are not hosted on U.S. servers.

The image hosting sites in question include imgchili.com, slimpics.com, bestofsexpics.com, greenpics.com, imgspot.org and imgsen.se, among others.

Cloudflare argued that in order for it to be contributing to copyright infringement, the ‘pirate’ sites have to be direct infringers, which isn’t the case if they are hosted abroad, as that would fall outside the scope of U.S. courts.

However, according to the Court, which ruled on the motion for partial summary judgment a few days ago, this argument doesn’t hold.

“Here, it is undisputed that cache copies of Cloudflare clients’ files are stored on Cloudflare’s data servers; it is also undisputed that some of those data servers are located in the United States,” the order (pdf) reads.

These cached files are the result of the pirate sites’ decisions to sign up and pay for Cloudflare’s services. This ties direct infringements to U.S. servers.

“Thus, to the extent cache copies of Plaintiff’s images have been stored on Cloudflare’s U.S. servers, the creation of those copies would be an act of direct infringement by a given host website within the United States,” the court adds.

The Court further clarified that, contrary to what Cloudflare claimed, under U.S. law the company can be held liable for caching content of copyright-infringing websites.

In addition, Cloudflare’s argument that “infrastructure-level caching” is a type of fair use was denied as well.

Based on a detailed analysis of all the arguments provided, the Court concluded that the motion for summary judgment is denied for 13 of the 14 contended sites. This means that Cloudflare has to defend itself against the associated copyright infringement claims at an eventual trial.

The lawsuit is a crucial matter for Cloudflare, and not only because of the potential damages it faces in this case. If Cloudflare loses, other rightsholders are likely to make similar demands, forcing the company to actively police potential pirate sites.

Source: TF, for the latest info on copyright, file-sharing, torrent sites and ANONYMOUS VPN services.

Symantec Patent Protects Torrent Users Against Malware

Post Syndicated from Ernesto original https://torrentfreak.com/symantec-patent-protects-torrent-users-against-malware-170606/

In recent years we have documented a wide range of patent applications, several of which had a clear anti-piracy angle.

Symantec Corporation, known for the popular anti-virus software Norton Security, is taking a more torrent-friendly approach. At least, that’s what a recently obtained patent suggests.

The patent describes a system that can be used to identify fake torrents and malware-infected downloads, which are a common problem on badly-moderated torrent sites. Downloaders of these torrents are often redirected to scam websites or lured into installing malware.

Here’s where Symantec comes in with their automatic torrent moderating solution. Last week the company obtained a patent for a system that can rate the trustworthiness of torrents and block suspicious content to protect users.

“While the BitTorrent protocol represents a popular method for distributing files, this protocol also represents a common means for distributing malicious software. Unfortunately, torrent hosting sites generally fail to provide sufficient information to reliably predict whether such files are trustworthy,” the patent reads.

Unlike traditional virus scans, where the file itself is scanned for malicious traits, the patented technology uses a reputation score to make the evaluation.

The trustworthiness of torrents is determined by factors including the reputation of the original uploaders, torrent sites, trackers and other peers. For example, if an IP-address of a seeder is linked to several malicious torrents, it will get a low reputation score.

“For example, if an entity has been involved in several torrent transactions that involved malware-infected target files, the reputation information associated with the entity may indicate that the entity has a poor reputation, indicating a high likelihood that the target file represents a potential security risk,” Symantec notes.

In contrast, if a torrent is seeded by a user that only shares non-malicious files, the trustworthiness factor goes up.

Reputation information
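
The patent describes this scoring only at a high level, so the following Python sketch is a toy reconstruction of the idea: every weight, threshold, and name below is invented for illustration and is not taken from the patent.

```python
# Toy sketch of reputation-based torrent scoring. All values are invented.
# Hypothetical reputation scores (0.0 = known bad, 1.0 = spotless) for the
# entities involved in a torrent transaction.
REPUTATIONS = {
    "uploader:anon123": 0.2,                 # linked to malicious torrents
    "tracker:tracker.example.org": 0.8,
    "site:torrents.example.com": 0.6,
}

def torrent_trust_score(entities):
    """Combine per-entity reputations into a single trust score."""
    scores = [REPUTATIONS.get(e, 0.5) for e in entities]  # 0.5 = unknown
    return sum(scores) / len(scores)

def security_action(score, threshold=0.5):
    """Pick an action for a torrent based on its trust score."""
    if score >= threshold:
        return "allow"
    return "block"  # could also alert the user, quarantine, or delete

entities = ["uploader:anon123", "tracker:tracker.example.org",
            "site:torrents.example.com"]
score = torrent_trust_score(entities)
print(score, security_action(score))
```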

If a torrent file has a high likelihood of being linked to malware or other malicious content, the system can take appropriate “security actions.” This may be as simple as deleting the suspicious torrent, or a more complex response such as blocking all related network traffic.

“Examples of such security actions include, without limitation, alerting a user of the potential security risk, blocking access to the target file until overridden by the user, blocking network traffic associated with the torrent transaction, quarantining the target file, and/or deleting the target file,” Symantec writes.

Security actions

Symantec Corporation applied for the patent nearly four years ago, but thus far we haven’t seen it used in the real world.

Many torrent users would likely appreciate an extra layer of security, although they might be concerned about overblocking and possible monitoring of their download habits. This means that, for now, they will have to rely on site moderators, and most importantly, common sense.

Source: TF, for the latest info on copyright, file-sharing, torrent sites and ANONYMOUS VPN services.

New Server, New Luck

Post Syndicated from Боян Юруков original https://yurukov.net/blog/2017/nov-sarvar/

As I already wrote on social media, over the past few days I updated the server that hosts this and the rest of my sites. My hosting is still Superhosting.bg and I have no intention of moving. However, I had to upgrade the server’s operating system, and to do that I had to transfer all of my sites, as well as the processes that open up data from the pages of the state administration.

The only thing that has changed for the blog is that it now runs over HTTPS, which means the connection is encrypted. Although there is no sensitive information on this site, switching to HTTPS is good practice in general. Glasuvam.org, for example, has had a certificate for a long time. I will soon add one to the rest of the sites, as well as to the mail server. The collection of data such as that for GovAlert.eu and electricity production paused for about 10 minutes; the air pollution and birth rate data were not affected.

I cleared quite a few things off the server – old sites, tests, trial installations, “temporary” files I had shared. So you may notice that some links no longer work. I will restore some of the files if I decide they are needed. I would be glad to hear from you if you notice problems with this site or the other services.

How NAGRA Fights Kodi and IPTV Piracy

Post Syndicated from Andy original https://torrentfreak.com/how-nagra-fights-kodi-and-iptv-piracy-170603/

Nagravision or NAGRA is one of the best known companies operating in the digital cable and satellite television content security space. Due to successes spanning several decades, the company has often proven unpopular with pirates.

In particular, Nagravision encryption systems have regularly been a hot topic for discussion on cable and satellite hacking forums, frustrating those looking to receive pay TV services without paying the high prices associated with them. However, the rise of the Internet is now presenting new challenges.

NAGRA still protects traditional cable and satellite pay TV services in 2017; Virgin Media in the UK is a long-standing customer, for example. But the rise of Internet streaming means that pirate content can now be delivered to the home with ease, completely bypassing the entire pay TV provider infrastructure. And, by extension, NAGRA’s encryption.

This means that NAGRA has been required to spread its wings.

As reported in April, NAGRA is establishing a lab to monitor and detect unauthorized consumption of content via set-top boxes, websites and other streaming platforms. That covers the now omnipresent Kodi phenomenon, alongside premium illicit IPTV services. TorrentFreak caught up with the company this week to find out more.

“NAGRA has an automated monitoring platform that scans all live channels and VOD assets available on Kodi,” NAGRA’s Ivan Schnider informs TF.

“The service we offer to our customers automatically finds illegal distribution of their content on Kodi and removes infringing streams.”

In the first instance, NAGRA sends standard takedown notices to hosting services to terminate illicit streams. The company says that while some hosts are very cooperative, others are less so. When meeting resistance, NAGRA switches to more coercive methods, described here by Christopher Schouten, NAGRA Senior Director of Product Marketing.

“Takedowns are generally sent to streaming platforms and hosting servers. When those don’t work, Advanced Takedowns allow us to use both technical and legal means to get results,” Schouten says.

“Numerous stories in recent days show how for instance popular Kodi plug-ins have been removed by their authors because of the mere threat of legal actions like this.”

At the center of operations is NAGRA’s Piracy Intelligence Portal, which offers customers a real-time view of worldwide online piracy trends, information on the infrastructure behind illegal services, as well as statistics and status of takedown requests.

“We measure takedown compliance very carefully using our Piracy Intelligence Portal, so we can usually predict the results we will get. We work on a daily basis to improve relationships and interfaces with those who are less compliant,” Schouten says.

The Piracy Intelligence Portal

While persuasion is probably the best solution, some hosts inevitably refuse to cooperate. However, NAGRA also offers the NexGuard system, which is able to determine the original source of the content.

“Using forensic watermarking to trace the source of the leak, we will be able to completely shut down the ‘leak’ at the source, independently and within minutes of detection,” Schouten says.

Whatever route is taken, NAGRA says that the aim is to take down streams as quickly as possible, something which hopefully undermines confidence in pirate services and encourages users to re-enter the legal market. Interestingly, the company also says it uses “technical means” to degrade pirate services to the point that consumers lose faith in them.

But while augmented Kodi setups and illicit IPTV are certainly considered a major threat in 2017, they are not the only problem faced by content companies.

While the Apple platform is quite tightly locked down, the open nature of Android means that a rising number of apps can be sideloaded from the web. These allow pirate content to be consumed quickly and conveniently within a glossy interface.

Apps like Showbox, MovieHD and Terrarium TV have the movie and TV show sector wrapped up, while the popular Mobdro achieves the same with live TV, including premium sports. Schnider says NAGRA can handle apps like these and other emerging threats in a variety of ways.

“In addition to Kodi-related anti-piracy activities, NAGRA offers a service that automatically finds illegal distribution of content on Android applications, fully loaded STBs, M3U playlist and other platforms that provide plug-and-play solutions for the big TV screen; this service also includes the removal of infringing streams,” he explains.

M3U playlist piracy doesn’t get a lot of press. An M3U file is a text file that specifies locations where content (such as streams) can be found online.

In its basic ‘free’ form, it’s simply a case of finding an M3U file on an indexing site or blog and loading it into VLC. It’s not as flashy as any of the above apps, and unless one knows where to get the free M3Us quickly, many channels may already be offline. Premium M3U files are widely available, however, and tend to be pretty reliable.
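
To make the format concrete, here is a short Python sketch built around a made-up playlist, showing what an extended M3U file contains and how trivially it can be parsed:

```python
# A minimal extended-M3U playlist: "#EXTINF" lines carry metadata and each
# is followed by the URL of a stream. All entries here are made up.
SAMPLE_M3U = """#EXTM3U
#EXTINF:-1,Example Channel One
http://streams.example.com/channel1.m3u8
#EXTINF:-1,Example Channel Two
http://streams.example.com/channel2.m3u8
"""

def parse_m3u(text):
    """Yield (title, url) pairs from an extended M3U playlist."""
    title = None
    for line in text.splitlines():
        line = line.strip()
        if line.startswith("#EXTINF"):
            # The title follows the first comma, e.g. "#EXTINF:-1,Name"
            title = line.split(",", 1)[1] if "," in line else ""
        elif line and not line.startswith("#"):
            yield title, line
            title = None

for title, url in parse_m3u(SAMPLE_M3U):
    print(f"{title}: {url}")
```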

But while attacking sources of infringing content is clearly a big part of NAGRA’s mission, the company also deploys softer strategies for dealing with pirates.

“Beyond disrupting pirate streams, raising awareness amongst users that these services are illegal and helping service providers deliver competing legitimate services, are also key areas in the fight against premium IPTV piracy where NAGRA can help,” Schnider says.

“Converting users of such services to legitimate paying subscribers represents a significant opportunity for content owners and distributors.”

For this to succeed, Schouten says there needs to be an understanding of the different motivators that lead an individual to commit piracy.

“Is it price? Is it availability? Is it functionality?” he asks.

Interestingly, he also reveals that lots of people are spending large sums of money on IPTV services they believe are legal but are not. Rather than putting them off, the high prices actually add to the services’ air of legitimacy.

“These consumers can relatively easily be converted into paying subscribers if they can be convinced that pay-TV services offer superior quality, reliability, and convenience because let’s face it, most IPTV services are still a little dodgy to use,” he says.

“Education is also important; done through working with service providers to inform consumers through social media platforms of the risks linked to the use of illegitimate streaming devices / IPTV devices, e.g. purchasing boxes that may no longer work after a short period of time.”

And so the battle over content continues.

Source: TF, for the latest info on copyright, file-sharing, torrent sites and ANONYMOUS VPN services.

FUNimation Targets ‘Pirate’ Streaming Site KissAnime

Post Syndicated from Ernesto original https://torrentfreak.com/funimation-targets-pirate-streaming-site-kissanime-170601/

American anime distributor FUNimation is no stranger to hunting down pirates.

Headquartered in Texas, the company targeted 1337 alleged BitTorrent downloaders of the anime series “One Piece” in a local court a few years ago.

While the company no longer targets individual users through the U.S. legal system, it now appears to have its eyes set on a higher profile target, the popular anime streaming site KissAnime.

With millions of pageviews per day, KissAnime is the go-to site for many anime fans. The site is listed among the 250 most visited websites in the United States, making it one of the largest unauthorized streaming platforms in the world.

This is a thorn in the side of FUNimation, which recently obtained a DMCA subpoena to unmask part of the site’s infrastructure. Like many other streaming portals, KissAnime uses Google’s servers to host videos. These videos are served through CDN links, presumably to make them harder to take down.

FUNimation traced a CDN IP-address, used by KissAnime to stream pirated “One Piece” content, back to U.S. cloud hosting platform DigitalOcean, and asked the company to disable the associated link.

“Through our investigations, we have a good faith belief that a web server for which Digital Ocean, Inc. provides service, located at 138.68.244.174, is being used for the unauthorized copying and distribution […] of digital files embodying the Property,” FUNimation lawyer Evan Stone recently wrote to the company.

“FUNimation hereby requests that Digital Ocean expeditiously causes all such infringing materials to be removed or blocked or freezes the account at issue until the account holder removes all infringing materials or disables access thereto.”

FUNimation DMCA notice sent to Digital Ocean

Although KissAnime isn’t specifically mentioned in the DMCA notice or the subpoena request, a source close to the issue informs TorrentFreak that the IP-address in question is linked to the anime streaming site.

Because the CDN links keep rotating, FUNimation now wants to know the name of the customer that’s connected to the IP-address in question. The company therefore requested a DMCA subpoena from a federal court in Texas, which was granted earlier this month.

The subpoena orders DigitalOcean to hand over any and all contact information they have on the customer linked to the offending IP-address.

The DMCA subpoena

To find out what FUNimation intends to do with the information, provided that DigitalOcean will hand it over, we contacted the company’s lawyer Evan Stone. He couldn’t confirm the target but noted that it’s not about an end-user.

“We are targeting someone associated with disseminating infringing content on a MASSIVE scale, for profit. This is not a prelude to an end-user lawsuit, nor does this involve your typical fan uploader,” Stone told TF.

It’s likely that FUNimation will pursue further action against the DigitalOcean customer associated with the pirated KissAnime streams. Whether this will be a central player or someone only remotely connected to the site remains unknown for now.

Source: TF, for the latest info on copyright, file-sharing, torrent sites and ANONYMOUS VPN services.

EU Piracy Filter Proposals Being Sabotaged Says MEP Julia Reda

Post Syndicated from Andy original https://torrentfreak.com/eu-piracy-filter-proposals-being-sabotaged-says-mep-julia-reda-170601/

After complaining about “rogue” sites and services for more than 15 years, the music business is now concentrating on the so-called “value gap”.

The theory is that platforms like YouTube are able to avoid paying expensive licensing fees for music by exploiting the safe harbor protections of the DMCA and similar legislation. Effectively, pirate music uploaded by site users becomes available to the public at no cost to the platform and due to safe harbor rules, there is no legal recourse for the labels.

To close this loophole, the EU is currently moving forward with reforms that could limit the protections currently enjoyed by platforms like YouTube. In short, sites that allow users to upload content will be forced to partner up with content providers to aggressively filter all user uploads for infringing content, thus limiting the number of infringing works eventually communicated to the public.

Even as they stand, the proposals are being heavily protested (1,2,3) but according to Member of the European Parliament Julia Reda, a new threat has appeared on the horizon.

Ahead of a crucial June 8 vote on how to move forward, Reda says that some in the corridors of power are now “resorting to dirty tactics” to defend and extend the already “disastrous plans” by any means.

Specifically, Reda accuses MEP Pascal Arimont from the European People’s Party (EPP) of trying to sabotage the Parliamentary process, by going behind negotiators’ backs and pushing a new filtering proposal text that makes the “original bad proposal look tame in comparison.”

Reda says that in the face of other MEPs’ efforts to come up with a compromise text upon which all of them are agreed, Arimont has been encouraging some MEPs to rebel against their negotiators. He wants them to support his own super-aggressive “alternative compromise” text that shows disregard for the Charter of Fundamental Rights and principles of EU law.

Arimont’s text is certainly an interesting read and a document that could have been formulated by the record labels themselves. It tightens just about every aspect of the text proposed by the Commission while running all over the compromise text put together by Reda and other MEPs.

For example, where others are agreed on the phrase “Where information society service providers store and provide access to the public to copyright protected works or other subject-matter uploaded by their users”, Arimont’s text removes the key word “store”.

This means that his filtering demands go beyond sites like YouTube that actually host content, to encompass those that merely carry links. It doesn’t take much imagination to see the potential for chaos there.

Also, where the Commission is happy with the proposed rules only affecting sites that store and provide access to “large amounts” of copyright protected works uploaded by users, Arimont wants the “store” part removed and “large” changed to “significant”.

“[Arimont] doesn’t want [filtering rules] to just apply to services hosting ‘large amounts’ of copyrighted content, as proposed by the Commission, but to any service facilitating the availability of such content, even if the service is not actually hosting anything at all,” Reda explains.

The text also ignores proposals by MEPs that anti-piracy measures to be taken by platforms should be proportionate to their profit and size. That being said, Arimont does accept that start-ups would probably face “insurmountable financial obstacles” if required to deploy filtering technologies, so he proposes they should be exempt.

While that sounds reasonable, any business that’s over five years old would need to comply and Reda warns that the threshold could be set particularly low.

“So if you’ve been self-employed for more than 5 years, rules the Commission wrote with the likes of YouTube and Facebook in mind would suddenly also apply to your personal website,” she warns.

But Arimont’s proposal goes further still and has the potential to have privacy advocates up in arms.

In order to check that all user uploaded content is non-infringing, platforms would necessarily be required to check every single piece of data uploaded by users. This raises considerable privacy concerns and potential conflicts with EU law, for instance with Article 15 of the E-Commerce Directive, which prohibits general monitoring obligations for service providers.

Indeed, during the Netlog filtering case that went before the EU Court of Justice (CJEU) in 2012, the Court held that requiring an online platform to install broad piracy filters is incompatible with EU law.

Nevertheless, Arimont sees bridging the “value gap” as somehow different.

“The use of technical measures is essential for the functioning of online licensing and rights management purposes. Such technical measures therefore do not require the identity of uploaders and hence do not pose any risk for privacy of individual end users,” his proposal reads.

“Furthermore, those technical measures involve a highly targeted technical cooperation of rightholders and information society service providers based on the data provided by rightholders, and therefore do not lead to general obligation to monitor and find facts about the content.”

But what should really raise alarm bells for user-uploaded content platforms is how Arimont proposes to strip them of their safe harbor protections, if they optimize the presentation of that content to users. That, as Reda points out, could be something as benign as listing content in alphabetical order.

Julia Reda’s article has some information at the end for those who want to protest Arimont’s proposals (pdf).

Source: TF, for the latest info on copyright, file-sharing, torrent sites and ANONYMOUS VPN services.

AWS Hot Startups – May 2017

Post Syndicated from Tina Barr original https://aws.amazon.com/blogs/aws/aws-hot-startups-may-2017/

April showers bring May startups! This month we have three hot startups for you to check out. Keep reading to find out what they’re up to, and how they’re using AWS to do it.

Today’s post features the following startups:

  • Lobster – an AI-powered platform connecting creative social media users to professionals.
  • Visii – helping consumers find the perfect product using visual search.
  • Tiqets – a curated marketplace for culture and entertainment.

Lobster (London, England)

Every day, social media users generate billions of authentic images and videos to rival typical stock photography. Powered by Artificial Intelligence, Lobster enables brands, agencies, and the press to license visual content directly from social media users so they can find that piece of content that perfectly fits their brand or story. Lobster does the work of sorting through major social networks (Instagram, Flickr, Facebook, Vk, YouTube, and Vimeo) and cloud storage providers (Dropbox, Google Photos, and Verizon) to find media, saving brands and agencies time and energy. Using filters like gender, color, age, and geolocation can help customers find the unique content they’re looking for, while Lobster’s AI and visual recognition finds images instantly. Lobster also runs photo challenges to help customers discover the perfect image to fit their needs.

Lobster is an excellent platform for creative people to get their work discovered while also protecting their content. Users are treated as copyright holders and earn 75% of the final price of every sale. The platform is easy to use: new users simply sign in with an existing social media or cloud account and can start showcasing their artistic talent right away. Lobster allows users to connect to any number of photo storage sources so they’re able to choose which items to share and which to keep private. Once users have selected their favorite photos and videos to share, they can sit back and watch as their work is picked to become the signature for a new campaign or featured on a cool website – and start earning money for their work.

Lobster is using a variety of AWS services to keep everything running smoothly. The company uses Amazon S3 to store photography that was previously ordered by customers. When a customer purchases content, the respective piece of content must be available at any given moment, independent from the original source. Lobster is also using Amazon EC2 for its application servers and Elastic Load Balancing to monitor the state of each server.

To learn more about Lobster, check them out here!

Visii (London, England)

In today’s vast web, a growing number of products are being sold online and searching for something specific can be difficult. Visii was created to cater to businesses and help them extract value from an asset they already have – their images. Their SaaS platform allows clients to leverage an intelligent visual search on their websites and apps to help consumers find the perfect product for them. With Visii, consumers can choose an image and immediately discover more based on their tastes and preferences. Whether it’s clothing, artwork, or home decor, Visii will make recommendations to get consumers to search visually and subsequently help businesses increase their conversion rates.

There are multiple ways for businesses to integrate Visii on their website or app. Many of Visii’s clients choose to build against their API, but Visii also works closely with many clients to find the most effective approach for each unique case. This has led Visii to help build innovative user interfaces and identify the best integration points to get consumers to search visually. Businesses can also integrate Visii on their website with a widget – they just need to provide a list of links to their products and Visii does the rest.

Visii runs their entire infrastructure on AWS. Their APIs and pipeline all sit in auto-scaling groups, with ELBs in front of them, feeding into Amazon Simple Queue Service and Amazon Aurora. Recently, Visii moved from Amazon RDS to Aurora and noted that the process was incredibly quick and easy. Because they make heavy use of machine learning, it is crucial that their pipeline only runs when required and that they maximize the efficiency of their uptime.

To see how companies are using Visii, check out Style Picker and Saatchi Art.

Tiqets (Amsterdam, Netherlands)

Tiqets is making the ticket-buying experience faster and easier for travelers around the world.  Founded in 2013, Tiqets is one of the leading curated marketplaces for admission tickets to museums, zoos, and attractions. Their mission is to help travelers get the most out of their trips by helping them find and experience a city’s culture and entertainment. Tiqets partners directly with vendors to adapt to a customer’s specific needs, and is now active in over 30 cities in the US, Europe, and the Middle East.

With Tiqets, travelers can book tickets either ahead of time or at their destination for a wide range of attractions. The Tiqets app provides real-time availability and delivers tickets straight to customers’ phones via email, direct download, or in the app. Customers save time skipping long lines (a perk of the app!), save trees (no need to physically print tickets), and most importantly, they can make the most of their leisure time. For each attraction featured on Tiqets, there is a lot of helpful information including best modes of transportation, hours, commonly asked questions, and reviews from other customers.

The Tiqets platform consists of the consumer-facing website, the internal and external-facing APIs, and the partner self-service portals. For the app hosting and infrastructure, Tiqets uses AWS services such as Elastic Load Balancing, Amazon EC2, Amazon RDS, Amazon CloudFront, Amazon Route 53, and Amazon ElastiCache. Through the infrastructure orchestration of their AWS configuration, they can easily set up separate development or test environments while staying close to the production environment as well.

Tiqets is hiring! Be sure to check out their jobs page if you are interested in joining the Tiqets team.

Thanks for reading and don’t forget to check out April’s Hot Startups if you missed it.

-Tina Barr

Google Has a Hard Time Keeping Streaming Pirates at Bay

Post Syndicated from Ernesto original https://torrentfreak.com/google-has-a-hard-time-keeping-streaming-pirates-at-bay-170527/

Pirate streaming sites and services are booming.

Whether through traditional websites, apps or dedicated pirate boxes, streaming TV-shows and movies in high quality has never been so easy.

Unwittingly, Google plays a significant role in the shady part of online media distribution. As we highlighted earlier this year and long before that, many pirate sites and services exploit Google’s servers.

By using simple tricks, pirate site operators have found a way to stream videos directly from Google Drive and various other sources, often complete with subtitles and Chromecast support.

The Boss Baby streaming from Googlevideo.com

The videos in question are streamed from the Googlevideo.com domain, as pictured above, which is increasingly being noticed by rightsholders as well.

If we look at Google’s Transparency Report, which only covers search, we see that roughly 13,000 of these URLs had been reported by the end of last year. In 2017 this number exploded, with over a quarter million URLs reported so far, 265,000 at the time of writing.

Reported Googlevideo.com URLs

Why these URLs are being reported to Google search isn’t clear, because they don’t appear in the search engine. Also, many of the URLs have special parameters and only work if they are played from the pirate streaming sites.

That said, the massive surge in reports shows that the issue is a serious problem for rightsholders. For their part, pirate sites are happy to keep things the way they are as Google offers a reliable hosting platform that’s superior to many alternatives.

The question remains why Google has a hard time addressing the situation. It is no secret that the company uses hash matching to detect and block pirated content on Google Drive, but apparently, this doesn’t prevent a constant stream of pirated videos from entering its servers.
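
Hash matching of this kind is conceptually simple: compute a digest of each uploaded file and compare it against a list of digests of known-infringing files. The Python sketch below (with a placeholder blocklist) shows the principle, and also its obvious weakness: re-encoding a video or altering a single byte produces a completely different hash.

```python
import hashlib

# Hypothetical blocklist of SHA-256 digests of known-infringing files.
BLOCKED_HASHES = {
    "9f86d081884c7d659a2feaa0c55ad015a3bf4f1b2b0b822cd15d6c15b0f00a08",
}

def sha256_of_file(path, chunk_size=1 << 20):
    """Stream the file through SHA-256 so large videos fit in memory."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

def is_blocked(path):
    return sha256_of_file(path) in BLOCKED_HASHES
```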

TorrentFreak reached out to Google for a comment on the situation. A company spokesperson informed us that they would look into the matter, but a few days have passed and we have yet to hear back.

Interestingly, while we were writing this article, reports started coming in that Google had begun to terminate hundreds, if not thousands of “unlimited” Drive accounts, which were sold through business plan resellers.

These accounts are actively traded on eBay, even though reselling business Drive accounts is strictly forbidden. Many of these accounts are also linked to streaming hosts, so it could be that this is Google’s first step to getting a tighter grip on the situation.

To be continued…

Source: TF, for the latest info on copyright, file-sharing, torrent sites and ANONYMOUS VPN services.

Now Anyone Can Embed a Pirate Movie in a Website

Post Syndicated from Andy original https://torrentfreak.com/now-anyone-can-embed-a-pirate-movie-in-a-website-170522/

While torrents are still the go-to source for millions of users seeking free online media, people are increasingly seeking the immediacy and convenience of web-based streaming.

As a result, hundreds of websites have appeared in recent years, offering Netflix-inspired interfaces that provide an enhanced user experience over the predominantly text-based approach utilized by most torrent sites.

While there hasn’t been a huge amount of innovation in either field recently, a service that raised its head during recent weeks is offering something new and potentially significant, if it continues to deliver on its promises without turning evil.

Vodlocker.to is the latest in a long list of sites using the Vodlocker name, which is bound to cause some level of confusion. However, what this Vodlocker variant offers is a convenient way for users to not only search for and find movies hosted on the Internet, but stream them instantly – with a twist.

After entering a movie’s IMDb code (the one starting ‘tt’) in a box on the page, Vodlocker quickly searches for the movie on various online hosting services, including Google Drive.

Entering the IMDb code

“We believe the complexity of uploading a video has become unnecessary, so we have created much like Google, an automated crawler that visits millions of pages every day to find all videos on the internet,” the site explains.

As shown in the image above, the site takes the IMDb number and generates embed code. That allows the user to place an HTML5 video player in their own website, which plays the movie in question. We tested around a dozen movies with a 100% success rate, with search times ranging from a couple of seconds to around 20 seconds maximum.
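
We didn’t dissect the markup the site produces, but conceptually the generated snippet is just an iframe pointing at a player URL derived from the IMDb ID. A hypothetical Python sketch (the player domain and URL pattern below are invented, not Vodlocker’s actual ones):

```python
def embed_code(imdb_id, width=640, height=360):
    """Build a hypothetical iframe embed for a given IMDb ID ('tt...')."""
    # The player URL pattern below is invented for illustration only.
    src = f"https://player.example.com/embed/{imdb_id}"
    return (f'<iframe src="{src}" width="{width}" height="{height}" '
            f'frameborder="0" allowfullscreen></iframe>')

print(embed_code("tt0111161"))
```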

A demo on the site shows exactly how the embed code currently performs, with the video player offering the usual controls such as play and pause, with a selector for quality and volume levels. The usual ‘full screen’ button sits in the bottom right corner.

The player can be embedded anywhere

Near the top of the window are options for selecting different sources for the video, should it become unplayable or if a better quality version is required. Interestingly, should one of those sources be Google Video, Vodlocker says its player offers Chromecast and subtitle support.

“Built-in chromecast plugin streams free HD movies/tv shows from your website to your TV via Google Chromecast. Built-in opensubtitles.org plugin finds subtitles in all languages and auto-selects your language,” the site reports.

In addition to a link-checker that aims to exclude broken links (missing sources), the service also pulls movie-related artwork from IMDb, to display while the selected movie is being prepared for streaming.

The site is already boasting a “massive database” of movies, which will make it of immediate use to thousands of websites that might want to embed movies or TV shows in their web pages.

As long as Vodlocker can cope with the load, this could effectively spawn a thousand new ‘pirate’ websites overnight, but the service generally seems more suited to smaller, blog-like sites that might want to display a more modest selection of titles.

That being said, it’s questionable whether a site would seek to become entirely reliant on a service like this. While the videos it indexes are more decentralized, the service itself could be shut down in the blink of an eye, at which point every link stops working.

It’s also worth noting that the service uses IFrame tags, which some webmasters might feel uncomfortable about deploying on their sites due to security concerns.

The New Vodlocker API demo can be found here, for as long as it lasts.

Source: TF, for the latest info on copyright, file-sharing, torrent sites and ANONYMOUS VPN services.

How to Control TLS Ciphers in Your AWS Elastic Beanstalk Application by Using AWS CloudFormation

Post Syndicated from Paco Hope original https://aws.amazon.com/blogs/security/how-to-control-tls-ciphers-in-your-aws-elastic-beanstalk-application-by-using-aws-cloudformation/

Securing data in transit is critical to the integrity of transactions on the Internet. Whether you log in to an account with your user name and password or give your credit card details to a retailer, you want your data protected as it travels across the Internet from place to place. One of the protocols in widespread use to protect data in transit is Transport Layer Security (TLS). Every time you access a URL that begins with “https” instead of just “http”, you are using a TLS-secured connection to a website.

To demonstrate that your application has a strong TLS configuration, you can use services like the one provided by SSL Labs. There are also open source, command-line-oriented TLS testing programs such as testssl.sh (which I do not cover in this post) and sslscan (which I cover later in this post). The goal of testing your TLS configuration is to provide evidence that weak cryptographic ciphers are disabled in your TLS configuration and only strong ciphers are enabled. In this blog post, I show you how to control the TLS security options for your secure load balancer in AWS CloudFormation, pass the TLS certificate and host name for your secure AWS Elastic Beanstalk application to the CloudFormation script as parameters, and then confirm that only strong TLS ciphers are enabled on the launched application by testing it with SSLLabs.

Background

In some situations, it’s not enough to simply turn on TLS with its default settings and call it done. Over the years, a number of vulnerabilities have been discovered in the TLS protocol itself with codenames such as CRIME, POODLE, and Logjam. Though some vulnerabilities were in specific implementations, such as OpenSSL, others were vulnerabilities in the Secure Sockets Layer (SSL) or TLS protocol itself.

The only way to avoid some TLS vulnerabilities is to ensure your web server uses only the latest version of TLS. Some organizations want to limit their TLS configuration to the highest possible security levels to satisfy company policies, regulatory requirements, or other information security requirements. In practice, such limitations usually mean using TLS version 1.2 (at the time of this writing, TLS 1.3 is in the works) and using only strong cryptographic ciphers. Note that forcing a high-security TLS connection in this manner limits which types of devices can connect to your web server. I address this point at the end of this post.

The default TLS configuration in most web servers is compatible with the broadest set of clients (such as web browsers, mobile devices, and point-of-sale systems). As a result, older ciphers and protocol versions are usually enabled. This is true for the Elastic Load Balancing load balancer that is created in your Elastic Beanstalk application as well as for web server software such as Apache and nginx.  For example, TLS versions 1.0 and 1.1 are enabled in addition to 1.2. The RC4 cipher is permitted, even though that cipher is too weak for the most demanding security requirements. If your application needs to prioritize the security of connections over compatibility with legacy devices, you must adjust the TLS encryption settings on your application. The solution in this post helps you make those adjustments.

Prerequisites for the solution

Before you implement this solution, you must have a few prerequisites in place:

  1. You must have a hosted zone in Amazon Route 53 where the name of the secure application will be created. I use example.com as my domain name in this post and assume that I host example.com publicly in Route 53. To learn more about creating and hosting a zone publicly in Route 53, see Working with Public Hosted Zones.
  2. You must choose a name to be associated with the secure app. In this case, I use secure.example.com as the DNS name to be associated with the secure app. This means that I’m trying to create an Elastic Beanstalk application whose URL will be https://secure.example.com/.
  3. You must have a TLS certificate hosted in AWS Certificate Manager (ACM). This certificate must be issued with the name you decided in Step 2. If you are new to ACM, see Getting Started. If you are already familiar with ACM, request a certificate and get its Amazon Resource Name (ARN). Look up the ARN for the certificate that you created by opening the ACM console (or see the command-line sketch after this list). The ARN looks something like: arn:aws:acm:eu-west-1:111122223333:certificate/12345678-abcd-1234-abcd-1234abcd1234.
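
If you prefer the AWS CLI, the certificate request and ARN lookup can be sketched as follows. This assumes the secure.example.com name from Step 2; certificate validation still happens out of band before the certificate is issued.

# Request a certificate for the application's DNS name
aws acm request-certificate --domain-name secure.example.com

# List certificates with their ARNs once issued
aws acm list-certificates \
--query 'CertificateSummaryList[*].[DomainName,CertificateArn]' --output text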

Implementing the solution

You can use two approaches to control the TLS ciphers used by your load balancer: one is to use a predefined protocol policy from AWS, and the other is to write your own protocol policy that lists exactly which ciphers should be enabled. There are many ciphers and options that can be set, so the appropriate AWS predefined policy is often the simplest policy to use. If you have to comply with an information security policy that requires enabling or disabling specific ciphers, you will probably find it easiest to write a custom policy listing only the ciphers that are acceptable to your requirements.

AWS released two predefined TLS policies on March 10, 2017: ELBSecurityPolicy-TLS-1-1-2017-01 and ELBSecurityPolicy-TLS-1-2-2017-01. These policies restrict TLS negotiations to TLS 1.1 and 1.2, respectively. You can find a good comparison of the ciphers that these policies enable and disable in the HTTPS listener documentation for Elastic Load Balancing. If your requirements are simply “support TLS 1.1 and later” or “support TLS 1.2 and later,” those AWS predefined cipher policies are the best place to start. If you need to control your cipher choice with a custom policy, I show you in this post which lines of the CloudFormation template to change.

Download the predefined policy CloudFormation template

Many AWS customers rely on CloudFormation to launch their AWS resources, including their Elastic Beanstalk applications. To change the ciphers and protocol versions supported on your load balancer, you must put those options in a CloudFormation template. You can store your site’s TLS certificate in ACM and create the corresponding DNS alias record in the correct zone in Route 53.

To start, download the CloudFormation template that I have provided for this blog post, or deploy the template directly in your environment. This template creates a CloudFormation stack in your default VPC that contains two resources: an Elastic Beanstalk application that deploys a standard sample PHP application, and a Route 53 record in a hosted zone. This CloudFormation template selects the AWS predefined policy called ELBSecurityPolicy-TLS-1-2-2017-01 and deploys it.

Launching the sample application from the CloudFormation console

In the CloudFormation console, choose Create Stack. You can either upload the template through your browser, or load the template into an Amazon S3 bucket and type the S3 URL in the Specify an Amazon S3 template URL box.

After you click Next, you will see that there are three parameters defined: CertificateARN, ELBHostName, and HostedDomainName. Set the CertificateARN parameter to the ARN of the certificate you want to use for your application. Set the ELBHostName parameter to the hostname part of the URL. For example, if your URL were https://secure.example.com/, the HostedDomainName parameter would be example.com and the ELBHostName parameter would be secure.

For the sample application, choose Next and then choose Create, and the CloudFormation stack will be created. For your own applications, you might need to set other options such as a database, VPC options, or Amazon SNS notifications. For more details, see AWS Elastic Beanstalk Environment Configuration. To deploy an application other than our sample PHP application, create your own application source bundle.

Launching the sample application from the command line

In addition to launching the sample application from the console, you can specify the parameters from the command line. Because the template uses parameters, you can launch multiple copies of the application, specifying different parameters for each copy. To launch the application from a Linux command line with the AWS CLI, insert the correct values for your application, as shown in the following command.

aws cloudformation create-stack --stack-name "SecureSampleApplication" \
--template-url https://<URL of your CloudFormation template in S3> \
--parameters ParameterKey=CertificateARN,ParameterValue=<Your ARN> \
ParameterKey=ELBHostName,ParameterValue=<Your Host Name> \
ParameterKey=HostedDomainName,ParameterValue=<Your Domain Name>

When that command exits, it prints the StackID of the stack it created. Save that StackID for later so that you can fetch the stack’s outputs from the command line.
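
If you are scripting the launch, you can capture the StackID at creation time and reuse it later; here is a minimal sketch using the same placeholder values as the previous command.

# Capture the StackID (a stack ARN) so it can be passed to describe-stacks later
STACK_ID=$(aws cloudformation create-stack --stack-name "SecureSampleApplication" \
--template-url https://<URL of your CloudFormation template in S3> \
--parameters ParameterKey=CertificateARN,ParameterValue=<Your ARN> \
ParameterKey=ELBHostName,ParameterValue=<Your Host Name> \
ParameterKey=HostedDomainName,ParameterValue=<Your Domain Name> \
--query StackId --output text)
echo "${STACK_ID}"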

Using a custom cipher specification

If you want to specify your own cipher choices, you can use the same CloudFormation template and change two lines. Let’s assume your information security policies require you to disable any ciphers that use Cipher Block Chaining (CBC) mode encryption. These ciphers are enabled in the ELBSecurityPolicy-TLS-1-2-2017-01 managed policy, so to satisfy that security requirement, you have to modify the CloudFormation template to use your own protocol policy.

In the template, locate the three lines that define the TLSHighPolicy.

- Namespace:  aws:elb:policies:TLSHighPolicy
  OptionName: SSLReferencePolicy
  Value:      ELBSecurityPolicy-TLS-1-2-2017-01

Change the OptionName and Value for the TLSHighPolicy. Instead of referring to the AWS predefined policy by name, explicitly list all the ciphers you want to use. Change those three lines so they look like the following.

- Namespace:  aws:elb:policies:TLSHighPolicy
  OptionName: SSLProtocols
  Value:      Protocol-TLSv1.2,Server-Defined-Cipher-Order,ECDHE-ECDSA-AES256-GCM-SHA384,ECDHE-ECDSA-AES128-GCM-SHA256,ECDHE-RSA-AES256-GCM-SHA384,ECDHE-RSA-AES128-GCM-SHA256

This protocol policy stipulates that the load balancer should:

  • Negotiate connections using only TLS 1.2.
  • Ignore any attempts by the client (for example, the web browser or mobile device) to negotiate a weaker cipher.
  • Accept four specific, strong combinations of cipher and key exchange—and nothing else.

The protocol policy enables only TLS 1.2, strong ciphers that do not use CBC mode encryption, and strong key exchange.
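
If you want to double-check what those four suites provide, you can ask a local OpenSSL installation to describe them. This is a quick sketch; it assumes your OpenSSL build recognizes these suite names.

# Print protocol, key exchange, authentication, and encryption details
# for each of the four suites named in the custom policy
openssl ciphers -v 'ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-RSA-AES128-GCM-SHA256'

Each line of output shows TLSv1.2 as the protocol, ECDH as the key exchange, and AESGCM as the encryption algorithm, matching the requirements described above.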

Connect to the secure application

When your CloudFormation stack is in the CREATE_COMPLETE state, you will find three outputs:

  1. The public DNS name of the load balancer
  2. The secure URL that was created
  3. TestOnSSLLabs output that contains a direct link for testing your configuration

You can either enter the secure URL in a web browser (for example, https://secure.example.com/), or click the link in the Outputs to open your sample application and see the demo page. Note that you must use HTTPS—this template has disabled HTTP on port 80 and only listens with HTTPS on port 443.
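
For a quick spot check from the command line, openssl s_client can confirm that the listener negotiates TLS 1.2 and refuses older protocol versions. This is a sketch, assuming the secure.example.com name used throughout this post.

# Should complete a handshake, because the policy enables TLS 1.2
openssl s_client -connect secure.example.com:443 -tls1_2 < /dev/null

# Should fail with a handshake error, because TLS 1.1 is disabled
openssl s_client -connect secure.example.com:443 -tls1_1 < /dev/null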

If you launched your application through the command line, you can view the CloudFormation outputs using the command line as well. You need to know the StackId of the stack you launched and insert it in the following stack-name parameter.

aws cloudformation describe-stacks --stack-name "<ARN of Your Stack>" \
--query 'Stacks[0].Outputs'

Test your application over the Internet with SSLLabs

The easiest way to confirm that the load balancer is using the secure ciphers that we chose is to enter the URL of the load balancer in the form on SSL Labs’ SSL Server Test page. If you do not want the name of your load balancer to be shared publicly on SSLLabs.com, select the Do not show the results on the boards check box. After a minute or two of testing, SSLLabs gives you a detailed report of every cipher it tried and how your load balancer responded. This test simulates many devices that might connect to your website, including mobile phones, desktop web browsers, and software libraries such as Java and OpenSSL. The report tells you whether these clients would be able to connect to your application successfully.

Assuming all went well, you should receive an A grade for the sample application. The biggest contributors to the A grade are:

  • Supporting only TLS 1.2, and not TLS 1.1, TLS 1.0, or SSL 3.0
  • Supporting only strong ciphers such as AES, and not weaker ciphers such as RC4
  • Having an X.509 public key certificate issued correctly by ACM

How to test your application privately with sslscan

You might not be able to reach your Elastic Beanstalk application from the Internet because it might be in a private subnet that is only accessible internally. If you want to test the security of your load balancer’s configuration privately, you can use one of the open source command-line tools such as sslscan. You can install and run the sslscan command on any Amazon EC2 Linux instance or even from your own laptop. Be sure that the Elastic Beanstalk application you want to test will accept an HTTPS connection from your Amazon Linux EC2 instance or from your laptop.

The easiest way to get sslscan on an Amazon Linux EC2 instance is to:

  1. Enable the Extra Packages for Enterprise Linux (EPEL) repository (a command sketch follows this list).
  2. Run sudo yum install sslscan.
  3. After the command runs successfully, run sslscan secure.example.com to scan your application for supported ciphers.
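
A minimal sketch of those steps, assuming an Amazon Linux instance on which the EPEL repository definition is already present but disabled (the exact enable mechanism can vary by AMI version):

sudo yum-config-manager --enable epel   # assumes the EPEL repo definition ships disabled
sudo yum install -y sslscan             # install sslscan from EPEL
sslscan secure.example.com              # scan the application's TLS configuration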

The results are similar to Qualys’ results at SSLLabs.com, but the sslscan tool does not summarize and evaluate the results to assign a grade. It just reports whether your application accepted a connection using the cipher that it tried. You must decide for yourself whether that set of accepted connections represents the right level of security for your application. If you have been asked to build a secure load balancer that meets specific security requirements, the output from sslscan helps to show how the security of your application is configured.

The following sample output shows a small subset of the total output of the sslscan tool.

Accepted TLS12 256 bits AES256-GCM-SHA384
Accepted TLS12 256 bits AES256-SHA256
Accepted TLS12 256 bits AES256-SHA
Rejected TLS12 256 bits CAMELLIA256-SHA
Failed TLS12 256 bits PSK-AES256-CBC-SHA
Rejected TLS12 128 bits ECDHE-RSA-AES128-GCM-SHA256
Rejected TLS12 128 bits ECDHE-ECDSA-AES128-GCM-SHA256
Rejected TLS12 128 bits ECDHE-RSA-AES128-SHA256

An Accepted connection is one that was successful: the load balancer and the client were both able to use the indicated cipher. Failed and Rejected connections are those in which the load balancer would not accept the level of security that the client was requesting. As a result, the load balancer closed the connection instead of communicating insecurely. The difference between Failed and Rejected is based on whether the TLS connection was closed cleanly.

Comparing the two policies

The main difference between our custom policy and the AWS predefined policy is whether or not CBC ciphers are accepted. The test results with both policies are identical except for the results shown in the following table. The only change in the policy, and therefore the only change in the results, is that the cipher suites using CBC ciphers have been disabled.

Cipher Suite Name             Encryption Algorithm   Key Size (bits)   ELBSecurityPolicy-TLS-1-2-2017-01   Custom Policy
ECDHE-RSA-AES256-GCM-SHA384   AESGCM                 256               Enabled                             Enabled
ECDHE-RSA-AES256-SHA384       AES                    256               Enabled                             Disabled
AES256-GCM-SHA384             AESGCM                 256               Enabled                             Disabled
AES256-SHA256                 AES                    256               Enabled                             Disabled
ECDHE-RSA-AES128-GCM-SHA256   AESGCM                 128               Enabled                             Enabled
ECDHE-RSA-AES128-SHA256       AES                    128               Enabled                             Disabled
AES128-GCM-SHA256             AESGCM                 128               Enabled                             Disabled
AES128-SHA256                 AES                    128               Enabled                             Disabled

Strong ciphers and compatibility

The custom policy described in the previous section prevents legacy devices and older versions of software and web browsers from connecting. The output at SSLLabs provides a list of devices and applications (such as Internet Explorer 10 on Windows 7) that cannot connect to an application that uses the TLS policy. By design, the load balancer will refuse to connect to a device that is unable to negotiate a connection at the required levels of security. Users who use legacy software and devices will see different errors, depending on which device or software they use (for example, Internet Explorer on Windows, Chrome on Android, or a legacy mobile application). The error messages will be some variation of “connection failed” because the Elastic Load Balancer closes the connection without responding to the user’s request. This behavior can be problematic for websites that must be accessible to older desktop operating systems or older mobile devices.

If you need to support legacy devices, adjust the TLSHighPolicy in the CloudFormation template. For example, if you need to support web browsers on Windows 7 systems (and you cannot enable TLS 1.2 support on those systems), you can change the policy to enable TLS 1.1. To do this, change the value of SSLReferencePolicy to ELBSecurityPolicy-TLS-1-1-2017-01.
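
Using the snippet shown earlier as a reference, the modified lines would look like the following.

- Namespace:  aws:elb:policies:TLSHighPolicy
  OptionName: SSLReferencePolicy
  Value:      ELBSecurityPolicy-TLS-1-1-2017-01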

Enabling legacy protocol versions such as TLS version 1.1 will allow older devices to connect, but then the application may not be compliant with the information security policies or business requirements that require strong ciphers.

Conclusion

Using Elastic Beanstalk, Route 53, and ACM can help you launch secure applications that are designed to not only protect data but also meet regulatory compliance requirements and your information security policies. The TLS policy, either custom or predefined, allows you to control exactly which cryptographic ciphers are enabled on your Elastic Load Balancer. The TLS test results provide you with clear evidence you can use to demonstrate compliance with security policies or requirements. The parameters in this post’s CloudFormation template also make it adaptable and reusable for multiple applications. You can use the same template to launch different applications on different secure URLs by simply changing the parameters that you pass to the template.

If you have comments about this post, submit them in the “Comments” section below. If you have questions about or issues implementing this solution, start a new thread on the CloudFormation forum.

– Paco

ExtraTorrent’s Distribution Groups ettv and EtHD Keep Going

Post Syndicated from Ernesto original https://torrentfreak.com/extratorrents-distribution-groups-ettv-and-ethd-keep-going-170519/

This week the torrent community entered a state of shock when another major torrent site closed its doors.

Having served torrents to the masses for over a decade, ExtraTorrent decided to throw in the towel, without providing any detail or an apparent motive.

ExtraTorrent operator SaM simply informed us that “it’s time we say goodbye.”

Now that a few days have passed the dust is slowly beginning to settle. Frequent ExtraTorrent users have started to flock to alternatives such as The Pirate Bay, Torrentz2 and RARBG, which have all noticed a clear uptick in users.

What has also become clear is that ExtraTorrent didn’t shut down without leaving its mark. The site was home to several prominent uploaders and groups, and some feared that these would go down with the site. However, it looks like that won’t be the case for them all.

On Thursday, shortly after the site was closed, ExtraTorrent operator SaM said that the movie torrent distribution group ETRG would disappear, but that there was hope for others.

“Ettv and Ethd could remain operational if they get enough donations to sustain the expenses and if the people handling it [are] ready to keep going,” SaM said.

Indeed, both TV groups are keeping the ET spirit alive as dozens of fresh torrents have appeared over the past few days. While they’re no longer on ExtraTorrent, the accounts on The Pirate Bay remain very active, as can be seen below.

ettv’s recent releases

Another well-known uploader, DDR, will continue to release torrents as well. TorrentFreak was informed that the uploader will use the ‘SaM’ accounts at The Pirate Bay and 1337x to continue his work.

And ExtraTorrent’s name lives on elsewhere too. The image hosting site Extraimage, which was regularly used by torrent uploaders to feature samples, is still up and running as well.

There is another major casualty of the ExtraTorrent closure, though. TorrentFreak is informed that ET’s in-house encoder FUM, known for regular high-quality TV releases, will stop.

Over the coming weeks we will see what the real impact of the surprise shutdown will be. A community was destroyed this week, and many uploaders lost their home, but as we’ve seen with KickassTorrents, Torrentz, and other sites before them, the torrent ecosystem isn’t easily disrupted.

Source: TF, for the latest info on copyright, file-sharing, torrent sites and ANONYMOUS VPN services.

Crash Course Computer Science with Carrie Anne Philbin

Post Syndicated from Alex Bate original https://www.raspberrypi.org/blog/crash-course-carrie-anne-philbin/

Get your teeth into the history of computer science with our Director of Education, Carrie Anne Philbin, and the team at YouTube’s incredible Crash Course channel.

Crash Course Computer Science Preview

Starting February 22nd, Carrie Anne Philbin will be hosting Crash Course Computer Science! In this series, we’re going to trace the origins of our modern computers, take a closer look at the ideas that gave us our current hardware and software, discuss how and why our smart devices just keep getting smarter, and even look towards the future!

The brainchild of Hank and John Green (the latter of whom is responsible for books such as The Fault in Our Stars and all of my resultant heartbroken tears), Crash Course is an educational YouTube channel specialising in courses for school-age tuition support.

As part of the YouTube Original Channel Initiative, and with their partners PBS Digital Studios, the team has completed courses in subjects such as physics, hosted by Dr. Shini Somara, astronomy with Phil Plait, and sociology with Nicole Sweeney.

Oh, and they’ve recently released a new series on computer science with Carrie Anne Philbin, whom you may know as Raspberry Pi’s Director of Education and the host of YouTube’s Geek Gurl Diaries.

Computer Science with Carrie Anne

Covering topics such as RAM, Boolean logic, CPU design, and binary, the course is currently up to episode twelve of its run. Episodes are released every Tuesday, and there are lots more to come.

Following the fast-paced, visual style of the Crash Course brand, Carrie Anne takes her viewers on a journey from early computing with Lovelace and Babbage through to the modern-day electronics that power our favourite gadgets such as tablets, mobile phones, and small single-board microcomputers…

The response so far

A few members of the Raspberry Pi team recently attended VidCon Europe in Amsterdam to learn more about making video content for our community – and also so I could exist in the same space as the Holy Trinity, albeit briefly.

At VidCon, Carrie Anne took part in an engaging and successful Women in Science panel with Sally Le Page, Viviane Lalande, Hana Shoib, Maddie Moate, and fellow Crash Course presenter Dr. Shini Somara. I could see that Crash Course Computer Science was going down well from the number of people who approached Carrie Anne to thank her for the course, from those who were learning for the first time to people who were rediscovering the subject.

Take part in the conversation

Join in the conversation! Head over to YouTube, watch Crash Course Computer Science, and join the discussion in the comments.

You can also follow Crash Course on Twitter for release updates, and subscribe on YouTube to get notifications of new content.

Oh, and who can spot the sneaky Raspberry Pi in the video introduction?

“Cheers!”

Crash Course Computer Science Outtakes

In which Carrie Anne presents a new sing-a-long format and faces her greatest challenge yet – signing off an episode.

The post Crash Course Computer Science with Carrie Anne Philbin appeared first on Raspberry Pi.

EC2 In-Memory Processing Update: Instances with 4 to 16 TB of Memory + Scale-Out SAP HANA to 34 TB

Post Syndicated from Jeff Barr original https://aws.amazon.com/blogs/aws/ec2-in-memory-processing-update-instances-with-4-to-16-tb-of-memory-scale-out-sap-hana-to-34-tb/

Several times each month, I speak to AWS customers at our Executive Briefing Center in Seattle. I describe our innovation process and talk about how the roadmap for each AWS offering is driven by customer requests and feedback.

A good example of this is our work to make AWS a great home for SAP’s portfolio of business solutions. Over the years our customers have told us that they run large-scale SAP applications in production on AWS and we’ve worked hard to provide them with EC2 instances that are designed to accommodate their workloads. Because SAP installations are unfailingly mission-critical, SAP certifies their products for use on certain EC2 instance types and sizes. We work directly with SAP in order to achieve certification and to make AWS a robust & reliable host for their products.

Here’s a quick recap of some of our most important announcements in this area:

June 2012 – We expanded the range of SAP-certified solutions that are available on AWS.

October 2012 – We announced that the SAP HANA in-memory database is now available for production use on AWS.

March 2014 – We announced that SAP HANA can now run in production form on cr1.8xlarge instances with up to 244 GB of memory, with the ability to create test clusters that are even larger.

June 2014 – We published a SAP HANA Deployment Guide and a set of AWS CloudFormation templates in conjunction with SAP certification on r3.8xlarge instances.

October 2015 – We announced the x1.32xlarge instances with 2 TB of memory, designed to run SAP HANA, Microsoft SQL Server, Apache Spark, and Presto.

August 2016 – We announced that clusters of X1 instances can now be used to create production SAP HANA clusters with up to 7 nodes, or 14 TB of memory.

October 2016 – We announced the x1.16xlarge instance with 1 TB of memory.

January 2017 – SAP HANA was certified for use on r4.16xlarge instances.

Today, customers from a broad collection of industries run their SAP applications in production form on AWS (the SAP and Amazon Web Services page has a long list of customer success stories).

My colleague Bas Kamphuis recently wrote about Navigating the Digital Journey with SAP and the Cloud (registration required). He discusses the role of SAP in digital transformation and examines the key characteristics of the cloud infrastructure that support it, while pointing out many of the advantages that the cloud offers in comparison to other hosting options, and he illustrates these advantages in his article.

We continue to work to make AWS an even better place to run SAP applications in production form. Here are some of the things that we are working on:

  • Bigger SAP HANA Clusters – You can now build scale-out SAP HANA clusters with up to 17 nodes (34 TB of memory).
  • 4 TB Instances – The upcoming x1e.32xlarge instances will offer 4 TB of memory.
  • 8 – 16 TB Instances – Instances with up to 16 TB of memory are in the works.

Let’s dive in!

Building Bigger SAP HANA Clusters
I’m happy to announce that we have been working with SAP to certify the x1.32xlarge instances for use in scale-out clusters with up to 17 nodes (34 TB of memory). This is the largest scale-out deployment available from any cloud provider today, and allows our customers to deploy very large SAP workloads on AWS (visit the SAP HANA Hardware Directory certification for the x1.32xlarge instance to learn more). To learn how to architect and deploy your own scale-out cluster, consult the SAP HANA on AWS Quick Start.

Extending the Memory-Intensive X1 Family
We will continue to invest in this and other instance families in order to address your needs and to give you a solid growth path.

Later this year we plan to make the x1e.32xlarge instances available in several AWS regions, in both On-Demand and Reserved Instance form. These instances will offer 4 TB of DDR4 memory (twice as much as the x1.32xlarge), 128 vCPUs (four 2.3 GHz Intel® Xeon® E7 8880 v3 processors), high memory bandwidth, and large L3 caches. The instances will be VPC-only, and will deliver up to 20 Gbps of network bandwidth using the Elastic Network Adapter while minimizing latency and jitter. They’ll be EBS-optimized by default, with up to 14 Gbps of dedicated EBS throughput.

Here are some screenshots from the shell. First, dmesg shows the boot-time kernel messages.

Second, lscpu shows the vCPU & socket count, along with many other interesting facts.

And top shows nearly 900 processes.

HANA Studio offers its own view of the same instance.
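
To run the same checks on an instance of your own, the commands mentioned above are standard Linux tools; a quick sketch follows (output will naturally vary by instance type).

dmesg | head -n 20        # boot-time kernel messages, including detected memory
lscpu                     # vCPU and socket counts, cache sizes, NUMA layout
top -b -n 1 | head -n 20  # one-shot snapshot of the busiest processes
free -h                   # total and available memory, human-readable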

This new instance, along with the certification for larger clusters, broadens the set of scale-out and scale-up options that you have for running SAP on EC2.

The Long-Term Memory-Intensive Roadmap
Because we know that planning large-scale SAP installations can take a considerable amount of time, I would also like to share part of our roadmap with you.

Today, customers are able to run larger SAP HANA certified servers in third party colo data centers and connect them to their AWS infrastructure via AWS Direct Connect, but customers have told us that they really want a cloud native solution like they currently get with X1 instances.

In order to meet this need, we are working on instances with even more memory! Throughout 2017 and 2018, we plan to launch EC2 instances with between 8 TB and 16 TB of memory. These upcoming instances, along with the x1e.32xlarge, will allow you to create larger single-node SAP installations and multi-node SAP HANA clusters, and to run other memory-intensive applications and services. It will also provide you with some scale-up headroom that will become helpful when you start to reach the limits of the smaller instances.

I’ll share more information on our plans as soon as possible.

Say Hello at SAPPHIRE
The AWS team will be in booth 539 at SAPPHIRE with a rolling set of sessions from our team, our customers, and our partners in the in-booth theater. We’ll also be participating in many sessions throughout the event. Here’s a sampling (see SAP SAPPHIRE NOW 2017 for a full list):

SAP Solutions on AWS for Big Businesses and Big Workloads – Wednesday, May 17th at Noon. Bas Kamphuis (General Manager, SAP, AWS) & Ed Alford (VP of Business Application Services, BP).

Break Through the Speed Barrier When You Move to SAP HANA on AWS – Wednesday, May 17th at 12:30 PM – Paul Young (VP, SAP) and Saul Dave (Senior Director, Enterprise Systems, Zappos).

AWS Fireside Chat with Zappos (Rapid SAP HANA Migration: Real Results) – Thursday, May 18th at 11:00 AM – Saul Dave (Senior Director, Enterprise Systems, Zappos) and Steve Jones (Senior Manager, SAP Solutions Architecture, AWS).

Jeff;

PS – If you have some SAP experience and would like to bring it to the cloud, take a look at the Principal Product Manager (AWS Quick Starts) and SAP Architect positions.