
Police Confirm Arrests of BlackCats-Games Operators

Post Syndicated from Andy original https://torrentfreak.com/police-confirm-arrests-blackcats-games-operators-161020/

After being down for several hours, yesterday the domain of private tracker BlackCats-Games was seized by the UK’s Police Intellectual Property Crime Unit.

The domain used to point to an IP address in Canada, but was later switched to a server known to be under the control of PIPCU, the UK’s leading anti-piracy force.

Following several hours of rumors, last evening sources close to the site began to confirm that the situation was serious. Reddit user Farow went public with specific details, noting that the owner of BlackCats-Games had been arrested and the site would be closing down.

Former site staff member SteWieH added that there had in fact been two arrests and it was the site’s sysops that had been taken into custody.

While both are credible sources, there was no formal confirmation from PIPCU. That came a few moments ago and it’s pretty bad news for fans of the site and its operators.

“Officers from the City of London Police Intellectual Property Crime Unit (PIPCU) have arrested two men in connection with an ongoing investigation into the illegal distribution of copyright protected video games,” the unit said in a statement.

Police say that the raids took place on Tuesday, with officers arresting two men aged 47 and 44 years at their homes in Birmingham, West Midlands and Blyth, Northumberland. Both were arrested on suspicion of copyright infringement and money laundering offenses.

Detectives say they also seized digital media and computer hardware.

PIPCU report that the investigation into the site was launched in cooperation with UK Interactive Entertainment (UKIE) and the Entertainment Software Association (ESA). Former staff member SteWieH says that a PayPal account operated by the site in 2013 appears to have played an important role in the arrests.

Detective Sergeant Gary Brown from the City of London Police Intellectual Property Crime Unit said that their goal was to disrupt the work of “content thieves.”

“With the ever-growing consumer appetite for gaming driving the threat of piracy to the industry, our action today is essential in disrupting criminal activity and the money which drives it,” Brown said.

“Those who steal copyrighted content exploit the highly skilled work and jobs supported by the gaming industry. We are working hard to tackle digital intellectual property crime and we will continue to target our enforcement activity towards those identified as content thieves whatever scale they are operating at.”

UK Interactive Entertainment welcomed the arrests.

“UKIE applauds the action taken by PIPCU against the operators of the site. Sites like this are harmful to the hard work of game creators around the world. PIPCU’s actions confirm that these sites will not be tolerated, and are subject to criminal enforcement,” a spokesman said.

Stanley Pierre-Louis, general counsel for the Entertainment Software Association, thanked PIPCU for its work.

“ESA commends PIPCU for its commitment to taking action against sites that facilitate the illegal copying and distribution of incredibly advanced works of digital art. We are grateful for PIPCU’s leadership in this area and their support of creative industries.”

Both men have been released on police bail.

Source: TF, for the latest info on copyright, file-sharing, torrent sites and ANONYMOUS VPN services.

Cliché: Security through obscurity (again)

Post Syndicated from Robert Graham original http://blog.erratasec.com/2016/10/cliche-security-through-obscurity-again.html

This post keeps popping up in my timeline. It’s wrong. The phrase “security through/by obscurity” has become such a cliché that it’s lost all meaning. When somebody says it, they are almost certainly saying a dumb thing, regardless of whether they support it or are trying to debunk it.

Let’s go back to first principles, namely Kerckhoffs’s Principle from the 1800s, which states that cryptography should be secure even if everything is known about it except the key. In other words, there is no double-secret military-grade encryption with secret algorithms. Today’s military crypto is public crypto.

Let’s apply this to port knocking. This is not a layer of obscurity, as the above post proposes, but a layer of security. Applying Kerckhoffs’s Principle, it should work even if everything is known about the port knocking algorithm except the sequence of ports being knocked.

Kerckhoffs’s Principle is based on a few simple observations. Two relevant ones today are:

* things are not nearly as obscure as you think
* obscurity often impacts your friends more than your enemies

I (as an attacker) know that many sites use port knocking. Therefore, if I get no response from an IP address (which I have reason to know exists), then I’ll assume port knocking is hiding it. I know which port knocking techniques are popular. Or, sniffing at the local Starbucks, I might observe outgoing port knocking behavior, and know which sensitive systems I can attack later using the technique. Thus, though port knocking makes it look like a system doesn’t exist, this doesn’t fully hide a system from me. The security of the system should not rest on this obscurity.

Instead of an obscurity layer, port knocking is a security layer. The security it provides is that it drives up the amount of effort an attacker needs to hack the system. Some use the opposite approach, whereby the firewall in front of a subnet responds with a SYN-ACK to every SYN. This likewise increases the costs for those doing port scans (like myself, who masscans the entire Internet), by making it look as though all IP addresses and ports exist, rather than by hiding systems behind a layer of obscurity.

One plausible way of defeating a port knocking implementation is to simply scan all 64k ports many times. If you are looking for a sequence of TCP ports 1000, 5000, 2000, 4000, then you’ll see this sequence. You’ll see all sequences.

If the code for your implementation is open, then it’s easy for others to see this plausible flaw and point it out to you. You could then fix it by forcing the sequence to reset every time the first port is seen, or by also listening for bad ports (ones not part of the sequence) that likewise reset the sequence.
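The reset behavior described above can be sketched in a few lines of Python. This is a hypothetical illustration, not any real port-knocking daemon; the class name and the example sequence (taken from the post's 1000, 5000, 2000, 4000 example) are for demonstration only:

```python
# Sketch of a knock-sequence tracker that resets on any out-of-sequence
# port, which defeats a linear scan of all 64k ports.
KNOCK_SEQUENCE = [1000, 5000, 2000, 4000]  # example sequence from the post

class KnockTracker:
    def __init__(self, sequence):
        self.sequence = sequence
        self.progress = 0  # how many knocks have matched so far

    def observe(self, port):
        """Feed one observed port; return True when the full sequence lands."""
        if port == self.sequence[0]:
            # Seeing the first knock port always restarts the sequence.
            self.progress = 1
        elif self.progress and port == self.sequence[self.progress]:
            self.progress += 1
        else:
            # Any bad port (not the next expected one) resets the state.
            self.progress = 0
        return self.progress == len(self.sequence)
```

A scan that walks ports 1 through 65535 in order does contain the subsequence 1000, 5000, 2000, 4000, but every intervening port resets the tracker, so the knock never completes; only a client that sends exactly the right ports in a row unlocks it.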

If your code is closed, then your friends can’t see this problem. But your enemies are still highly motivated. They might find your code, find the compiled implementation, or simply guess ways around your likely implementation. The chances that you, some random defender, are better at this than the combined effort of all your attackers are very small. Opening things up to your friends gives you a greater edge to combat your enemies.

Thus, applying Kerckhoffs’s Principle to this problem means that you shouldn’t rely upon the secrecy of your port knocking algorithm, or on the fact that you are using port knocking in the first place.

The above post also discusses ssh on alternate ports. It points out that if an 0day is found in ssh, those who run the service on the default port of 22 will get hacked first, while those who run it on odd ports, like 7837, will have time to patch their services before getting owned.

But this just repeats the fallacy. It focuses only on the increased difficulty for attackers, while ignoring the increased difficulty for friends. Let’s say some new ssh 0day is announced. Everybody is going to rush to patch their servers. They are going to run tools like my masscan to quickly find everything listening on port 22, or a vuln scanner like Nessus. Everything on port 22 will quickly get patched. SSH servers running on port 7837, however, will not. On the other hand, Internet-wide scans like Shodan or the 2012 Internet Census may have already found that you are running ssh on port 7837. That means attackers can quickly attack it with the latest 0day even while you, the defender, are slow to patch it.

Running ssh on alternate ports is certainly useful because, as the article points out, it dramatically cuts down on the noise that defenders have to deal with. If somebody is brute forcing passwords on port 7837, then that’s a threat worth paying more attention to than somebody doing the same at port 22. But this benefit is a separate discussion from obscurity. Hiding an ssh server on an obscure port may thus be a good idea, but not because there is value in obscurity itself.

Thus, both port knocking and putting ssh on alternate ports are valid security strategies. However, once you mention the cliché “security by/through obscurity”, you add nothing useful to the mix.

Blackcat Games Domain Seized by UK Anti-Piracy Police

Post Syndicated from Andy original https://torrentfreak.com/blackcat-games-domain-seized-by-uk-anti-piracy-police-161019/

For the past several years, the UK’s Police Intellectual Property Crime Unit (PIPCU) has been contacting torrent, streaming, and file-hosting sites in an effort to close them down.

In the main, PIPCU has relied on its position as a government agency to add weight to its threats that, one way or another, sites will either be shut down or have their operations hampered.

Many sites located overseas didn’t take the threats particularly seriously but on several occasions, PIPCU has shown that it doesn’t need to leave the UK to make an impact. That appears to be the case today with private tracker Blackcats-Games.

With around 30K members, the long-established private tracker has been a major player in the gaming torrents scene for many years, but earlier today TorrentFreak received a tip that the site may have attracted the attention of the authorities.

With the site down, no further news was available, but in the past few hours, fresh signs have suggested that the site is indeed in some kind of legal trouble.

Results currently vary depending on ISP and region, but most visitors to the site’s Blackcats-Games.net domain are now greeted with the familiar banner that PIPCU places on sites when they’re under investigation.


TorrentFreak has confirmed that the police images appearing on the site’s main page are not stored on the front-facing server BlackCats-Games operated in Canada (OVH), but are actually being served from an IP address known to be under the control of the Police Intellectual Property Crime Unit.

The same server also provides the images for previously-seized domains including filecrop.com, mp3juices.com, immunicity.org, nutjob.eu, deejayportal.co.uk and oldskoolscouse.co.uk.


Of course, being greeted by these PIPCU images leads many users to the conclusion that the site may have been raided and/or its operators arrested. While that is yet to be confirmed by the authorities or sources close to the site, there is also a less dramatic option.

PIPCU is known to approach registrars with requests for them to suspend domains. The police argue that since they have determined that a particular site is acting illegally, registrars should comply with their requests.

While some like Canada-based EasyDNS have not caved in to the demands, others have. This has resulted in domains quickly being taken out of the control of site operators without any due process. It’s certainly possible that this could’ve happened to Blackcats-Games.net.

Furthermore, a separate micro-site (nefarious-gamer.com) on BlackCats’ server in Canada is still serving a short message, an indication that the server hasn’t been completely seized. However, there are probably other servers elsewhere, so only time will tell how they have been affected.

Until official word is received from one side or the other, the site’s users will continue to presume the worst. In 2015, PIPCU deprioritized domain suspensions, so more could be at play here.

Update: A source close to the site has informed TF that there has been an arrest but was unable to confirm who was detained.

Update2: A Reddit moderator says that the owner of Blackcats-Games has been raided and arrested, with equipment seized.


How to Help Achieve Mobile App Transport Security (ATS) Compliance by Using Amazon CloudFront and AWS Certificate Manager

Post Syndicated from Lee Atkinson original https://aws.amazon.com/blogs/security/how-to-help-achieve-mobile-app-transport-security-compliance-by-using-amazon-cloudfront-and-aws-certificate-manager/

Web and application users and organizations have expressed a growing desire to conduct most of their HTTP communication securely by using HTTPS. At its 2016 Worldwide Developers Conference, Apple announced that starting in January 2017, apps submitted to its App Store will be required to support App Transport Security (ATS). ATS requires all connections to web services to use HTTPS and TLS version 1.2. In addition, Google has announced that starting in January 2017, new versions of its Chrome web browser will mark HTTP websites as being “not secure.”

In this post, I show how you can generate Secure Sockets Layer (SSL) or Transport Layer Security (TLS) certificates by using AWS Certificate Manager (ACM), apply the certificates to your Amazon CloudFront distributions, and deliver your websites and APIs over HTTPS.


Hypertext Transfer Protocol (HTTP) was proposed originally without the need for security measures such as server authentication and transport encryption. As HTTP evolved from covering simple document retrieval to sophisticated web applications and APIs, security concerns emerged. For example, if someone were able to spoof a website’s DNS name (perhaps by altering the DNS resolver’s configuration), they could direct users to another web server. Users would be unaware of this because the URL displayed by the browser would appear just as the user expected. If someone were able to gain access to network traffic between a client and server, that individual could eavesdrop on HTTP communication and either read or modify the content, without the client or server being aware of such malicious activities.

Hypertext Transfer Protocol Secure (HTTPS) was introduced as a secure version of HTTP. It uses either SSL or TLS protocols to create a secure channel through which HTTP communication can be transported. Using SSL/TLS, servers can be authenticated by using digital certificates. These certificates can be digitally signed by one of the certificate authorities (CA) trusted by the web client. Certificates can mitigate website spoofing and can later be revoked by the CA, providing additional security. Revoked certificates are published by the authority on a certificate revocation list, or their status is made available via an online certificate status protocol (OCSP) responder. The SSL/TLS “handshake” that initiates the secure channel exchanges encryption keys in order to encrypt the data sent over it.

To avoid warnings from client applications regarding untrusted certificates, a CA that is trusted by the application must sign the certificates. The process of obtaining a certificate from a CA begins with generating a key pair and a certificate signing request. The certificate authority uses various methods in order to verify that the certificate requester is the owner of the domain for which the certificate is requested. Many authorities charge for verification and generation of the certificate.

Use ACM and CloudFront to deliver HTTPS websites and APIs

The process of requesting and paying for certificates, storing and transporting them securely, and repeating the process at renewal time can be a burden for website owners. ACM enables you to easily provision, manage, and deploy SSL/TLS certificates for use with AWS services, including CloudFront. ACM removes the time-consuming manual process of purchasing, uploading, and renewing certificates. With ACM, you can quickly request a certificate, deploy it on your CloudFront distributions, and let ACM handle certificate renewals. In addition to requesting SSL/TLS certificates provided by ACM, you can import certificates that you obtained outside of AWS.

CloudFront is a global content delivery network (CDN) service that accelerates the delivery of your websites, APIs, video content, and other web assets. CloudFront’s proportion of traffic delivered via HTTPS continues to increase as more customers use the secure protocol to deliver their websites and APIs.

CloudFront supports Apple’s ATS requirements for TLS 1.2, Perfect Forward Secrecy, server certificates with 2048-bit Rivest-Shamir-Adleman (RSA) keys, and a choice of ciphers. See more details in Supported Protocols and Ciphers.
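The TLS 1.2 floor that ATS imposes can also be illustrated on the client side. As a rough sketch (this is my own illustration using Python’s standard `ssl` module, not Apple’s implementation), a client context can be configured to refuse anything older than TLS 1.2 while keeping certificate and hostname verification on:

```python
import ssl

# Build a client context that refuses protocol versions older than
# TLS 1.2, mirroring the floor that ATS imposes on app connections.
ctx = ssl.create_default_context()
ctx.minimum_version = ssl.TLSVersion.TLSv1_2

# create_default_context() keeps certificate validation and hostname
# checking enabled, which ATS also expects of server connections.
print(ctx.minimum_version, ctx.verify_mode == ssl.CERT_REQUIRED, ctx.check_hostname)
```

A socket wrapped with this context against a server that only offers TLS 1.0 or 1.1 would fail the handshake rather than silently downgrade.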

The following diagram illustrates an architecture with ACM, a CloudFront distribution and its origins, and how they integrate to provide HTTPS access to end users and applications.

Solution architecture diagram

  1. ACM automates the creation and renewal of SSL/TLS certificates and deploys them to AWS resources such as CloudFront distributions and Elastic Load Balancing load balancers at your instruction.
  2. Users communicate with CloudFront over HTTPS. CloudFront terminates the SSL/TLS connection at the edge location.
  3. You can configure CloudFront to communicate to the origin over HTTP or HTTPS.

CloudFront enables easy HTTPS adoption. It provides a default *.cloudfront.net wildcard certificate and supports custom certificates, which can be either created by a third-party CA, or created and managed by ACM. ACM automates the process of generating and associating certificates with your CloudFront distribution for the first time and on each renewal. CloudFront supports the Server Name Indication (SNI) TLS extension (enabling efficient use of IP addresses when hosting multiple HTTPS websites) and dedicated-IP SSL/TLS (for older browsers and legacy clients that do not support SNI).

Keeping that background information in mind, I will now show you how you can generate a certificate with ACM and associate it with your CloudFront distribution.

Generate a certificate with ACM and associate it with your CloudFront distribution

In order to help deliver websites and APIs that are compliant with Apple’s ATS requirements, you can generate a certificate in ACM and associate it with your CloudFront distribution.

To generate a certificate with ACM and associate it with your CloudFront distribution:

  1. Go to the ACM console and click Get started.
    ACM "Get started" page
  2. On the next page, type the website’s domain name for your certificate. If applicable, you can enter multiple domains here so that the same certificate can be used for multiple websites. In my case, I type *.leeatk.com to create what is known as a wildcard certificate that can be used for any domain ending in .leeatk.com (that is a domain I own). Click Review and request.
    Request a certificate page
  3. Click Confirm and request. You must now validate that you own the domain. ACM sends an email with a verification link to the domain registrant, technical contact, and administrative contact registered in the Whois record for the domain. ACM also sends the verification link to email addresses commonly associated with an administrator of a domain: administrator, hostmaster, postmaster, and webmaster. ACM sends the same verification email to all these addresses in the expectation that at least one address is monitored by the domain owner. The link in any of the emails can be used to verify the domain.
    List of email addresses to which the email with verification link will be sent
  4. Until the certificate has been validated, the status of the certificate remains Pending validation. When I went through this approval process for *.leeatk.com, I received the verification email shown in the following screenshot. When you receive the verification email, click the link in the email to approve the request.
    Example verification email
  5. After you click I Approve on the landing page, you will then see a page that confirms that you have approved an SSL/TLS certificate for your domain name.
    SSL/TLS certificate confirmation page
  6. Return to the ACM console, and the certificate’s status should become Issued. You may need to refresh the webpage.
    ACM console showing the certificate has been issued
  7. Now that you have created your certificate, go to the CloudFront console and select the distribution with which you want to associate the certificate.
    Screenshot of associating the CloudFront distribution with which to associate the certificate
  8. Click Edit. Scroll down to SSL Certificate and select Custom SSL certificate. From the drop-down list, select the certificate provided by ACM. Select Only Clients that Support Server Name Indication (SNI). You could select All Clients if you want to support older clients that do not support SNI.
    Screenshot of choosing a custom SSL certificate
  9. Save the configuration by clicking Yes, Edit at the bottom of the page.
  10. Now, when you view the website in a browser (Firefox is shown in the following screenshot), you see a green padlock in the address bar, confirming that this page is secured with a certificate trusted by the browser.
    Screenshot showing green padlock in address bar
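For readers who prefer scripting to clicking through the console, the console choices above map onto the `ViewerCertificate` portion of a CloudFront distribution configuration. The sketch below shows that mapping as a plain data structure; the certificate ARN is a placeholder, not a real resource, and note that ACM certificates used with CloudFront must live in the us-east-1 region:

```python
# Sketch of the ViewerCertificate block of a CloudFront distribution
# config, equivalent to choosing "Custom SSL certificate" plus SNI in
# the console. The ARN below is a placeholder.
viewer_certificate = {
    "ACMCertificateArn": (
        "arn:aws:acm:us-east-1:111122223333:certificate/example-id"
    ),
    # "sni-only" corresponds to "Only Clients that Support SNI";
    # "vip" is the dedicated-IP option for older non-SNI clients.
    "SSLSupportMethod": "sni-only",
    # A TLSv1.2 security policy enforces the ATS protocol floor.
    "MinimumProtocolVersion": "TLSv1.2_2018",
}
```

Passing a structure like this to the CloudFront API (for example via an update-distribution call) has the same effect as steps 7 through 9 above.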

Configure CloudFront to redirect HTTP requests to HTTPS

We encourage you to use HTTPS to help make your websites and APIs more secure. Therefore, we recommend that you configure CloudFront to redirect HTTP requests to HTTPS.

To configure CloudFront to redirect HTTP requests to HTTPS:

  1. Go to the CloudFront console, select the distribution again, and then click Cache Behavior.
    Screenshot showing Cache Behavior button
  2. In my case, I have only one behavior in my distribution. (If I had more, I would repeat the process for each behavior for which I wanted HTTP-to-HTTPS redirection.) Click Edit.
  3. Next to Viewer Protocol Policy, choose Redirect HTTP to HTTPS, and click Yes, Edit at the bottom of the page.
    Screenshot of choosing Redirect HTTP to HTTPS

I could also consider employing an HTTP Strict Transport Security (HSTS) policy on my website. In this case, I would add a Strict-Transport-Security response header at my origin to instruct browsers and other applications to make only HTTPS requests to my website for a period of time specified in the header’s value. This ensures that if a user submits a URL to my website specifying only HTTP, the browser will make an HTTPS request anyway. This is also useful for websites that link to my website using HTTP URLs.
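To make the shape of that header concrete, here is a small helper of my own (not something CloudFront provides); the one-year max-age and the `includeSubDomains` directive are common choices, not requirements:

```python
def hsts_header(max_age_seconds=31536000, include_subdomains=True):
    """Build a Strict-Transport-Security header the origin could emit.

    max_age_seconds: how long browsers should force HTTPS (one year here).
    include_subdomains: whether the policy also covers subdomains.
    """
    value = f"max-age={max_age_seconds}"
    if include_subdomains:
        value += "; includeSubDomains"
    return "Strict-Transport-Security", value

name, value = hsts_header()
print(f"{name}: {value}")
# → Strict-Transport-Security: max-age=31536000; includeSubDomains
```

Once a browser has seen this header over a valid HTTPS response, it rewrites plain-HTTP requests to the site to HTTPS for the duration of max-age, without even making the insecure request first.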


CloudFront and ACM enable more secure communication between your users and your websites. CloudFront allows you to adopt HTTPS for your websites and APIs. ACM provides a simple way to request, manage, and renew your SSL/TLS certificates, and deploy them to AWS services such as CloudFront. Mobile application developers and API providers can now more easily meet Apple’s ATS requirements by using CloudFront, in time for the January 2017 deadline.

If you have comments about this post, submit them in the “Comments” section below. If you have implementation questions, please start a new thread on the CloudFront forum.

– Lee

Boxing Promoter Offers Cash Reward to Identify Pirate Streamer

Post Syndicated from Andy original https://torrentfreak.com/boxing-promoter-offers-cash-reward-to-identify-pirate-streamer-161015/

Every day, content production companies and their anti-piracy partners take a keen interest in people posting their material online without permission.

They can often identify infringers through technical means, relying on IP address, financial, and similar information. However, some also resort to chasing pirates in the physical realm.

This is the approach currently being taken by Duco Events, who are said to be recognized by the World Boxing Organization as the leading promoter in the Asia Pacific region. Duco partners with companies including ESPN, Fox Sports, MAIN EVENT, SKY Sports and SKY Arena, and it is tired of having its content pirated.

One of the biggest thorns in its side is New Zealander James Bryant. Earlier this year he informed NZ Herald that he intended to stream a Duco boxing event taking place in July. That led to a private investigator being sent to his parents’ Auckland house to serve court papers. He wasn’t there.

Bryant, who claims to be a web developer and SEO specialist, says that on a separate occasion another person emailed him looking for a computer repair. Suspicious, he gave a friend’s address, which led to an investigator sitting outside there all day. He eventually asked for Bryant by name.

“They’ve called me twice, and they told me that it’s getting serious now, that it was too big to go away,” Bryant said.

That was back in the summer and it appears that as promised, Duco haven’t forgotten about Bryant. However, they still haven’t managed to locate him.

“I have been on holiday for the last few months and they are not doing a very good job at finding me,” Bryant said last week.

“It doesn’t bother me one bit … as soon as they find me, I will make it my personal mission to stream every event.”

Bryant’s defiance was not well received, with Duco chief executive Martin Snedden rejecting claims that chasing streamers is counter-productive.

“In our view it is out-and-out theft, and people are starting to get the message that the risk isn’t worth getting involved. We know we can’t eradicate this, but we’re getting better at running interference,” Snedden said.

Now it appears that Duco are turning up the heat. In a posting this week to the company’s Facebook page, the boxing promotions outfit sought assistance in finding the elusive Bryant.


But if Duco thought that this would prompt Bryant to give himself up, they were very wrong. Instead, the self-confessed streamer has started a fund-raiser with two aims. First, to raise money to fight Duco, and second, to set up a new streaming service.

“My mission is to raise money for the upcoming battle and also to raise funds which will be put into developing a dedicated website which will be hosted on an overseas server which will broadcast live events as they happen,” Bryant explains.

“I am currently setting up a site which will provide live streams of legal events such as music, sport and festivals. It will be hosted off shore in any event that the courts do not allow me access to a computer, I plan on hosting a wide range of different events.

“I believe that as New Zealanders we shouldn’t have to feed the pockets of the corporations to watch sports we care about. It’s time to stand up New Zealand!” he concludes.

Seconds away….round two.


RIAA: CloudFlare Shields Pirates and Frustrates Blocking Efforts

Post Syndicated from Ernesto original https://torrentfreak.com/riaa-cloudflare-shields-pirates-and-frustrates-blocking-efforts-161013/

Following in the footsteps of the MPAA, the RIAA has submitted its overview of “notorious markets” to the Office of the US Trade Representative (USTR).

These annual submissions help to guide the U.S. Government’s position toward foreign countries when it comes to copyright enforcement.

This year the RIAA’s report includes 47 alleged pirate sites in various categories. As in previous years, popular torrent sites such as The Pirate Bay and ExtraTorrent are prominently mentioned.

There’s also a strong focus on so-called “stream-ripping” sites. While these have been around for roughly a decade, the music industry sees them as a growing threat, which is also evidenced by the recent lawsuit against YouTube-MP3.

According to the music group, it is getting harder to target these sites, as they are increasingly taking precautions.

“It is exceedingly difficult to track, enforce against, and accurately associate various notorious websites,” RIAA writes, listing domain hopping, reverse proxy services and anonymous domain name registrations as the main factors.

Obstructing factors


The Pirate Bay is one of the prime examples of a site that has switched domain names in the past. Due to various enforcement efforts it burnt through more than a dozen domains with ease.

In addition, TPB and other pirate sites are increasingly using the popular CDN CloudFlare. Besides saving costs, it also acts as a reverse proxy and shields the true hosting location from public view.

This hasn’t gone unnoticed by the RIAA which repeatedly mentions CloudFlare in its report.

“BitTorrent sites, like many other pirate sites, are increasing (sic) turning to Cloudflare because routing their site through Cloudflare obfuscates the IP address of the actual hosting provider, masking the location of the site,” the RIAA writes.

Throughout the report the RIAA attempts to point out the hosting location of all pirate sites, but it often has to put down “obfuscated by Cloudflare” instead.

Obstructing factors


Aside from making it harder to identify the hosting location, CloudFlare can also make it harder for ISPs to block websites.

Traditionally, some ISPs have blocked pirate sites by IP address, but this is no longer an option since CloudFlare customers share IPs with other sites, which can lead to overblocking.

“The use of Cloudflare’s services can also act to frustrate site-blocking orders because multiple non-infringing sites may share a Cloudflare IP address with the infringing site,” the RIAA notes in its report.

While CloudFlare itself isn’t tagged as a notorious site, the fact that both the RIAA and MPAA are highlighting the service in their report is not without reason. The industry groups are likely to demand a more proactive anti-piracy policy from CloudFlare in the future.

Apart from all the doom and gloom, there is also a positive development. After being labeled as a notorious pirate site for years, the RIAA has taken social network VK.com off its list. This is the direct result of licensing agreements between the site and various major labels.

“Russia’s vKontakte has now reached licensing agreements with major record companies and has thus been removed from our list,” the RIAA writes.

Finally, it’s worth noting that MP3Skull is no longer on the list. As we suggested yesterday, the RIAA believes that the people behind the site switched their operation to Emp3world.ch. Curiously, this knowledge didn’t prevent them from seizing the domain name of a seemingly unrelated site.

The full list of RIAA’s “notorious” pirate sites can be found below, and the full report is available here (pdf).

Stream-Ripping Sites

– Youtube-mp3.org
– Mp3juices.cc
– Convert2mp3.net
– Aiomp3.com
– Clipconverter.cc
– Savefrom.net
– Youtube2mp3.cc
– Onlinevideoconverter.com

Search-and-Download Sites

– Emp3world.ch
– Audiocastle.biz
– Viperial2.com
– Im1music.info
– Albumkings.com
– Newalbumreleases.net

BitTorrent Indexing and Tracker Sites

– Thepiratebay.org
– Extratorrent.cc
– Bitsnoop.com
– Isohunt.to
– Torrentdownloads.me
– LimeTorrents.cc
– Rarbg.to
– 1337x.to


File-Hosting Sites

– 4shared.com
– Uploaded.net
– Zippyshare.com
– Rapidgator.net
– Dopefile.pk
– Chomikuj.pl
– Turbobit.net
– Hitfile.net
– 1fichier.com
– Bigfile.to
– Share-online.biz
– Ulozto.cz

Unlicensed Pay-for-Download Sites

– Mp3va.com
– Soundsbox.com
– Iomoio.com
– Soundike.com
– Payplay.fm
– Mp3million.com
– Megaboon.com
– Melodishop.com
– Melodysale.com
– Mp3caprice.com
– Ivave.com
– Mediasack.com
– Goldenmp3.ru


MPAA Reports Pirate Sites and Hosting Providers to U.S. Government

Post Syndicated from Ernesto original https://torrentfreak.com/mpaa-reports-pirate-sites-and-hosting-providers-to-u-s-government-161010/

Responding to a request from the Office of the US Trade Representative (USTR), the MPAA has sent in its annual list of notorious markets.

In its latest submission the Hollywood group targets a wide variety of “rogue” sites and services which they claim are promoting the illegal distribution of movies and TV-shows, with declining incomes and lost jobs in the movie industry as a result.

“The criminals who profit from the most notorious markets throughout the world threaten the very heart of our industry and in so doing they threaten the livelihoods of the people who give it life,” the MPAA writes.

What’s new this year is that the MPAA calls out several hosting providers. These companies refuse to take pirate sites offline following complaints, even when the MPAA views them as blatantly violating the law.

“Hosting companies provide the essential infrastructure required to operate a website,” MPAA writes. “Given the central role of hosting providers in the online ecosystem, it is very concerning that many refuse to take action upon being notified.”

The Hollywood group specifically mentions Private Layer, Altushost and Netbrella, which are linked to various countries including the Netherlands, Panama, Sweden and Switzerland.

CDN provider CloudFlare is also named. As a US-based company it can’t be included in the list. However, MPAA explains that it is often used as an anonymization tool by sites and services that are mentioned in the report.

“An example of a CDN frequently exploited by notorious markets to avoid detection and enforcement is Cloudflare. CloudFlare is a CDN that also provides reverse proxy functionality. Reverse proxy functionality hides the real IP address of a web server.”

Stressing the importance of third-party services, the MPAA notes that domain name registrars can also be seen as possible “notorious markets.” As an example, the report mentions the Indian Public Domain Registry (PDR) which has repeatedly refused to take action against pirate sites.

At the heart of the MPAA’s report are as always the pirate sites themselves. This year they list 23 sites in separate categories, each with a suspected location, as defined by the movie industry group.

Torrent Sites

According to the MPAA, BitTorrent remains the most popular source of P2P piracy, despite the shutdowns of large sites such as KAT, Torrentz and YTS.

The Pirate Bay has traditionally been one of the main targets. Based on data from Alexa and SimilarWeb, the MPAA says that TPB has about 47 million unique visitors per month.

The MPAA writes that the site was hit by various enforcement actions in recent years. They also mistakenly suggest that the site is no longer the number one pirate site, but add that it gained traction after KAT and Torrentz were taken down.

“While it has never returned to its number one position, it has had a significant comeback after kat.cr and torrentz.eu went offline in 2016,” the MPAA writes.

ExtraTorrent is another prime target. The site offers millions of torrents and is affiliated with the Trust.Zone VPN, which they advertise on their site.

“Extratorrent.cc claims astonishing piracy statistics: offering almost three million free files with sharing optimized through over 64 million seeders and more than 39 million leechers.

“The homepage currently displays a message warning users to use a VPN when downloading torrents. Extratorrent.cc is affiliated with Trust.Zone,” MPAA adds.

The full list of reported torrent sites is as follows:

-1337x.to (Switzerland)
-Extratorrent.cc (Latvia)
-Rarbg.to (Bosnia and Herzegovina)
-Rutracker.org (Russia)
-ThePirateBay.org (Unknown)

Direct Download and Streaming Cyberlockers

The second category of pirate sites reported by the MPAA is cyberlockers. The movie industry group points out that these sites generate millions of dollars in revenue, citing a report from Netnames.

The “Movshare Group,” which allegedly operates Nowvideo.sx, Movshare.net, Novamov.com, Videoweed.es, Nowdownload.ch, Divxstage.to and several other pirate sites is a particularly large threat, they say.

As in previous submissions VKontakte, Russia’s equivalent of Facebook, is also listed as a notorious market.

-Allmyvideos.net (Netherlands)
-Nowvideo.sx and the “Movshare Group” (several locations)
-Openload.co (Netherlands)
-Rapidgator.net (Russia)
-Uploaded.net (Netherlands/Switzerland)
-VK.com (Russia)

Linking Websites

Finally, there are various linking websites, many of which focus on a foreign audience. These sites don’t host the infringing material, but only link to it. The full list of linking sites is as follows.

-123movies.to (Unknown)
-Filmesonlinegratis.net (Brazil/Portugal)
-Kinogo.club (Netherlands)
-Movie4k.to (Russia)
-Newmovie-hd.com (Thailand)
-Pelis24.com (Spain/Mexico/Argentina/Venezuela/Peru/Chile)
-Primewire.ag (Switzerland)
-Projectfreetv.at (Romania)
-Putlocker.is (Switzerland/Vietnam)
-Repelis.tv (Mexico/Argentina/Spain/Peru/Venezuela)
-Watchseries.ac (France)

In its closing comments the Hollywood industry group calls on USTR and the U.S. government at large to help combat these threats, either directly or by encouraging foreign nations to take action.

“We strongly support efforts by the U.S. government to work with trading partners to protect and enforce intellectual property rights and, in so doing, protect U.S. jobs,” the MPAA concludes.

MPAA’s full submission is available here.


Russia Mulls Downloading Fines if Site Blocking Fails

Post Syndicated from Andy original https://torrentfreak.com/russia-mulls-downloading-fines-if-site-blocking-fails-161008/

Well over a decade ago, when peer-to-peer file-sharing was in its relative infancy, the RIAA thought it could stop piracy by punishing end users with ‘fines’ and lawsuits.

What followed was a punishing campaign that alienated consumers and ultimately failed to achieve its goals. Only subsequent widespread access to legitimate content proved to be effective in bringing piracy rates down.

But despite improved availability, piracy is alive and well, which means that groups all over the world continue to look for solutions to the problem. More innovation would be nice but Russian authorities appear to be looking into the past.

According to sources cited by Russian news publication RNS, the government is considering introducing a system of fines for Internet users who download copyrighted content without permission.

“It is expected that evidence of a download of an illegal movie, for example, will be shown by providing an IP address, then the offender will be sent the penalty fine,” a source familiar with the inter-agency consultations told the publication.

It’s understood that if ‘pirate’ site-blocking fails, authorities favor the kind of system that German Internet users are already subjected to, with fines up to 1000 euros per logged offense.

“If the initiative with blocking sites that publish illegal content does not work, [we] will be discussing the German model,” the source said.

What isn’t known at this stage is who will be issuing the ‘fines’ or who will benefit from the revenue they create. What is clear, however, is that introducing this kind of system won’t be straightforward.

“There are two ways [to reduce piracy] – to block websites and penalize the user,” the source said. “The second option is effective, but we need to understand the social consequences.”

Reached for comment, Russia’s Ministry of Culture confirmed knowledge of the proposals but said that no formal consultations on how such a system might operate have yet been conducted.

The Ministry of Communications also admitted knowledge of the discussions but a spokesman urged caution.

“I’m not sure that it would be straightforward to implement [a system of fines] in Russia, but it is always the responsibility…of the person who posted the pirated content, and the one who deliberately consumes pirated content,” he said.

“The responsibility, in principle, should be [with these people]. How to implement a system in the Russian reality…that for sure requires a cautious and incremental process.”


IPv6 Support Update – CloudFront, WAF, and S3 Transfer Acceleration

Post Syndicated from Jeff Barr original https://aws.amazon.com/blogs/aws/ipv6-support-update-cloudfront-waf-and-s3-transfer-acceleration/

As a follow-up to our recent announcement of IPv6 support for Amazon S3, I am happy to be able to tell you that IPv6 support is now available for Amazon CloudFront, Amazon S3 Transfer Acceleration, and AWS WAF and that all 50+ CloudFront edge locations now support IPv6. We are enabling IPv6 across all of our Autonomous System Networks (ASNs) in a phased rollout that starts today and will extend across all of the networks over the next few weeks.

CloudFront IPv6 Support
You can now enable IPv6 support for individual Amazon CloudFront distributions. Viewers and networks that connect to a CloudFront edge location over IPv6 will automatically be served content over IPv6. Those that connect over IPv4 will continue to work as before. Connections to your origin servers will be made using IPv4.

Newly created distributions are automatically enabled for IPv6; you can modify an existing distribution by checking Enable IPv6 in the console or setting it via the CloudFront API.

Here are a couple of important things to know about this new feature:

  • Alias Records – After you enable IPv6 support for a distribution, the DNS entry for the distribution will be updated to include an AAAA record. If you are using Amazon Route 53 and an alias record to map all or part of your domain to the distribution, you will need to add an AAAA alias to the domain.
  • Log Files – If you have enabled CloudFront Access Logs, IPv6 addresses will start to show up in the c-ip field; make sure that your log processing system knows what to do with them.
  • Trusted Signers – If you make use of Trusted Signers in conjunction with an IP address whitelist, we strongly recommend the use of an IPv4-only distribution for Trusted Signer URLs that have an IP whitelist and a separate, IPv4/IPv6 distribution for the actual content. This model sidesteps an issue that would arise if the signing request arrived over an IPv4 address and was signed as such, only to have the request for the content arrive via a different, IPv6 address that is not on the whitelist.
  • CloudFormation – CloudFormation support is in the works. With today’s launch, distributions that are created from a CloudFormation template will not be enabled for IPv6. If you update an existing stack, the setting will remain as-is for any distributions referenced in the stack.
  • AWS WAF – If you use AWS WAF in conjunction with CloudFront, be sure to update your WebACLs and your IP rulesets as appropriate in order to whitelist or blacklist IPv6 addresses.
  • Forwarded Headers – When you enable IPv6 for a distribution, the X-Forwarded-For header that is presented to the origin will contain an IPv6 address. You need to make sure that the origin is able to process headers of this form.
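That last point is worth planning for. As a rough sketch (illustrative Python, not part of the announcement) of what an origin now has to handle: the left-most entry in X-Forwarded-For identifies the original client, and it may be an IPv6 address:

```python
import ipaddress

def client_ip(x_forwarded_for):
    """Return the left-most (original client) address from an
    X-Forwarded-For header; it may be IPv4 or IPv6."""
    first = x_forwarded_for.split(",")[0].strip()
    return ipaddress.ip_address(first)

# An origin that assumes dotted-quad IPv4 would choke on this header:
addr = client_ip("2001:db8::1234, 203.0.113.10")
print(addr.version)  # 6
```

Parsers built on a validation library that accepts both address families, such as Python’s ipaddress module, make the upgrade mostly a matter of auditing hand-rolled IPv4-only code.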

To learn more, read IPv6 Support for Amazon CloudFront.

AWS WAF IPv6 Support
AWS WAF helps you to protect your applications from application-layer attacks (read New – AWS WAF to learn more).

AWS WAF can now inspect requests that arrive via IPv4 or IPv6 addresses. You can create web ACLs that match IPv6 addresses, as described in Working with IP Match Conditions:

All existing WAF features will work with IPv6 and there will be no visible change in performance. IPv6 addresses will also appear in the Sampled Requests collected and displayed by WAF.
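Conceptually, an IP match condition is a set-membership test against CIDR blocks, now for both address families. The same check can be sketched in a few lines of Python (illustrative only; real rules are configured through the WAF console or API, and these ranges are made up):

```python
import ipaddress

# Hypothetical blocklist mixing IPv4 and IPv6 CIDR ranges, as a WAF
# IP match condition now can.
BLOCKED = [ipaddress.ip_network(c) for c in ("192.0.2.0/24", "2001:db8::/32")]

def is_blocked(ip):
    addr = ipaddress.ip_address(ip)
    # Membership tests across IP versions simply return False,
    # so a mixed list of v4 and v6 networks is safe to scan.
    return any(addr in net for net in BLOCKED)

print(is_blocked("2001:db8::beef"))  # True
print(is_blocked("198.51.100.1"))    # False
```

The practical takeaway from the announcement is the same as for this sketch: a v4-only rule list never matches a v6 client, so both families must be listed explicitly.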

S3 Transfer Acceleration IPv6 Support
This important new S3 feature (read AWS Storage Update – Amazon S3 Transfer Acceleration + Larger Snowballs in More Regions for more info) now has IPv6 support. To use it, switch your uploads to the new dual-stack endpoint: change https://BUCKETNAME.s3-accelerate.amazonaws.com to https://BUCKETNAME.s3-accelerate.dualstack.amazonaws.com.
Here’s some code that uses the AWS SDK for Java to create a client object and enable dual-stack transfer:

AmazonS3Client s3 = new AmazonS3Client();
s3.setS3ClientOptions(S3ClientOptions.builder()
    .setAccelerateModeEnabled(true)  // use the Transfer Acceleration endpoint
    .enableDualstack()               // and its IPv4/IPv6 dual-stack variant
    .build());

Most applications and network stacks will prefer IPv6 automatically, and no further configuration should be required. You should plan to take a look at the IAM policies for your buckets in order to make sure that they will work as expected in conjunction with IPv6 addresses.
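For example, a bucket policy that restricts access with an aws:SourceIp condition and lists only IPv4 ranges will stop matching clients that arrive over IPv6. A sketch of what an updated policy might look like (bucket name and CIDR ranges are placeholders, built here as a Python dict for clarity):

```python
import json

# Hypothetical bucket and address ranges -- substitute your own.
policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Sid": "AllowFromOfficeV4AndV6",
        "Effect": "Allow",
        "Principal": "*",
        "Action": "s3:GetObject",
        "Resource": "arn:aws:s3:::examplebucket/*",
        "Condition": {
            "IpAddress": {
                # An IPv4-only list must gain IPv6 ranges to keep working
                # once clients start arriving over the dual-stack endpoint.
                "aws:SourceIp": ["203.0.113.0/24", "2001:DB8:1234:5678::/64"]
            }
        },
    }],
}
print(json.dumps(policy, indent=2))
```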

To learn more, read about Making Requests to Amazon S3 over IPv6.

Don’t Forget to Test
As a reminder, if IPv6 connectivity to any AWS region is limited or non-existent, IPv4 will be used instead. Also, as I noted in my earlier post, the client system can be configured to support IPv6 but connected to a network that is not configured to route IPv6 packets to the Internet. Therefore, we recommend some application-level testing of end-to-end connectivity before you switch to IPv6.



MOSS supports four more open source projects

Post Syndicated from ris original http://lwn.net/Articles/702567/rss

The Mozilla Open Source Support program has awarded grants
to four projects this quarter. “On the Foundational
Technology track, we awarded $100,000 to Redash, a tool for building
visualizations of data for better decision-making within organizations, and
$50,000 to Review Board,
software for doing web-based source code review. Both of these pieces of
software are in heavy use at Mozilla. We also awarded $100,000 to Kea, the successor to the venerable ISC
DHCP codebase, which deals with allocation of IP addresses on a
network. Mozilla uses ISC DHCP, which makes funding its replacement a
natural move even though we haven’t deployed it yet. On the Mission
Partners track, we awarded $56,000 to Speech Rule Engine,
a code library which converts mathematical markup into vocalised form
(speech) for the sight-impaired, allowing them to fully appreciate
mathematical and scientific content on the web.
” (Thanks to Paul Wise)

Judge: Vague IP-Address Evidence is Not Enough to Expose BitTorrent ‘Pirates’

Post Syndicated from Ernesto original https://torrentfreak.com/judge-vague-ip-address-evidence-not-enough-expose-bittorrent-pirates-161004/

While relatively underreported, many U.S. district courts are still swamped with lawsuits against alleged film pirates.

The copyright holders who initiate these cases generally rely on an IP address as evidence. This information is collected from BitTorrent swarms and linked to a geographical location using geolocation tools.

With this information in hand, they then ask the courts to grant a subpoena, forcing Internet providers to hand over the personal details of the associated account holder.

In most cases, courts sign off on these subpoenas quite easily, but in a recent case California Magistrate Judge Mitchell Dembin decided to ask for further clarification and additional evidence.

The case in question was filed by Criminal Productions, the makers of the 2016 movie Criminal, who are linked to the well-known pirate chasers Nu Image and Millennium Films.

The movie makers filed a complaint against a “John Doe” and list an IP-address that, according to a geolocation lookup, is linked to a location in San Diego County.

Magistrate Judge Mitchell Dembin, however, is not ready to issue a subpoena based on that information alone. Specifically, he notes that the complaint lacks details on when the geolocation effort was performed.

If the copyright holder looked up the IP-address information after the infringements the location and ISP info may not be accurate at all, as the assignment may have changed.

“It is most likely that the subscriber is a residential user and the IP address assigned by the ISP is ‘dynamic’. Consequently, it matters when the geolocation was performed,” Judge Dembin writes (pdf).

“If performed in temporal proximity to the offending downloads, the geolocation may be probative of the physical location of the subscriber. If not, less so, potentially to the point of irrelevance,” he adds.

This clarification is indeed important but has never been made before in court, as far as we know.
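Put as a rule of thumb: the evidential weight of a geolocation lookup decays with the gap between the alleged infringement and the lookup, because a dynamic address may have been reassigned in between. A toy illustration (the thresholds are arbitrary and purely for illustration):

```python
from datetime import datetime, timedelta

def geolocation_weight(infringed_at, geolocated_at,
                       lease_estimate=timedelta(days=7)):
    """Crude illustration: a lookup made within a typical dynamic-IP
    reassignment window of the infringement is probative; months
    later, far less so."""
    gap = abs(geolocated_at - infringed_at)
    if gap <= lease_estimate:
        return "probative"
    if gap <= timedelta(days=30):
        return "weak"
    return "potentially irrelevant"

infringement = datetime(2016, 5, 1)
filing_lookup = datetime(2016, 9, 1)   # ~4 months later, as in this case
print(geolocation_weight(infringement, filing_lookup))  # potentially irrelevant
```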

In the original request, Criminal Productions only writes that the geolocation data was obtained prior to filing the lawsuit, but it’s not clear whether that was at the time of the infringements, which took place several months ago.

“This is not good enough. As much as four months may have passed between the alleged infringement and the geolocation,” Judge Dembin writes.

“Plaintiff must provide the date that geolocation occurred and, if performed closer to the filing date, must provide further support and argument regarding the probative value of the geolocation.”

Based on the missing information the motion for discovery was denied, meaning that Criminal Productions didn’t get the subpoena they were after.

A few days after this denial the filmmakers submitted an amended request providing additional information. However, it was still unclear when the geolocation information was actually obtained, so the Judge denied it again yesterday (pdf).

Denied again


The issue raised in this case is interesting from an accuracy standpoint. Copyright holders in these cases always link an IP-address to a location and ISP, if only to show that the case was filed in the right district. However, they usually don’t say when this geolocation data was obtained.

ISPs do of course keep a log of the IP-address assignment changes. However, the right jurisdiction has to be established before a subpoena is issued.

Judge Dembin therefore suggests that rightsholders should get the information at the time of the infringement, which may be easier said than done. Geolocation databases are far from perfect and most are not updated instantly.

This is something the residents of a Kansas farm know all too well, as their house is the default location of 600 million IP-addresses, which causes them quite a bit of trouble.

Just last month EFF released a whitepaper urging courts to take caution when processing IP-address information. Whether Judge Dembin has read this is unknown, but his actions are definitely in line with the paper’s findings.


Deploy an App to an AWS OpsWorks Layer Using AWS CodePipeline

Post Syndicated from Daniel Huesch original http://blogs.aws.amazon.com/application-management/post/Tx2WKWC9RIY0RD8/Deploy-an-App-to-an-AWS-OpsWorks-Layer-Using-AWS-CodePipeline

AWS CodePipeline lets you create continuous delivery pipelines that automatically track code changes from sources such as AWS CodeCommit, Amazon S3, or GitHub. Now, you can use AWS CodePipeline as a code change-management solution for apps, Chef cookbooks, and recipes that you want to deploy with AWS OpsWorks.

This blog post demonstrates how you can create an automated pipeline for a simple Node.js app by using AWS CodePipeline and AWS OpsWorks. After you configure your pipeline, every time you update your Node.js app, AWS CodePipeline passes the updated version to AWS OpsWorks. AWS OpsWorks then deploys the updated app to your fleet of instances, leaving you to focus on improving your application. AWS makes sure that the latest version of your app is deployed.

Step 1: Upload app code to an Amazon S3 bucket

The Amazon S3 bucket must be in the same region in which you later create your pipeline in AWS CodePipeline. For now, AWS CodePipeline supports the AWS OpsWorks provider in the us-east-1 region only; all resources in this blog post should be created in the US East (N. Virginia) region. The bucket must also be versioned, because AWS CodePipeline requires a versioned source. For more information, see Using Versioning.

Upload your app to an Amazon S3 bucket

  1. Download a ZIP file of the AWS OpsWorks sample Node.js app and save it to a convenient location on your local computer: https://s3.amazonaws.com/opsworks-codepipeline-demo/opsworks-nodejs-demo-app.zip.
  2. Open the Amazon S3 console at https://console.aws.amazon.com/s3/. Choose Create Bucket. Be sure to enable versioning.
  3. Choose the bucket that you created and upload the ZIP file that you saved in step 1.


  4. In the Properties pane for the uploaded ZIP file, make a note of the S3 link to the file. You will need the bucket name and the ZIP file name portion of this link to create your pipeline.

Step 2: Create an AWS OpsWorks to Amazon EC2 service role

1. Go to the Identity and Access Management (IAM) service console, and choose Roles.
2. Choose Create Role, and name it aws-opsworks-ec2-role-with-s3.
3. In the AWS Service Roles section, choose Amazon EC2, and then choose the policy called AmazonS3ReadOnlyAccess.
4. The new role should appear in the Roles dashboard.

Step 3: Create an AWS OpsWorks Chef 12 Linux stack

To use AWS OpsWorks as a provider for a pipeline, you must first have an AWS OpsWorks stack, a layer, and at least one instance in the layer. As a reminder, the Amazon S3 bucket to which you uploaded your app must be in the same region in which you later create your AWS OpsWorks stack and pipeline, US East (N. Virginia).

1. In the OpsWorks console, choose Add Stack, and then choose a Chef 12 stack.
2. Set the stack’s name to CodePipeline Demo and make sure the Default operating system is set to Linux.
3. Enable Use custom Chef cookbooks.
4. For Repository type, choose HTTP Archive, and then use the following cookbook repository on S3: https://s3.amazonaws.com/opsworks-codepipeline-demo/opsworks-nodejs-demo-cookbook.zip. This repository contains a set of Chef cookbooks that include Chef recipes you’ll use to install the Node.js package and its dependencies on your instance. You will use these Chef recipes to deploy the Node.js app that you prepared in step 1.1.

Step 4: Create and configure an AWS OpsWorks layer

Now that you’ve created an AWS OpsWorks stack called CodePipeline Demo, you can create an OpsWorks layer.

1. Choose Layers, and then choose Add Layer in the AWS OpsWorks stack view.
2. Name the layer Node.js App Server. For Short Name, type app1, and then choose Add Layer.
3. After you create the layer, open the layer’s Recipes tab. In the Deploy lifecycle event, type nodejs_demo. Later, you will link this to a Chef recipe that is part of the Chef cookbook you referenced when you created the stack in step 3.4. This Chef recipe runs every time a new version of your application is deployed.
4. Now, open the Security tab, choose Edit, and choose AWS-OpsWorks-WebApp from the Security groups drop-down list. You will also need to set the EC2 Instance Profile to use the service role you created in step 2.2 (aws-opsworks-ec2-role-with-s3).

Step 5: Add your App to AWS OpsWorks

Now that your layer is configured, add the Node.js demo app to your AWS OpsWorks stack. When you create the pipeline, you’ll be required to reference this demo Node.js app.

  1. Have the Amazon S3 bucket link from the step 1.4 ready. You will need the link to the bucket in which you stored your test app.
  2. In AWS OpsWorks, open the stack you created (CodePipeline Demo), and in the navigation pane, choose Apps.
  3. Choose Add App.
  4. Provide a name for your demo app (for example, Node.js Demo App), and set the Repository type to an S3 Archive. Paste your S3 bucket link (s3://bucket-name/file name) from step 1.4.
  5. Now that your app appears in the list on the Apps page, add an instance to your OpsWorks layer.

Step 6: Add an instance to your AWS OpsWorks layer

Before you create a pipeline in AWS CodePipeline, set up at least one instance within the layer you defined in step 4.

  1. Open the stack that you created (CodePipeline Demo), and in the navigation pane, choose Instances.
  2. Choose +Instance, and accept the default settings, including the hostname, size, and subnet. Choose Add Instance.

  3. By default, the instance is in a stopped state. Choose start to start the instance.

Step 7: Create a pipeline in AWS CodePipeline

Now that you have a stack and an app configured in AWS OpsWorks, create a pipeline with AWS OpsWorks as the provider to deploy your app to your specified layer. If you update your app or your Chef deployment recipes, the pipeline runs again automatically, triggering the deployment recipe to run and deploy your updated app.

This procedure creates a simple pipeline that includes only one Source and one Deploy stage. However, you can create more complex pipelines that use AWS OpsWorks as a provider.

To create a pipeline

  1. Open the AWS CodePipeline console in the U.S. East (N. Virginia) region.
  2. Choose Create pipeline.
  3. On the Getting started with AWS CodePipeline page, type MyOpsWorksPipeline, or a pipeline name of your choice, and then choose Next step.
  4. On the Source Location page, choose Amazon S3 from the Source provider drop-down list.
  5. In the Amazon S3 details area, type the Amazon S3 bucket path to your application, in the format s3://bucket-name/file name. Refer to the link you noted in step 1.4. Choose Next step.
  6. On the Build page, choose No Build from the drop-down list, and then choose Next step.
  7. On the Deploy page, choose AWS OpsWorks as the deployment provider.


  8. Specify the names of the stack, layer, and app that you created earlier, then choose Next step.
  9. On the AWS Service Role page, choose Create Role. On the IAM console page that opens, you will see the role that will be created for you (AWS-CodePipeline-Service). From the Policy Name drop-down list, choose Create new policy. Be sure the policy document has the following content, and then choose Allow.
    For more information about the service role and its policy statement, see Attach or Edit a Policy for an IAM Service Role.


  10. On the Review your pipeline page, confirm the choices shown on the page, and then choose Create pipeline.

The pipeline should now start deploying your app to your OpsWorks layer on its own. Wait for deployment to finish; you’ll know it’s finished when Succeeded is displayed in both the Source and Deploy stages.

Step 8: Verifying the app deployment

To verify that AWS CodePipeline deployed the Node.js app to your layer, sign in to the instance you created in step 6. You should be able to see and use the Node.js web app.

  1. On the AWS OpsWorks dashboard, choose the stack and the layer to which you just deployed your app.
  2. In the navigation pane, choose Instances, and then choose the public IP address of your instance to view the web app. The running app will be displayed in a new browser tab.


  3. To test the app, on the app’s web page, in the Leave a comment text box, type a comment, and then choose Send. The app adds your comment to the web page. You can add more comments to the page, if you like.

Wrap up

You now have a working and fully automated pipeline. As soon as you make changes to your application’s code and update the S3 bucket with the new version of your app, AWS CodePipeline automatically collects the artifact and uses AWS OpsWorks to deploy it to your instance, by running the OpsWorks deployment Chef recipe that you defined on your layer. The deployment recipe starts all of the operations on your instance that are required to support a new version of your artifact.

To learn more about Chef cookbooks and recipes: https://docs.chef.io/cookbooks.html

To learn more about the AWS OpsWorks and AWS CodePipeline integration: https://docs.aws.amazon.com/opsworks/latest/userguide/other-services-cp.html

UK IP Crime Report 2016 Reveals IPTV/Kodi Piracy as Growing Threat

Post Syndicated from Andy original https://torrentfreak.com/uk-ip-crime-report-2016-reveals-iptvkodi-piracy-as-growing-threat-160929/

For more than a decade the IP Crime Group and the Intellectual Property Office have collaborated to produce an assessment of the level of IP crime in the UK. Their annual IP Crime Report details the responses of businesses, anti-piracy groups, and government agencies.

As usual, this year’s report covers all areas of IP crime, both in the physical realm and online. However, it is the latter area that appears to be causing the most concern to participating anti-piracy groups.

“Perhaps the area where IP crime statistics most often reach jaw-dropping levels is in relation to the industries providing digital content,” the report reads.

“During a sample three-month period last year, 28% of those questioned admitted their music downloads in the UK came from illegal sources. Similarly, 23% of films, 22% of software, 16% of TV and 15% of games were downloaded in breach of copyright.”

While noting that illicit music downloads have actually reduced in recent years, the report highlights areas that aren’t doing so well, TV show consumption for example.

“The reasons for the spike in TV copyright infringement appear to be, in part, technological, with ‘unofficial services’ such as uTorrent, BitTorrent, TV catch up apps and established sources such as YouTube offering content without legal certainty,” it adds.

But while several methods of obtaining free TV content online are highlighted in the report, none achieve as much attention as IPTV – commonly known as Kodi with illicit third-party addons.

In her report preamble, Minister for Intellectual Property Baroness Neville-Rolfe describes anti-IPTV collaboration between the Federation Against Copyright Theft, Trading Standards, and the Police, as one of the year’s operational successes. Indeed, FACT say anti-IPTV work is now their top priority.

Federation Against Copyright Theft

“We have prioritised an emerging threat to the audiovisual industry, internet protocol TV (IPTV) boxes,” FACT write.

“In their original form, these boxes are legitimate. However, with the use of apps and add-ons, they allow users to access copyright infringing material, from live TV and sports, to premium pay-for channels and newly released films. Once configured these boxes are illegal.”

FACT say they are concentrating on two areas – raising awareness in the industry and elsewhere while carrying out enforcement and disruption operations.

“In the last year FACT has worked with a wide range of partners and law enforcement bodies to tackle individuals and disrupt businesses selling illegal IPTV boxes. Enforcement action has been widespread across the UK with numerous ongoing investigations,” FACT note.

Overall, FACT say that 70% of the public complaints they receive relate to online copyright infringement. More than a quarter of all complaints now relate to IPTV and 50% of the anti-piracy group’s current investigations involve IPTV boxes.


British Phonographic Industry (BPI)

In their submission to the report, the BPI cite three key areas of concern – online piracy, physical counterfeiting, and Internet-enabled sales of infringing physical content. The former is their top priority.

“The main online piracy threats to the UK recorded music industry at present come from BitTorrent networks, MP3 aggregator sites, cyberlockers, unauthorised streaming sites, stream ripping sites and pirate sites accessed via mobile devices,” the BPI writes.

“Search engines – predominantly Google – also continue to provide millions of links to infringing content and websites that are hosted by non-compliant operators and hosts that cannot be closed down have needed to be blocked in the UK under s.97A court orders (website blocking).”

The BPI notes that between January 2015 and March 2016, it submitted more than 100 million URL takedowns to Google and Bing. Counting all notices since 2011 when the BPI began the practice, the tally now sits at 200 million URLs.

“These astronomic numbers demonstrate the large quantity of infringing content that is available online and which is easily accessible to search engine users,” the BPI says.

On the web-blocking front, the BPI says it now has court orders in place to block 63 pirate sites and more than 700 related URLs, IP addresses and proxies.

“Site blocking is proving a successful strategy, and the longer the blocks are in place, the more effective they tend to be. The latest data available shows that traffic to sites blocked for over one year has reduced by an average of around 80%; with traffic to sites blocked for less than a year reduced by an average of around 50%,” the BPI adds.

Infringement warnings for Internet subscribers

The Get it Right campaign is an educational effort to advise the public on how to avoid pirate sites and spend money on genuine products. The campaign has been somewhat lukewarm thus far, but the sting in the tail has always been the threat of copyright holders sending warnings to Internet pirates.

To date, nothing has materialized on that front but hidden away on page 51 of the report is a hint that something might happen soon.

“A further component of the ‘Get it Right’ campaign is a subscriber alert programme that will, starting by the end of 2016, advise ISPs’ residential subscribers when their accounts are believed to have been used to infringe copyright,” the report reads.

“Account holders will receive an Alert from their ISP, advising them that unlawful uploading of a copyright content file may have taken place on their internet connection and offering advice on where to find legitimate sources of content.”

Overall, the tone of the report suggests a huge threat from IP crime but one that’s being effectively tackled by groups such as FACT, BPI, the Police Intellectual Property Crime Unit, and various educational initiatives. Only time will tell if next year’s report will retain the optimism.

The full report can be downloaded here (pdf)

Source: TF, for the latest info on copyright, file-sharing, torrent sites and ANONYMOUS VPN services.

Some technical notes on the PlayPen case

Post Syndicated from Robert Graham original http://blog.erratasec.com/2016/09/some-technical-notes-on-playpen-case.html

In March of 2015, the FBI took control of a Tor onion child porn website (“PlayPen”), then used an 0day exploit to upload malware to visitors’ computers in order to identify them. There is some controversy over the warrant they used, and over government mass hacking in general. However, much of the discussion misses some technical details, which I thought I’d discuss here.

IP address
In a post on the case, Orin Kerr claims:
retrieving IP addresses is clearly a search

He is wrong, at least, in the general case. Uploading malware to gather other things (hostname, username, MAC address) is clearly a search. But discovering the IP address is a different thing.
Today’s homes contain many devices behind a single router. The home has only one public IP address, that of the router. All the other devices have local IP addresses. The router then does network address translation (NAT) in order to convert outgoing traffic to all use the public IP address.
The FBI sought the public IP address of the NAT/router, not the local IP address of the perp’s computer. The malware (“NIT”) didn’t search the computer for the IP address. Instead the NIT generated network traffic, destined to the FBI’s computers. The FBI discovered the suspect’s public IP address by looking at their own computers.
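This distinction can be made concrete. The sketch below (a toy localhost server for illustration, not anything the FBI ran) shows that a server learns a peer’s address purely from the packets that arrive at it, without inspecting the client machine at all:

```python
import socket
import socketserver
import threading

class PeerLogger(socketserver.BaseRequestHandler):
    """Records the source address of every incoming connection."""
    seen = []

    def handle(self):
        # self.client_address is the address as the server observes it:
        # for a NATed client this would be the router's public IP, never
        # the 192.168.x.x address of the machine behind it.
        PeerLogger.seen.append(self.client_address[0])
        self.request.sendall(b"ok")

server = socketserver.TCPServer(("127.0.0.1", 0), PeerLogger)
port = server.server_address[1]
threading.Thread(target=server.serve_forever, daemon=True).start()

# A "client" connects; the server learns only the address the packets
# arrive from.
with socket.create_connection(("127.0.0.1", port)) as conn:
    conn.recv(16)

server.shutdown()
server.server_close()
print(PeerLogger.seen)
```

The observer’s log contains whatever address the network presented, which for a home user is the router’s public address.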
Historically, there have been similar ways of getting this IP address (from a Tor hidden user) without “hacking”. In the past, Tor used to leak DNS lookups, which would often lead to the user’s ISP, or to the user’s IP address itself. Another technique would be to provide rich content files (like PDFs) or video files that the user would have to download to view, and which would then contact the Internet (reaching the FBI’s computers) directly, bypassing Tor.
Since the Fourth Amendment is about where the search happens, and not what is discovered, it’s not a search to find the IP address in packets arriving at FBI servers. How the FBI discovered the IP address may be a search (running malware on the suspect’s computer), but the public IP address itself doesn’t necessarily mean a search happened.

Of course, uploading malware just to transmit packets to an FBI server, then reading the IP address from those packets, is still problematic. It’s gotta be something that requires a warrant, even though it’s not precisely the malware searching the machine for its IP address.

In any event, if not for the IP address, then PlayPen searches still happened for the hostname, username, and MAC address. Imagine the FBI gets a search warrant, shows up at the suspect’s house, and finds no child porn. They then look at the WiFi router, and find that the suspected MAC address is indeed connected. They then use other tools to find that the device with that MAC address is located in the neighbor’s house — who has been piggybacking off the WiFi.
It’s a pre-crime warrant (#MinorityReport)
The warrant allows the exploit/malware/search to be used whenever somebody logs in with a username and password.
The key thing here is that the warrant includes people who have not yet created an account on the server at the time the warrant is written. They will connect, create an account, log in, then start accessing the site.
In other words, the warrant includes people who have never committed a crime when the warrant was issued, but who first commit the crime after the warrant. It’s a pre-crime warrant. 
Sure, it’s possible in any warrant to catch pre-crime. For example, a warrant for a drug dealer may also catch a teenager making their first purchase of drugs. But this seems quantitatively different. It’s not targeting the known/suspected criminal — it’s targeting future criminals.
This could easily be solved by limiting the warrant to only accounts that have already been created on the server.
It’s more than an anticipatory warrant

People keep saying it’s an anticipatory warrant, as if this explains everything.

I’m not a lawyer, but even I can see that this explains only that the warrant anticipates future probable cause. “Anticipatory warrant” doesn’t explain that the warrant also anticipates future place to be searched. As far as I can tell, “anticipatory place” warrants don’t exist and are a clear violation of the Fourth Amendment. It makes it look like a “general warrant”, which the Fourth Amendment was designed to prevent.

Orin’s post includes some “unknown place” examples — but those specify something else in particular. A roving wiretap names a person, and the “place” is whatever phone they use. In contrast, this PlayPen warrant names no person. Orin thinks that the problem may be that more than one person is involved, but he is wrong. A warrant can (presumably) name multiple people, or you can have multiple warrants, one for each person. Instead, the problem here is that no person is named. It’s not “Rob’s computer”, it’s “the computer of whoever logs in”. Even if the warrant were ultimately for a single person, it’d still be problematic because the person is not identified.
Orin cites another case, where the FBI places a beeper into a package in order to track it. The place, in this case, is the package. Again, this is nowhere close to this case, where no specific/particular place is mentioned, only a type of place. 
This could easily have been resolved. Most accounts were created before the warrant was issued. The warrant could simply have listed all the usernames, saying the computers of those using these accounts are the places to search. It’s a long list of usernames (1,500?), but if you can’t include them all in a single warrant, in this day and age of automation, I’d imagine you could easily create 1,500 warrants.
It’s malware

As a techy, the name for what the FBI did is “hacking”, and the name for their software is “malware” not “NIT”. The definitions don’t change depending upon who’s doing it and for what purpose. That the FBI uses weasel words to distract from what it’s doing seems like a violation of some sort of principle.

I am not a lawyer, I am a revolutionary. I care less about precedent and more about how a Police State might abuse technology. That a warrant can be issued whose condition is something like “whoever logs into the server” seems like a scary potential for abuse. That a warrant can be designed to catch pre-crime seems even scarier, like science fiction. That a warrant might not be issued for something called “malware”, but would be issued for something called “NIT”, scares me the most.
This warrant could easily have been narrower. It could have listed all the existing account holders. It could’ve been even narrower, for account holders where the server logs prove they’ve already downloaded child porn.
Even then, we need to be worried about FBI mass hacking. I agree that FBI has good reason to keep the 0day secret, and that it’s not meaningful to the defense. But in general, I think courts should demand an overabundance of transparency — the police could be doing something nefarious, so the courts should demand transparency to prevent that.

Beware: Attribution & Politics

Post Syndicated from Elizabeth Wharton original http://blog.erratasec.com/2016/09/beware-attribution-politics.html

tl;dr – Digital location data can be inherently wrong and it can be spoofed. Blindly assuming that it is accurate can make an ass out of you on twitter and when regulating drones.    

Guest contributor and friend of Errata Security Elizabeth Wharton (@LawyerLiz) is an attorney and host of the technology-focused weekly radio show “Buzz Off with Lawyer Liz” on America’s Web Radio (listen live  each Wednesday, 2-3:00pm eastern; find  prior podcasts here or via iTunes – Lawyer Liz) This post is merely her musings and not legal advice.

Filtering through various campaign and debate analysis on social media, a tweet caught my eye. The message itself was not the concern, and the underlying image has since been determined to be fake. Rather, I was stopped by the 140-character tweet’s absolute certainty that internet user location data is infallible. The author presented a data map as proof without question, caveat, or other investigation. Boom, mic drop – attribution!

According to the tweeting pundit, “Russian trollbots” are behind the #TrumpWon hashtag trending on Twitter.

The proof? The twitter post claims that Trendsmap showed the initial hashtag tweets as originating from accounts located in Russia. Within the first hour the tweet and accompanying map graphic was “liked” 1,400 times and retweeted 1,495 times. A gotcha moment, because a pew-pew map showed that the #TrumpWon hashtag originated from Twitter accounts located in Russia.

Except, not so fast. First, Trendsmap has since clarified that the map and data in the tweet above are not theirs (the Washington Post details the faked data/map). Moreover, location data is tricky. According to the Trendsmap FAQ page, they use the location provided in a user’s profile and GeoIP provided by Google. Google’s GeoIP is crafted using a proprietary system and other databases such as MaxMind. IP mapping is not an exact art. Kashmir Hill, editor of Fusion’s Real Future, and David Maynor delved into the issues and inaccuracies of IP mapping earlier this year. Kashmir wrote extensively on their findings and on how phantom IP addresses and MaxMind’s use of randomly selected default locations created digital hells for individuals all over the country – Internet Mapping Glitch Turned Random Farm into Digital Hell.
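The default-location failure is easy to reproduce in miniature. The toy lookup below (made-up ranges and a made-up default, nothing like MaxMind’s real database) shows how any IP the table cannot resolve silently gets mapped to one arbitrary place:

```python
import ipaddress

# Toy GeoIP table with made-up documentation ranges; real products
# use far larger proprietary databases.
GEO_DB = {
    "203.0.113.0/24": ("Moscow", "RU"),
    "198.51.100.0/24": ("London", "GB"),
}
# Lookups the database cannot resolve fall back to one coarse default
# location -- the behaviour behind the "random farm" story above.
DEFAULT_LOCATION = ("somewhere in the US", "US")

def geo_lookup(ip: str):
    """Map an IP to a (place, country) pair, falling back to a default."""
    addr = ipaddress.ip_address(ip)
    for cidr, location in GEO_DB.items():
        if addr in ipaddress.ip_network(cidr):
            return location
    return DEFAULT_LOCATION

print(geo_lookup("203.0.113.7"))  # present in the table
print(geo_lookup("192.0.2.55"))   # unknown, silently gets the default
```

Every unresolvable address in the world “lives” at the default location, which is exactly how one farm ended up blamed for millions of strangers’ traffic.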

Reliance on such mapping and location information as an absolute has tripped up law enforcement and is poised to trip up the drone industry. Certain lawmakers like to point to geofencing and other location applications as security and safety cure-all solutions. Sen. Schumer (D-N.Y.) previously included geofencing as a key element of his 2015 drone safety bill. Geofencing as a safety measure was mentioned during Tuesday’s U.S. House Small Business Committee hearing on Commercial Drone Operations. With geofencing, the drone is programmed to prohibit operations above a certain height or to keep out of certain locations. Attempt to fly in a prohibited area and the aircraft will automatically shut down. Geofencing relies on location data, including geospatial data collected from a variety of sources. As seen with GeoIP, data can be wrong. Additionally, the data must be interpreted and analyzed by the aircraft’s software systems. Aircraft systems are not built with security first; in some cases, basic systems security measures have been completely overlooked. With mandatory geofencing, wrong data or spoofed (hacked) data can ground the aircraft.
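The failure mode is easy to see in code. A hypothetical geofence check (illustrative coordinates and thresholds, not any vendor’s implementation) grounds the aircraft based solely on the position it is told, so a spoofed fix inside a no-fly zone is as grounding as a real one:

```python
import math

def inside_geofence(lat, lon, fence_lat, fence_lon, radius_m):
    """Crude flat-earth distance check; real systems use geodesic math."""
    # Roughly 111,320 m per degree of latitude; scale longitude by
    # cos(latitude) to approximate east-west distance.
    dx = (lon - fence_lon) * 111_320 * math.cos(math.radians(fence_lat))
    dy = (lat - fence_lat) * 111_320
    return math.hypot(dx, dy) <= radius_m

def may_fly(reported_lat, reported_lon, no_fly_zones):
    # The autopilot trusts whatever position it is fed. Spoof the GPS
    # fix into a no-fly zone and a mandatory geofence grounds the
    # aircraft, even if it is actually nowhere near the zone.
    return not any(inside_geofence(reported_lat, reported_lon, *zone)
                   for zone in no_fly_zones)

# Hypothetical 5 km no-fly circle.
zones = [(38.8977, -77.0365, 5_000)]
print(may_fly(38.90, -77.03, zones))   # position inside the circle: False
print(may_fly(40.00, -75.00, zones))   # position well outside: True
```

The check itself is trivial; the hard, unsolved part is trusting the position data it consumes.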

Location mapping is helpful as one data point among many. Beware of attribution and laws predicated solely on information that can be inaccurate by design. One errant political tweet blaming Russian twitter users based on bad data may lead to a “Pants on Fire” fact check. Even if initially correct, a bored 400lb hacker may have spoofed the data.

(post updated to add link to “Buzz Off with Lawyer Liz Show” website and pic per Rob’s request)

ISPs Offered Service to “Protect Safe Harbor” Under DMCA

Post Syndicated from Andy original https://torrentfreak.com/isps-offered-service-to-protect-safe-harbor-under-dmca-160924/

In early August, a federal court in Virginia found Internet service provider Cox Communications liable for copyright infringements carried out by its customers.

The ISP was held liable for willful contributory copyright infringement and ordered to pay music publisher BMG Rights Management $25 million in damages.

The case was first filed in 2014 after it was alleged that Cox failed to pass on infringement notices sent to the ISP by anti-piracy outfit Rightscorp. It was determined that the ISP had also failed to take firm action against repeat infringers.

Although the decision is still open to appeal, the ruling has ISPs in the United States on their toes. None will want to fall into the same trap as Cox and are probably handling infringement complaints carefully as a result. This is where Colorado-based Subsentio wants to step in.

Subsentio specializes in helping companies meet their obligations under CALEA, the Communications Assistance for Law Enforcement Act, a wiretapping law passed in 1994. It believes these skills can also help ISPs to retain their safe harbor protection under the DMCA.

This week Subsentio launched DMCA Records Production, a service that gives ISPs the opportunity to outsource the sending and management of copyright infringement notices.

“With the average ISP receiving thousands of notices every month from owners of copyrighted content, falling behind on DMCA procedural obligations is not an option,” says Martin McDermott, Chief Operating Officer at Subsentio.

“The record award of US$25 million paid by one ISP for DMCA violations last year was a ‘wake-up call’ — service providers that fail to take this law seriously can face the same legal and financial consequences.”

Subsentio Legal Services Manager Michael Allison informs TorrentFreak that increasing levels of DMCA notices received by ISPs need to be handled effectively.

“Since content owners leverage bots to crawl the internet for copyrighted content, the volume of DMCA claims falling at the footsteps of ISPs has been on the rise. The small to mid-level ISPs receive hundreds to thousands of claims per month,” Allison says.

“This volume may be too high to add to the responsibilities of a [network operations center] or abuse team. At the same time, the volume might not constitute the hiring of a full-time employee or staff.”

Allison says that his company handles legal records production for a number of ISP clients and part of that process involves tying allegedly infringing IP addresses and timestamps to ISP subscribers.

“The logistics behind tying a target IP address and timestamp to a specific subscriber is usually an administrative and laborious process. But it’s a method we’re familiar with and it’s a procedure that’s inherent to processing any DMCA claim.”
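As a sketch of what that lookup involves (hypothetical lease records and field names, not Subsentio’s actual system), the process amounts to a time-bounded search of address assignment logs, since dynamic addressing means the same IP maps to different customers at different times:

```python
from datetime import datetime

# Hypothetical DHCP lease log: (ip, subscriber_id, lease_start, lease_end).
LEASES = [
    ("198.51.100.23", "sub-1001",
     datetime(2016, 9, 1, 8, 0), datetime(2016, 9, 3, 8, 0)),
    ("198.51.100.23", "sub-2002",
     datetime(2016, 9, 3, 8, 0), datetime(2016, 9, 5, 8, 0)),
]

def subscriber_for(ip: str, ts: datetime):
    """Return the subscriber holding `ip` at time `ts`, or None.

    The timestamp matters as much as the IP: here the same address
    belongs to two different customers two days apart.
    """
    for lease_ip, subscriber, start, end in LEASES:
        if lease_ip == ip and start <= ts < end:
            return subscriber
    return None

print(subscriber_for("198.51.100.23", datetime(2016, 9, 2, 12, 0)))
print(subscriber_for("198.51.100.23", datetime(2016, 9, 4, 12, 0)))
```

Get the timestamp wrong, or ignore it, and the claim lands on the wrong subscriber entirely.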

Allison says that the Cox decision put ISPs on notice that they must have a defined policy addressing DMCA claims, including provisions for dealing with repeat infringers, up to and including termination. He believes the Subsentio system can help ISPs achieve those goals.

“[Our system] automatically creates a unique case for each legal request received, facilitates document generation, notates actions taken on the case, and allows for customized reporting via any number of variables tied to the case,” Allison explains.

“By creating a case for each DMCA claim received, we can track ISP subscribers for repeat offenses, apply escalation measures when needed, and alert ISPs when a subscriber has met the qualifications for termination.”
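A repeat-infringer tracker of the kind described reduces to a per-subscriber counter with an escalation ladder. The sketch below is illustrative only; the thresholds and action names are assumptions, not Subsentio’s policy:

```python
from collections import defaultdict

# Hypothetical escalation ladder: at N claims, take the given action.
ESCALATION = [(1, "email warning"), (3, "throttle"), (5, "terminate")]

class ComplaintTracker:
    def __init__(self):
        self.counts = defaultdict(int)

    def record(self, subscriber: str) -> str:
        """Log one DMCA claim and return the action now warranted."""
        self.counts[subscriber] += 1
        action = "log only"
        # Walk the ladder in ascending order; the highest threshold
        # reached determines the action.
        for threshold, step in ESCALATION:
            if self.counts[subscriber] >= threshold:
                action = step
        return action

tracker = ComplaintTracker()
for _ in range(4):
    action = tracker.record("sub-1001")
print(action)                      # fourth claim
print(tracker.record("sub-1001"))  # fifth claim
```

The real value of such a system is less the counting than the audit trail: each case documents that the ISP followed its stated policy.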

TF understands that many of Subsentio’s current clients are small to mid-size regional ISPs in the United States so whether any of the big national ISPs will get involved remains to be seen. Nevertheless, it’s quite possible that the formalizing and outsourcing of subscriber warnings will lead to customers of some smaller ISPs enjoying significantly less slack than they’ve become accustomed to.

Source: TF, for the latest info on copyright, file-sharing, torrent sites and ANONYMOUS VPN services.

Two free HiveMQ plugins you don’t want to miss: Client Status and Retained Messages Query Plugin

Post Syndicated from The HiveMQ Team original http://www.hivemq.com/blog/two-free-hivemq-plugins-you-dont-want-to-miss-client-status-and-retained-messages-query-plugin/


Our friends at ART+COM just released two of their HiveMQ plugins to the public. They are available on Github and may be useful to many MQTT deployments that need to utilize HTTP as an information channel.

HiveMQ Client Status Plugin

This plugin is available at Github.

This plugin exposes an HTTP GET endpoint to retrieve the currently connected and disconnected clients with their respective client identifiers and IP addresses. It is useful if you need to quickly check which clients are connected to the broker.

It creates a new HTTP GET endpoint and returns an HTTP response like this:

[ {
  "clientId" : "another client",
  "ip" : "",
  "connected" : true
}, {
  "clientId" : "my-inactive-client",
  "connected" : false
} ]
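Consuming the endpoint from a script is straightforward. A minimal sketch, parsing the sample payload above (in a real deployment you would fetch the JSON from the plugin’s endpoint over HTTP):

```python
import json

# Sample payload in the shape the plugin returns (see above).
payload = """[ { "clientId" : "another client", "ip" : "", "connected" : true },
               { "clientId" : "my-inactive-client", "connected" : false } ]"""

clients = json.loads(payload)
# Note: disconnected clients carry no "ip" field, so filter first.
connected = [c["clientId"] for c in clients if c["connected"]]
print(connected)
```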

HiveMQ Retained Message Query Plugin

This plugin is available at Github.

This plugin allows you to query retained messages via HTTP instead of MQTT from HiveMQ. You can read about the motivation for writing this plugin by the folks at ART+COM in their blog.

In short, this plugin exposes an HTTP endpoint that expects a POST request with a query.

An example query could look like this:

{
  "topic": "path/of/the/topic",
  "depth": 0,
  "flatten": false
}

A response would look like this:

{
  "topic": "path/of/the/topic",
  "payload": "23",
  "children": [
    {
      "topic": "path/of/child/topic",
      "payload": "\"foo\""
    }
  ]
}
If you’re an MQTT.JS user, there is a small library called mqtt-topping by the ART+COM people that adds syntactic sugar for this plugin.

We hope you like the plugins as much as we do. Let us know in the comments what you think!

Malicious Torrent Network Tool Revealed By Security Company

Post Syndicated from Andy original https://torrentfreak.com/malicious-torrent-network-tool-revealed-by-security-company-160921/

More than three decades after 15-year-old high school student Rich Skrenta created the first publicly spread virus, millions of pieces of malware are being spread around the world.

Attackers’ motives are varied but these days they’re often working for financial gain. As a result, popular websites and their users are regularly targeted. Security company InfoArmor has just published a report detailing a particularly interesting threat which homes in on torrent site users.

“InfoArmor has identified a special tool used by cybercriminals to distribute malware by packaging it with the most popular torrent files on the Internet,” the company reports.

InfoArmor says the so-called “RAUM” tool is being offered via “underground affiliate networks” with attackers being financially incentivized to spread the malicious software through infected torrent files.

“Members of these networks are invited by special invitation only, with strict verification of each new member,” the company reports.

InfoArmor says that the attackers’ infrastructure has a monitoring system in place which allows them to track the latest trends in downloading, presumably so that attacks can reach the greatest numbers of victims.

“The bad actors have analyzed trends on video, audio, software and other digital content downloads from around the globe and have created seeds on famous torrent trackers using weaponized torrents packaged with malicious code,” they explain.

RAUM instances were associated with a range of malware including CryptXXX, CTB-Locker and Cerber, online-banking Trojan Dridex and password stealing spyware Pony.

“We have identified in excess of 1,639,000 records collected in the past few months from the infected victims with various credentials to online-services, gaming, social media, corporate resources and exfiltrated data from the uncovered network,” InfoArmor reveals.

What is perhaps most interesting about InfoArmor’s research is how it shines light on the operation of RAUM behind the scenes. The company has published a screenshot which claims to show the system’s dashboard, featuring infected torrents on several sites, a ‘fake’ Pirate Bay site in particular.


“Threat actors were systematically monitoring the status of the created malicious seeds on famous torrent trackers such as The Pirate Bay, ExtraTorrent and many others,” the researchers write.

“In some cases, they were specifically looking for compromised accounts of other users on these online communities that were extracted from botnet logs in order to use them for new seeds on behalf of the affected victims without their knowledge, thus increasing the reputation of the uploaded files.”


According to InfoArmor the malware was initially spread using uTorrent, although any client could have done the job. More recently, however, new seeds have been served through online servers and some hacked devices.

In some cases the malicious files continued to be seeded for more than 1.5 months. Tests by TF on the sample provided showed that most of the files listed have now been removed by the sites in question.

Completely unsurprisingly, people who use torrent sites to obtain software and games (as opposed to video and music files) are those most likely to come into contact with RAUM and associated malware. As the image below shows, Windows 7 and 10 packs and their activators feature prominently.


“All of the created malicious seeds were monitored by cybercriminals in order to prevent early detection by [anti-virus software] and had different statuses such as ‘closed,’ ‘alive,’ and ‘detected by antivirus.’ Some of the identified elements of their infrastructure were hosted in the TOR network,” InfoArmor explains.

The researchers say that RAUM is a tool used by an Eastern European organized crime group known as Black Team. They also report several URLs and IP addresses from where the team operates. We won’t publish them here but it’s of some comfort to know that between Chrome, Firefox and MalwareBytes protection, all were successfully blocked on our test machine.

InfoArmor concludes by warning users to exercise extreme caution when downloading pirated digital content. We’d go a step further and advise people to be wary of installing all software from any untrusted sources, no matter where they’re found online.

Source: TF, for the latest info on copyright, file-sharing, torrent sites and ANONYMOUS VPN services.