Tag Archives: search engines

GoMovies/123Movies Launches Anime Streaming Site

Post Syndicated from Ernesto original https://torrentfreak.com/gomovies123movies-launches-anime-streaming-site-171204/

Pirate video streaming sites are booming. Their relative ease of use through on-demand viewing makes them a viable alternative to P2P file-sharing, which traditionally dominated the piracy arena.

The popular movie streaming site GoMovies, formerly known as 123movies, is one of the most-used streaming sites. Despite the rebranding and several domain changes, it has built a steady base of millions of users over the past year and a half.

And it’s not done yet, we learn today.

The site, currently operating from the Gostream.is domain name, recently launched a new spinoff targeting anime fans. Animehub.to is currently promoted on GoMovies and the site’s operators aim to turn it into the leading streaming site for anime content.

Animehub.to

Someone connected to GoMovies told us that they’ve received a lot of requests from users to add anime content. Anime has traditionally been a large niche on file-sharing sites and the same is true on streaming platforms.

Technically speaking, GoMovies could have easily filled up the original site with anime content, but the owners prefer a different outlet.

With a separate anime site, they hope to draw in more visitors, TorrentFreak was told by an insider. For one, this makes it possible to rank better in search engines. It also allows the operators to cater specifically to the anime audience, with anime-specific categories and release schedules.

Anime copyright holders will not be pleased with the new initiative, that’s for sure, but GoMovies is not new to legal pressure.

Earlier this year the US Ambassador to Vietnam called on the local Government to criminally prosecute people behind 123movies, the previous iteration of the site. In addition, the MPAA reported the site to the US Government in its recent overview of notorious pirate sites.

Pressure or not, it appears that GoMovies has no intention of slowing down or changing its course, although we’ve heard that yet another rebranding is on the horizon.

Source: TF, for the latest info on copyright, file-sharing, torrent sites and more. We also have VPN discounts, offers and coupons

GDPR – A Practical Guide For Developers

Post Syndicated from Bozho original https://techblog.bozho.net/gdpr-practical-guide-developers/

You’ve probably heard about GDPR. The new European data protection regulation that applies to practically everyone. Especially if you are working in a big company, it’s most likely that there’s already a process for getting your systems into compliance with the regulation.

The regulation is basically a law that must be followed in all European countries, but it also applies to non-EU companies that have users in the EU. In other words, it applies even to companies that are not registered in Europe but have European customers. So that’s most companies. I will not go into yet another “12 facts about GDPR” or “7 myths about GDPR” post/whitepaper, as those are often aimed at managers or legal people. Instead, I’ll focus on what GDPR means for developers.

Why am I qualified to do that? A few reasons – I was an advisor to the deputy prime minister of an EU country, and because of that I have both been exposed to and myself drafted some legislation. I’m familiar with the “legalese” and with how the regulatory framework operates in general. I’m also a privacy advocate and I’ve written about GDPR-related topics in the past, i.e. “before it was cool” (protecting sensitive data, the right to be forgotten). And finally, I’m currently working on a project that (among other things) aims to help with covering some GDPR aspects.

I’ll try to be a bit more comprehensive this time and cover as many aspects of the regulation that concern developers as I can. And while developers will mostly be concerned about how the systems they are working on have to change, it’s not unlikely that a less informed manager storms in in late spring, realizing GDPR is going to be in force tomorrow, asking “what should we do to get our system/website compliant”.

The rights of the user/client (referred to as “data subject” in the regulation) that I think are relevant for developers are: the right to erasure (the right to be forgotten/deleted from the system), the right to restriction of processing (you still keep the data, but mark it as “restricted” and don’t touch it without further consent by the user), the right to rectification (the ability to get personal data fixed), the right to be informed (getting human-readable information, rather than long terms and conditions), the right of access (the user should be able to see all the data you have about them), and the right to data portability (the user should be able to get a machine-readable export of all their data).

Additionally, the relevant basic principles are: data minimization (one should not collect more data than necessary), integrity and confidentiality (all security measures to protect data that you can think of + measures to guarantee that the data has not been inappropriately modified).

Even further, the regulation requires certain processes to be in place within an organization (of more than 250 employees or if a significant amount of data is processed), and those include keeping a record of all types of processing activities carried out, including transfers to processors (3rd parties), which includes cloud service providers. None of the other requirements of the regulation have an exception depending on the organization size, so “I’m small, GDPR does not concern me” is a myth.

It is important to know what “personal data” is. Basically, it’s every piece of data that can be used to uniquely identify a person or data that is about an already identified person. It’s data that the user has explicitly provided, but also data that you have collected about them from either 3rd parties or based on their activities on the site (what they’ve been looking at, what they’ve purchased, etc.)

Having said that, I’ll list a number of features that will have to be implemented and some hints on how to do that, followed by some do’s and don’ts.

  • “Forget me” – you should have a method that takes a userId and deletes all personal data about that user (in case it has been collected on the basis of consent, and not due to contract enforcement or legal obligation). It is actually useful for integration tests to have that feature (to clean up after the test), but it may be hard to implement depending on the data model. In a regular data model, deleting a record may be easy, but some foreign keys may be violated. That means you have two options – either make sure you allow nullable foreign keys (for example an order usually has a reference to the user that made it, but when the user requests their data be deleted, you can set the userId to null), or make sure you delete all related data (e.g. via cascades). The latter may not be desirable, e.g. if the order is used to track available quantities or for accounting purposes. It’s a bit trickier for event-sourcing data models, or in extreme cases, ones that include some sort of blockchain/hash chain/tamper-evident data structure. With event sourcing you should be able to remove a past event and re-generate intermediate snapshots. For blockchain-like structures – be careful what you put in there and avoid putting personal data of users in them. There is an option to use a chameleon hash function, but that’s suboptimal. Overall, you must constantly think of how you can delete the personal data. And “our data model doesn’t allow it” isn’t an excuse.
  • Notify 3rd parties for erasure – deleting things from your system may be one thing, but you are also obligated to inform all third parties that you have pushed that data to. So if you have sent personal data to, say, Salesforce, HubSpot, Twitter, or any cloud service provider, you should call an API of theirs that allows for the deletion of personal data. If you are such a provider, obviously, your “forget me” endpoint should be exposed. Calling the 3rd party APIs to remove data is not the full story, though. You also have to make sure the information does not appear in search results. Now, that’s tricky, as Google doesn’t have an API for removal, only a manual process. Fortunately, it’s only about public profile pages that are crawlable by Google (and other search engines, okay…), but you still have to take measures. Ideally, you should make the personal data page return a 404 HTTP status, so that it can be removed.
  • Restrict processing – in your admin panel where there’s a list of users, there should be a button “restrict processing”. The user settings page should also have that button. When clicked (after reading the appropriate information), it should mark the profile as restricted. That means it should no longer be visible to the backoffice staff, or publicly. You can implement that with a simple “restricted” flag in the users table and a few if-clauses here and there.
  • Export data – there should be another button – “export data”. When clicked, the user should receive all the data that you hold about them. What exactly that data is depends on the particular use case. Usually it’s at least the data that you delete with the “forget me” functionality, but it may include additional data (e.g. the orders the user has made may not be deleted, but should be included in the dump). The structure of the dump is not strictly defined, but my recommendation would be to reuse schema.org definitions as much as possible, for either JSON or XML. If the data is simple enough, a CSV/XLS export would also be fine. Sometimes data export can take a long time, so the button can trigger a background process, which then notifies the user via email when their data is ready (Twitter, for example, does that already – you can request all your tweets and you get them after a while).
  • Allow users to edit their profile – this seems an obvious rule, but it isn’t always followed. Users must be able to fix all data about them, including data that you have collected from other sources (e.g. using “login with Facebook” you may have fetched their name and address). Rule of thumb – all the fields in your “users” table should be editable via the UI. Technically, rectification can be done via a manual support process, but that’s normally more expensive for a business than just having the form to do it. There is one other scenario, however, where you’ve obtained the data from other sources (i.e. the user hasn’t provided their details to you directly). In that case there should still be a page where they can identify themselves somehow (via email and/or SMS confirmation) and get access to the data about them.
  • Consent checkboxes – this is in my opinion the biggest change that the regulation brings. “I accept the terms and conditions” would no longer be sufficient to claim that the user has given their consent for processing their data. So, for each particular processing activity there should be a separate checkbox on the registration (or user profile) screen. You should keep these consent checkboxes in separate columns in the database, and let the users withdraw their consent (by unchecking these checkboxes from their profile page – see the previous point). Ideally, these checkboxes should come directly from the register of processing activities (if you keep one). Note that the checkboxes should not be preselected, as this does not count as “consent”.
  • Re-request consent – if the consent users have given was not clear (e.g. if they simply agreed to terms & conditions), you’d have to re-obtain that consent. So prepare a functionality for mass-emailing your users to ask them to go to their profile page and check all the checkboxes for the personal data processing activities that you have.
  • “See all my data” – this is very similar to the “Export” button, except data should be displayed in the regular UI of the application rather than an XML/JSON format. For example, Google Maps shows you your location history – all the places that you’ve been to. It is a good implementation of the right to access. (Though Google is very far from perfect when privacy is concerned)
  • Age checks – you should ask for the user’s age, and if the user is a child (below 16), you should ask for parental permission. There’s no clear way to do that, but my suggestion is to introduce a flow where the child specifies the email of a parent, who can then confirm. Obviously, children will just cheat with their birthdate, or provide a fake parent email, but you will most likely have done your job according to the regulation (this is one of the “wishful thinking” aspects of the regulation).
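The “forget me”, “restrict processing” and “export data” features above can be sketched as follows. This is a minimal illustration, not a complete solution: the schema (a users table plus an orders table with a nullable user_id foreign key) and all names are hypothetical.

```python
import json
import sqlite3

def forget_me(conn: sqlite3.Connection, user_id: int) -> None:
    """Erase personal data while keeping non-personal records.
    Orders survive for accounting purposes, but their link to the
    user is severed by nulling the (nullable) foreign key."""
    conn.execute("UPDATE orders SET user_id = NULL WHERE user_id = ?", (user_id,))
    conn.execute("DELETE FROM users WHERE id = ?", (user_id,))
    conn.commit()

def restrict_processing(conn: sqlite3.Connection, user_id: int,
                        restricted: bool = True) -> None:
    """Mark the profile as restricted; every query that shows user data
    (backoffice or public) must then filter on this flag."""
    conn.execute("UPDATE users SET restricted = ? WHERE id = ?",
                 (int(restricted), user_id))
    conn.commit()

def export_data(conn: sqlite3.Connection, user_id: int) -> str:
    """Dump the user's personal data in a machine-readable format,
    loosely following the schema.org Person vocabulary."""
    row = conn.execute("SELECT id, name, email FROM users WHERE id = ?",
                       (user_id,)).fetchone()
    return json.dumps({"@type": "Person", "identifier": row[0],
                       "name": row[1], "email": row[2]})
```

In a real system the export would cover every table holding personal data, and “forget me” would also fan out to the 3rd-party notification step described above.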

Now some “do’s”, which are mostly about the technical measures needed to protect personal data. They may be more “ops” than “dev”, but often the application also has to be extended to support them. I’ve listed most of what I could think of in a previous post.

  • Encrypt the data in transit. That means that communication between your application layer and your database (or your message queue, or whatever component you have) should be over TLS. The certificates could be self-signed (and possibly pinned), or you could have an internal CA. Different databases have different configurations – just google “X encrypted connections”. Some databases need gossiping among the nodes; that should also be configured to use encryption.
  • Encrypt the data at rest – this again depends on the database (some offer table-level encryption), but can also be done on machine-level. E.g. using LUKS. The private key can be stored in your infrastructure, or in some cloud service like AWS KMS.
  • Encrypt your backups – kind of obvious
  • Implement pseudonymisation – the most obvious use case is when you want to use production data for the test/staging servers. You should change the personal data to some “pseudonym”, so that people cannot be identified. The same applies when you push data out for machine learning purposes (whether to third parties or not). Technically, that could mean that your User object has a “pseudonymize” method which applies hash+salt/bcrypt/PBKDF2 to some of the data that can be used to identify a person.
  • Protect data integrity – this is a very broad thing, and could simply mean “have authentication mechanisms for modifying data”. But you can do something more, even as simple as a checksum, or a more complicated solution (like the one I’m working on). It depends on the stakes, on the way data is accessed, on the particular system, etc. The checksum can be in the form of a hash of all the data in a given database record, which should be updated each time the record is updated through the application. It isn’t a strong guarantee, but it is at least something.
  • Have your GDPR register of processing activities in something other than Excel – Article 30 says that you should keep a record of all the types of activities that you use personal data for. That sounds like bureaucracy, but it may be useful – you will be able to link certain aspects of your application with that register (e.g. the consent checkboxes, or your audit trail records). It wouldn’t take much time to implement a simple register, but the business requirements for it should come from whoever is responsible for the GDPR compliance. But you can advise them that having it in Excel won’t make it easy for you as a developer (imagine having to fetch the Excel file internally, so that you can parse it and implement a feature). Such a register could be a microservice/small application deployed separately in your infrastructure.
  • Log access to personal data – every read operation on a personal data record should be logged, so that you know who accessed what and for what purpose
  • Register all API consumers – you shouldn’t allow anonymous API access to personal data. I’d say you should request the organization name and contact person for each API user upon registration, and add those to the data processing register. Note: some have treated article 30 as a requirement to keep an audit log. I don’t think it is saying that – instead it requires 250+ companies to keep a register of the types of processing activities (i.e. what you use the data for). There are other articles in the regulation that imply that keeping an audit log is a best practice (for protecting the integrity of the data as well as to make sure it hasn’t been processed without a valid reason)
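The pseudonymisation and checksum-style integrity measures above could look roughly like this. It is a sketch under my own assumptions: the function names are illustrative, and a salted PBKDF2 hash is just one of the options the text mentions.

```python
import hashlib

def pseudonymize(value: str, salt: bytes) -> str:
    """Replace an identifying field (name, email, ...) with a salted
    PBKDF2 hash, so the person cannot be identified in test/staging
    data or in data shipped out for machine learning."""
    return hashlib.pbkdf2_hmac("sha256", value.encode(), salt, 100_000).hex()

def record_checksum(record: dict) -> str:
    """Weak integrity guarantee: a hash over all fields of a record,
    to be stored alongside it and recomputed on every legitimate
    update through the application."""
    canonical = "|".join(f"{key}={record[key]}" for key in sorted(record))
    return hashlib.sha256(canonical.encode()).hexdigest()
```

As the text says, a checksum like this is not a strong guarantee (anyone who can modify the record can recompute it), but it is at least something, and it catches accidental corruption.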

Finally, some “don’ts”.

  • Don’t use data for purposes that the user hasn’t agreed with – that’s supposed to be the spirit of the regulation. If you want to expose a new API to a new type of clients, or you want to use the data for some machine learning, or you decide to add ads to your site based on users’ behaviour, or sell your database to a 3rd party – think twice. I would imagine your register of processing activities could have a button to send notification emails to users to ask them for permission when a new processing activity is added (or if you use a 3rd party register, it should probably give you an API). So upon adding a new processing activity (and adding that to your register), mass email all users from whom you’d like consent.
  • Don’t log personal data – getting rid of personal data from log files (especially if they are shipped to a 3rd party service) can be tedious or even impossible. So log just identifiers, if needed. And make sure old log files are cleaned up, just in case.
  • Don’t put fields on the registration/profile form that you don’t need – it’s always tempting to just throw in as many fields as the usability person/designer agrees on, but unless you absolutely need the data for delivering your service, you shouldn’t collect it. Names you should probably always collect, but unless you are delivering something, a home address or phone number is unnecessary.
  • Don’t assume 3rd parties are compliant – you are responsible if there’s a data breach in one of the 3rd parties (e.g. “processors”) to which you send personal data. So before you send data via an API to another service, make sure they have at least a basic level of data protection. If they don’t, raise a flag with management.
  • Don’t assume having ISO XXX makes you compliant – information security standards and even personal data standards are a good start and they will probably cover 70% of what the regulation requires, but they are not sufficient – most of the things listed above are not covered in any of those standards.
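One way to enforce the “don’t log personal data” rule is a logging filter that redacts identifying fields before anything is written. A minimal sketch that only handles e-mail addresses (the class name and pattern are my own):

```python
import logging
import re

# Rough e-mail pattern; good enough for redaction, not for validation.
EMAIL_RE = re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+")

class RedactPersonalData(logging.Filter):
    """Replace e-mail addresses in log messages with a placeholder,
    so personal data never reaches the log files (or a 3rd-party
    log shipping service)."""

    def filter(self, record: logging.LogRecord) -> bool:
        record.msg = EMAIL_RE.sub("[redacted-email]", str(record.msg))
        return True  # keep the (now sanitized) record
```

Attach it with `logger.addFilter(RedactPersonalData())`; a production version would also cover names, phone numbers and anything else your register of processing activities lists as personal data.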

Overall, the purpose of the regulation is to make you take conscious decisions when processing personal data. It imposes best practices in a legal way. If you follow the above advice and design your data model, storage, data flows, and API calls with data protection in mind, then you shouldn’t worry about the huge fines that the regulation prescribes – they are for extreme cases, like Equifax for example. Regulators (data protection authorities) will most likely have some checklists that you’d have to fit into somehow, but if you follow best practices, that shouldn’t be an issue.

I think all of the above features can be implemented in a few weeks by a small team. Be suspicious when a big vendor offers you a generic plug-and-play “GDPR compliance” solution. GDPR is not just about the technical aspects listed above – it does have organizational/process implications. But also be suspicious if a consultant claims GDPR is complicated. It’s not – it relies on a few basic principles that are in fact best practices anyway. Just don’t ignore them.

The post GDPR – A Practical Guide For Developers appeared first on Bozho's tech blog.

Google Wipes 786 Pirate Sites From Search Results

Post Syndicated from Andy original https://torrentfreak.com/google-wipes-786-pirate-sites-from-search-results-171121/

Late July, President Vladimir Putin signed a new law which requires local telecoms watchdog Rozcomnadzor to maintain a list of banned domains while identifying sites, services, and software that provide access to them.

Rozcomnadzor is required to contact the operators of such services with a request for them to block banned resources. If they do not, then they themselves will become blocked. In addition, search engines are also required to remove blocked resources from their search results, in order to discourage people from accessing them.

Removing entire domains from search results is a controversial practice and something which search providers have long protested against. They argue that it’s not their job to act as censors and in any event, content remains online, whether it’s indexed by search or not.

Nevertheless, on October 1 the new law (“On Information, Information Technologies and Information Protection”) came into effect and it appears that Russia’s major search engines have been very busy in its wake.

According to a report from Rozcomnadzor, search providers Google, Yandex, Mail.ru, Rambler, and Sputnik have stopped presenting information in results for sites that have been permanently blocked by ISPs following a decision by the Moscow City Court.

“To date, search engines have stopped access to 786 pirate sites listed in the register of Internet resources which contain content distributed in violation of intellectual property rights,” the watchdog reports.

The domains aren’t being named by Rozcomnadzor or the search engines but are almost definitely those sites that have had complaints filed against them at the City Court on multiple occasions but have failed to take remedial action. Also included will be mirror and proxy sites which either replicate or facilitate access to these blocked and apparently defiant domains.

The news comes in the wake of reports earlier this month that Russia is considering a rapid site blocking mechanism that could see domains rendered inaccessible within 24 hours, without any parties having to attend a court hearing.

While it’s now extremely clear that Russia has one of the most aggressive site-blocking regimes in the world, with both ISPs and search engines required to prevent access to infringing sites, it’s uncertain whether these measures will be enough to tackle rampant online piracy.

New research published in October by Group-IB revealed that despite thousands of domains being blocked, last year the market for pirate video in Russia more than doubled.


Danes Deploy ‘Disruption Machine’ to Curb Online Piracy

Post Syndicated from Ernesto original https://torrentfreak.com/danes-deploy-disruption-machine-to-curb-online-piracy-171119/

Over the years copyright holders have tried a multitude of measures to curb copyright infringement, with varying levels of success.

By now it’s well known that blocking or even shutting down a pirate site doesn’t help much. As long as there are alternatives, people will simply continue to download or stream elsewhere.

Increasingly, major entertainment industry companies are calling for a broader and more coordinated response. They would like to see ISPs, payment processors, advertisers, search engines, and social media companies assisting in their anti-piracy efforts. Voluntarily, or even with a legal incentive, if required.

In Denmark, local anti-piracy group RettighedsAlliancen has a similar goal and they are starting to make progress. The outfit is actively building a piracy “disruption machine” that tackles the issue from as many sides as it can.

The disruption machine is built around an Infringing Website List (IWL), which is not related to a similarly-named initiative from the UK police. This list is made up of pirate sites that have been found to facilitate copyright infringement by a Danish court.

“The IWL is a part of the disruption machine that RettighedsAlliancen has developed in collaboration with many stakeholders in the online community,” the group’s CEO Maria Fredenslund tells TorrentFreak.

The stakeholders include major ISPs, but also media companies, MasterCard, Google, and Microsoft. With help from the local government they signed a Memorandum of Understanding. Their goal is to make the internet a safe and legitimate platform for consumers and businesses while limiting copyright infringement and associated crime.

MoU signees

There are currently twelve court orders on which the list is based and two more are expected to come in before the end of the year. As a result, approximately 600 pirate sites are on the IWL, making them harder to find.

Every time a new court order is handed down, RettighedsAlliancen distributes an updated list to its network of stakeholders.

“Currently, all major ISPs in Denmark have agreed to implement the IWL in their systems based on a joint Code of Conduct. This means that all the ISPs jointly will block their customers access to infringing services thus amplifying the impact of a blocking order by magnitudes,” Fredenslund explains.

Thus far ISPs are actively blocking 100 pirate sites, resulting in significant traffic drops. The rest of the list has yet to be implemented.

The IWL is also used in the online advertising industry, where several major advertising brokers have signed a joint agreement not to show advertising on these sites. This shuts off part of the revenue streams to pirate sites which, in theory, should make them less profitable.

A similar approach is being taken by major payment providers, who are preventing known pirate sites from processing transactions through their services. Every company has its own measures, but the overlapping goal is to frustrate pirate sites and reduce copyright infringement.

The Disruption Machine

It’s interesting to see that Google is listed as a partner since they don’t support general website blockades. However, Google said that it would demote sites on the IWL in its search results.

While these are all positive developments, according to the anti-piracy group, it’s just the start. RettighedsAlliancen also believes other tools and services could join in. Browser plugins could use the IWL to identify illegal sites, for example, and the options are endless.

“Likewise, large companies, institutions, and public authorities are also well-suited to implement the IWL in their local networks. For example, to prevent students from accessing illegal content while at school or university,” Fredenslund says.

“Looking further ahead, social media platforms such as Facebook are used to a great extent to consume content online and it is therefore obvious that they should also incorporate the IWL in their systems to prevent their users from harm and preventing copyright infringement.”

This model is not completely unique, of course. We’ve seen several elements being implemented in other countries as well, and copyright holders have been pushing voluntary agreements for quite some time now.

What’s new, however, is that it’s clearly defined as a strategy by the Danish group. And by labeling the strategy as a “disruption machine” it already sounds effective, which is part of the job.


Sci-Hub Won’t Be Blocked by US ISPs Anytime Soon

Post Syndicated from Ernesto original https://torrentfreak.com/sci-hub-wont-be-blocked-by-us-isps-anytime-soon-171111/

Sci-Hub, often referred to as the “Pirate Bay of Science,” hasn’t had a particularly good run in US courts so far.

Following a $15 million defeat against Elsevier in June, the American Chemical Society won a default judgment of $4.8 million in copyright damages late last week.

In addition, the publisher was granted an unprecedented injunction, requiring various third-party services to stop providing access to the site.

The order specifically mentions domain registrars and hosting companies, but also search engines and ISPs, although only those who are in “active concert or participation” with the site. This order sparked fears that Google, Comcast, and others would be ordered to take action, but that’s not the case.

After the news broke, ACS issued a press release clarifying that it would not go after search engines and ISPs when they are not in “active participation” with Sci-Hub. The problem is that this term can be interpreted quite broadly, leaving plenty of room for uncertainty.

Luckily, ACS Director Glenn Ruskin was willing to provide more clarity. He stressed that search engines and ISPs won’t be targeted for simply linking users to Sci-Hub. Companies that host the content are a target though.

“The court’s affirmative ruling does not apply to search engines writ large, but only to those entities who have been in active concert or participation with Sci-Hub, such as websites that host ACS content stolen by Sci-Hub,” Ruskin said.

When we asked whether this means that ISPs such as Comcast are not likely to be targeted, the answer was affirmative.

“That is correct, unless the internet service provider has been in active concert or participation with SciHub. Simply linking to SciHub does not rise to be in active concert or participation,” Ruskin clarified.

The above suggests that ACS will go after domain name registrars, hosting companies, and perhaps Cloudflare, but not any further. Still, even if that’s the case there is cause for worry among several digital rights activists.

The Electronic Frontier Foundation believes that this type of order sets a dangerous precedent. The concept of “active concert or participation” should only cover close associates and co-conspirators, not everyone who provides a service to the defendant. Domain registrars and registries have often been compelled to take action in similar cases, but the EFF says this goes too far.

“The courts need to limit who can be bound by orders like this one, to prevent them from being abused,” EFF Senior Staff Attorney Mitch Stoltz informs TorrentFreak.

“In particular, domain name registrars and registries shouldn’t be ordered to help take down a website because of a dispute over the site’s contents. That invites others to use the domain name system as a tool for censorship.”

News of the Sci-Hub injunction has sparked controversy and confusion in recent days, not least because Sci-hub.cc became unavailable soon after. Instead of showing the usual search box, visitors now see a “403 Forbidden” error message. On top of that, the bulletproof Tor version of the site also went offline.

The error message indicates that there’s a hosting issue. While it’s easy to conclude that the court’s injunction has something to do with this, that might not necessarily be the case. Sci-Hub’s hosting company isn’t tied to the US and has a history of protecting sites from takedown efforts.

We reached out to Sci-Hub founder Alexandra Elbakyan for comment but we’re yet to receive a response. The site hasn’t posted any relevant updates on its social media pages either.

That said, the site is far from done. In addition to the Tor domain, Sci-Hub has several other backups in place such as Sci-Hub.io and Sci-Hub.ac, which are up and running as usual.


US Court Grants ISPs and Search Engine Blockade of Sci-Hub

Post Syndicated from Ernesto original https://torrentfreak.com/us-court-grants-isps-and-search-engine-blockade-of-sci-hub-171106/

Earlier this year the American Chemical Society (ACS), a leading source of academic publications in the field of chemistry, filed a lawsuit against Sci-Hub and its operator Alexandra Elbakyan.

The non-profit organization publishes tens of thousands of articles a year in its peer-reviewed journals. Because many of these are available for free on Sci-Hub, ACS wants to be compensated.

Sci-Hub was made aware of the legal proceedings but did not appear in court. As a result, a default was entered against the site.

In addition to millions of dollars in damages, ACS also requested third-party Internet intermediaries to take action against the site.

The broad request was later adopted in a recommendation from Magistrate Judge John Anderson. This triggered a protest from the tech industry trade group CCIA, which represents global tech firms including Google, Facebook, and Microsoft, that warned against the broad implications. However, this amicus brief was denied.

Just before the weekend, US District Judge Leonie Brinkema issued a final decision which is a clear win for ACS. The publisher was awarded the maximum statutory damages of $4.8 million for 32 infringing works, as well as a permanent injunction.

The injunction is not limited to domain name registrars and registries, but extends to search engines, ISPs, and hosting companies too, which can be ordered to stop linking to or offering services to Sci-Hub.

“Ordered that any person or entity in active concert or participation with Defendant Sci-Hub and with notice of the injunction, including any Internet search engines, web hosting and Internet service providers, domain name registrars, and domain name registries, cease facilitating access to any or all domain names and websites through which Sci-Hub engages in unlawful access to, use, reproduction, and distribution of ACS’s trademarks or copyrighted works,” the injunction reads.

part of the injunction

There is a small difference with the recommendation from the Magistrate Judge. Instead of applying the injunction to all persons “in privity” with Sci-Hub, it now applies to those who are “in active concert or participation” with the pirate site.

The injunction means that Internet providers, such as Comcast, can be requested to block users from accessing Sci-Hub. That’s a big deal since pirate site blockades are not common in the United States. The same is true for search engine blocking of copyright-infringing sites.

It’s clear that the affected Internet services will not be happy with the outcome. While the CCIA’s attempt to be heard in the case failed, it’s likely that they will protest the injunction when ACS tries to enforce it.

Previously, Cloudflare objected to a similar injunction where the RIAA argued that it was “in active concert or participation” with the pirate site MP3Skull. Here, Cloudflare countered that the DMCA protects the company from liability for the copyright infringements of its customers, limiting the scope of anti-piracy injunctions.

However, a Florida federal court ruled that the DMCA doesn’t apply in these cases.

It’s likely that ISPs and search engines will lodge similar protests if ACS tries to enforce the injunction against them.

While this case is crucial for copyright holders and Internet services, Sci-Hub itself doesn’t seem too bothered by the blocking prospect or the millions in damages it must pay on paper.

It already owes Elsevier $15 million, which it can’t pay, and a few million more or less doesn’t change anything. Also, the site has a Tor version which can’t be blocked by Internet providers, so determined scientists will still be able to access the site if they want.

The full order is available here (pdf) and a copy of the injunction can be found here (pdf).

Source: TF, for the latest info on copyright, file-sharing, torrent sites and ANONYMOUS VPN services.

Improved Search for Backblaze’s Blog

Post Syndicated from Roderick Bauer original https://www.backblaze.com/blog/using-relevannssi-wordpress-search/

Search has become the most powerful method to find content on the Web, both for finding websites themselves and for discovering information within websites. Our blog readers find content in both ways — using Google, Bing, Yahoo, Ask, DuckDuckGo, and other search engines to follow search results directly to our blog, and using the site search function once on our blog to find content in the blog posts themselves.

There’s a Lot of Great Content on the Backblaze Blog

Backblaze’s CEO Gleb Budman wrote the first post for this blog in March of 2008. Since that post there have been 612 more. There’s a lot of great content on this blog, as evidenced by the more than two million page views we’ve had since the beginning of this year. We typically publish two blog posts per week on a variety of topics, but we focus primarily on cloud storage technology, data backup, company news, and how-to articles on using cloud storage and various hardware and software solutions.

Earlier this year we initiated a series of posts on entrepreneurship by our CEO and co-founder, Gleb Budman, which has proven tremendously popular. We also occasionally publish something a little lighter, such as our current Halloween video contest — there’s still time to enter!

Blog search box

The Site Search Box — Your gateway to Backblaze blog content

We Could do a Better Job of Helping You Find It

I joined Backblaze as Content Director in July of this year. During the application process, I spent quite a bit of time reading through the blog to understand the company, the market, and its customers. That’s a lot of reading. I used the site search many times to uncover topics and posts, and discovered that site search had a number of weaknesses that made it less-than-easy to find what I was looking for.

These site search weaknesses included:

  - Searches were case sensitive, so visitors could easily miss content capitalized differently than their search terms
  - Results showed no date or author information, so visitors couldn’t tell how recent a post was or who wrote it
  - Search terms were not highlighted in context, so visitors had to scrutinize the results to find the terms in each post
  - There was no indication of the number of results or pages of results, so visitors didn’t know how fruitful a search had been
  - There was no record of the search terms visitors used, so we couldn’t tell what our visitors were searching for!

I wanted to make it easier for blog visitors to find all the great content on the Backblaze blog and help me understand what our visitors are searching for. To do that, we needed to upgrade our site search.

I started with a list of goals I wanted for site search.

  1. Make it easier to find content on the blog
  2. Provide a summary of what was found
  3. Search the comments as well as the posts
  4. Highlight the search terms in the results to help find them in context
  5. Provide a record of searches to help me understand what interests our readers

I had the goals, now how could I find a solution to achieve them?

Our blog is built on WordPress, which has a built-in site search function that could be described as simply adequate. The most obvious of its limitations is that search results are listed chronologically, not based on “most popular,” “most occurring,” or any other metric that might make the result more relevant to your interests.
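The difference is easy to see with a toy example. The posts and the naive term-frequency scoring below are hypothetical, not how WordPress or any plugin actually works internally, but they show why date-ordered results can bury the best match:

```python
from collections import Counter

# Hypothetical corpus: (title, publish_date, body) tuples.
posts = [
    ("Hard Drive Stats", "2017-05-09", "drive stats drive failure rates drive models"),
    ("Company News",     "2017-10-01", "a quick note that mentions drive once"),
    ("Backup Basics",    "2016-03-15", "how to back up your drive and restore a drive"),
]

def score(query, body):
    """Naive relevance: count how often each query term appears."""
    words = Counter(body.lower().split())
    return sum(words[t] for t in query.lower().split())

# Chronological ordering: newest first, regardless of how well a post matches.
by_date = sorted(posts, key=lambda p: p[1], reverse=True)

# Relevance ordering: best match for the query first.
by_relevance = sorted(posts, key=lambda p: score("drive", p[2]), reverse=True)

print([p[0] for p in by_date])       # newest post leads
print([p[0] for p in by_relevance])  # post with the most 'drive' mentions leads
```

Real search plugins use far more sophisticated weighting (titles, comments, word positions), but the ordering principle is the same.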

The Search for Improved (Site) Search

An obvious choice to improve site search would be to adopt Google Site Search, as many websites and blogs have done. Unfortunately, I quickly discovered that Google is sunsetting Site Search by April of 2018. That left the choice among a number of third-party services or WordPress-specific solutions. My immediate inclination was to see what is available specifically for WordPress.

There are a handful of search plugins for WordPress. One stood out to me for the number of installations (100,000+) and overwhelmingly high reviews: Relevanssi. Still, I had a number of questions. The first question was whether the plugin retained any search data from our site — I wanted to make sure that the privacy of our visitors is maintained, and even harvesting anonymous search data would not be acceptable to Backblaze. I wrote to the developer and was pleased by the responsiveness from Relevanssi’s creator, Mikko Saari. He explained to me that Relevanssi doesn’t have access to any of the search data from the sites using his plugin. Receiving a quick response from a developer is always a good sign. Other signs of a good WordPress plugin are recent updates and an active support forum.

Our solution: Relevanssi for Site Search

The WordPress plugin Relevanssi met all of our criteria, so we installed the plugin and switched to using it for site search in September.

In addition to solving the problems listed above, our search results are now displayed based on relevance instead of date, which is the default behavior of WordPress search. That capability is very useful on our blog where a lot of the content from years ago is still valuable — often called evergreen content. The new site search also enables visitors to search using the Boolean expressions AND and OR. For example, a visitor can search for “seagate AND drive,” and see results that only include both words. Alternatively, a visitor can search for “seagate OR drive” and see results that include either word.
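The AND/OR behavior boils down to simple set logic over the words in each post. A minimal sketch, with made-up documents (this is not Relevanssi’s code, just the idea):

```python
def matches(query_terms, text, mode="AND"):
    """Return True if the text satisfies the Boolean query."""
    words = set(text.lower().split())
    hits = [t.lower() in words for t in query_terms]
    # AND requires every term to be present; OR requires at least one.
    return all(hits) if mode == "AND" else any(hits)

docs = [
    "seagate drive stats for the quarter",
    "a new seagate model announced",
    "how we test every hard drive",
]

and_results = [d for d in docs if matches(["seagate", "drive"], d, "AND")]
or_results  = [d for d in docs if matches(["seagate", "drive"], d, "OR")]

print(len(and_results))  # only the first doc contains both terms
print(len(or_results))   # every doc contains at least one of the terms
```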

screenshot of relevannssi wordpress search results

Search results showing total number of results, hits and their location, and highlighted search terms in context

Visitors can put search terms in quotation marks to search for an entire phrase. For example, a visitor can search for “2016 drive stats” and see results that include only that exact phrase. In addition, the site search results come with a summary, showing where the results were found (title, post, or comments). Search terms are highlighted in yellow in the content, showing exactly where the search result was found.
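Phrase matching and highlighting can both be expressed as one case-insensitive substring search that wraps its hits in markup. A rough sketch of the idea (hypothetical helper, not the plugin’s implementation):

```python
import re

def search(query, text):
    """Quoted queries match the exact phrase; hits are wrapped in <mark>."""
    phrase = query.strip('"')
    pattern = re.compile(re.escape(phrase), re.IGNORECASE)
    if not pattern.search(text):
        return None  # phrase not present at all
    # Preserve the post's original capitalization while marking the hit.
    return pattern.sub(lambda m: f"<mark>{m.group(0)}</mark>", text)

post = "Our 2016 Drive Stats report covers failure rates by model."

print(search('"2016 drive stats"', post))
# The phrase matches despite different capitalization, and the hit is
# wrapped so a stylesheet can render it highlighted in yellow.
print(search('"2017 drive stats"', post))  # None — phrase not in the post
```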

Here’s an example of a popular post that shows up in searches. Hard Drive Stats for Q1 2017 was published on May 9, 2017. Since September 4, it has shown up over 150 times in site searches, and in the last 90 days it has been viewed over 53,000 times on our blog.

Hard Drive Stats for Q1 2017

The Results Tell the Story

Since initiating the new search on our blog on September 4, there have been almost 23,000 site searches conducted, so we know you are using it. We’ve implemented pagination for the blog feed and search results so you know how many pages of results there are and made it easier to navigate to them.

Now that we have this site search data, you likely are wondering which are the most popular search terms on our blog. Here are some of the top searches:

What Do You Search For?

Please tell us how you use site search and whether there are any other capabilities you’d like to see that would make it easier to find content on our blog.

The post Improved Search for Backblaze’s Blog appeared first on Backblaze Blog | Cloud Storage & Cloud Backup.

SQLiv – SQL Injection Dork Scanning Tool

Post Syndicated from Darknet original https://www.darknet.org.uk/2017/10/sqliv-sql-injection-dork-scanning-tool/?utm_source=rss&utm_medium=social&utm_campaign=darknetfeed

SQLiv – SQL Injection Dork Scanning Tool

SQLiv is a Python-based massive SQL Injection dork scanning tool which uses Google, Bing or Yahoo for targeted scanning, multiple-domain scanning or reverse domain scanning.

SQLiv Massive SQL Injection Scanner Features

Both the SQLi scanning and domain info checking are done in a multiprocess manner so the script is super fast at scanning a lot of URLs. It’s a fairly new tool and there are plans for more features and to add support for other search engines like DuckDuckGo.
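The speed comes from fanning the URL list out across a pool of workers rather than checking each one in turn. The sketch below illustrates that pattern only; it is not SQLiv’s code, and the `scan` function is a harmless placeholder (a real scanner would fetch each URL and probe its parameters):

```python
from concurrent.futures import ThreadPoolExecutor

def scan(url):
    """Placeholder check: just flag URLs that carry a query string."""
    return url, "?" in url

urls = [
    "http://example.com/page.php?id=1",
    "http://example.com/about.html",
    "http://example.com/item.php?cat=2",
]

# A worker pool checks many URLs concurrently — the pattern that makes
# this style of scanner fast over large lists of search-engine results.
with ThreadPoolExecutor(max_workers=4) as pool:
    results = list(pool.map(scan, urls))

flagged = [u for u, hit in results if hit]
print(flagged)
```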

Read the rest of SQLiv – SQL Injection Dork Scanning Tool now! Only available at Darknet.

Netflix Expands Content Protection Team to Reduce Piracy

Post Syndicated from Ernesto original https://torrentfreak.com/netflix-expands-content-protection-team-to-reduce-piracy-171015/

There is little doubt that, in the United States and many other countries, Netflix has become the standard for watching movies on the Internet.

Despite the widespread availability, however, Netflix originals are widely pirated. Episodes from House of Cards, Narcos, and Orange is the New Black are downloaded and streamed millions of times through unauthorized platforms.

The streaming giant is obviously not happy with this situation and has ramped up its anti-piracy efforts in recent years. Since last year the company has sent out over a million takedown requests to Google alone and this volume continues to expand.

This growth coincides with an expansion of the company’s internal anti-piracy division. A new job posting shows that Netflix is expanding this team with a Copyright and Content Protection Coordinator. The ultimate goal is to reduce piracy to a fringe activity.

“The growing Global Copyright & Content Protection Group is looking to expand its team with the addition of a coordinator,” the job listing reads.

“He or she will be tasked with supporting the Netflix Global Copyright & Content Protection Group in its internal tactical take down efforts with the goal of reducing online piracy to a socially unacceptable fringe activity.”

Among other things, the new coordinator will evaluate new technological solutions to tackle piracy online.

More old-fashioned takedown efforts are also part of the job. This includes monitoring well-known content platforms, search engines and social network sites for pirated content.

“Day to day scanning of Facebook, YouTube, Twitter, Periscope, Google Search, Bing Search, VK, DailyMotion and all other platforms (including live platforms) used for piracy,” is listed as one of the main responsibilities.

Netflix’ Copyright and Content Protection Coordinator Job

The coordinator is further tasked with managing Facebook’s Rights Manager and YouTube’s Content-ID system, to prevent circumvention of these piracy filters. Experience with fingerprinting technologies and other anti-piracy tools will be helpful in this regard.

Netflix doesn’t do all the copyright enforcement on its own though. The company works together with other media giants in the recently launched “Alliance for Creativity and Entertainment” that is spearheaded by the MPAA.

In addition, the company also uses the takedown services of external anti-piracy outfits to target more traditional infringement sources, such as cyberlockers and piracy streaming sites. The coordinator has to keep an eye on these as well.

“Liaise with our vendors on manual takedown requests on linking sites and hosting sites and gathering data on pirate streaming sites, cyberlockers and usenet platforms.”

The above shows that Netflix is doing its best to prevent piracy from getting out of hand. It’s definitely taking the issue more seriously than a few years ago when the company didn’t have much original content.

The switch from being merely a distribution platform to becoming a major content producer and copyright holder has changed the stakes. Netflix hasn’t won the war on piracy, it’s just getting started.

Source: TF, for the latest info on copyright, file-sharing, torrent sites and ANONYMOUS VPN services.

Tech Giants Protest Looming US Pirate Site Blocking Order

Post Syndicated from Ernesto original https://torrentfreak.com/tech-giants-protest-looming-us-pirate-site-blocking-order-171013/

While domain seizures against pirate sites are relatively common in the United States, ISP and search engine blocking is not. This could change soon though.

In an ongoing case against Sci-Hub, regularly referred to as the “Pirate Bay of Science,” a magistrate judge in Virginia recently recommended a broad order which would require search engines and Internet providers to block the site.

The recommendation followed a request from the academic publisher American Chemical Society (ACS) that wants these third-party services to make the site in question inaccessible. While Sci-Hub has chosen not to defend itself, a group of tech giants has now stepped in to prevent the broad injunction from being issued.

This week the Computer & Communications Industry Association (CCIA), which includes members such as Cloudflare, Facebook, and Google, asked the court to limit the proposed measures. In an amicus curiae brief submitted to the Virginia District Court, they share their concerns.

“Here, Plaintiff is seeking—and the Magistrate Judge has recommended—a permanent injunction that would sweep in various Neutral Service Providers, despite their having violated no laws and having no connection to this case,” CCIA writes.

According to the tech companies, neutral service providers are not “in active concert or participation” with the defendant, and should, therefore, be excluded from the proposed order.

While search engines may index Sci-Hub and ISPs pass on packets from this site, they can’t be seen as “confederates” that are working together with them to violate the law, CCIA stresses.

“Plaintiff has failed to make a showing that any such provider had a contract with these Defendants or any direct contact with their activities—much less that all of the providers who would be swept up by the proposed injunction had such a connection.”

Even if one of the third-party services could be found liable, the matter should be resolved under the DMCA, which expressly prohibits such broad injunctions, the CCIA claims.

“The DMCA thus puts bedrock limits on the injunctions that can be imposed on qualifying providers if they are named as defendants and are held liable as infringers. Plaintiff here ignores that.

“What ACS seeks, in the posture of a permanent injunction against nonparties, goes beyond what Congress was willing to permit, even against service providers against whom an actual judgment of infringement has been entered. That request must be rejected.”

The tech companies hope the court will realize that the injunction recommended by the magistrate judge would set a dangerous precedent that goes beyond what the law is intended for, and will impose limits in response to their concerns.

It will be interesting to see whether any copyright holder groups will also chime in, to argue the opposite.

CCIA’s full amicus curiae brief is available here (pdf).

Source: TF, for the latest info on copyright, file-sharing, torrent sites and ANONYMOUS VPN services.

SOPA Ghosts Hinder U.S. Pirate Site Blocking Efforts

Post Syndicated from Ernesto original https://torrentfreak.com/sopa-ghosts-hinder-u-s-pirate-site-blocking-efforts-171008/

Website blocking has become one of the entertainment industries’ favorite anti-piracy tools.

All over the world, major movie and music industry players have gone to court demanding that ISPs take action, often with great success.

Internal MPAA research showed that website blockades help to deter piracy, and former boss Chris Dodd said that they are one of the most effective anti-piracy tools available.

While not everyone is in agreement on this, the numbers are used to lobby politicians and convince courts. Interestingly, however, nothing is happening in the United States, which is where most pirate site visitors come from.

This is baffling to many people. Why would US-based companies go out of their way to demand ISP blocking in the most exotic locations, but fail to do the same at home?

We posed this question to Neil Turkewitz, RIAA’s former Executive Vice President International, who currently runs his own consulting group.

The main reason why pirate site blocking requests have not yet been made in the United States is down to SOPA. When the proposed SOPA legislation made headlines five years ago there was a massive backlash against website blocking, which isn’t something copyright groups want to reignite.

“The legacy of SOPA is that copyright industries want to avoid resurrecting the ghosts of SOPA past, and principally focus on ways to creatively encourage cooperation with platforms, and to use existing remedies,” Turkewitz tells us.

Instead of taking the likes of Comcast and Verizon to court, the entertainment industries focused on voluntary agreements, such as the now-defunct Copyright Alerts System. However, that doesn’t mean that website blocking and domain seizures are not an option.

“SOPA made ‘website blocking’ as such a four-letter word. But this is actually fairly misleading,” Turkewitz says.

“There have been a variety of civil and criminal actions addressing the conduct of entities subject to US jurisdiction facilitating piracy, regardless of the source, including hundreds of domain seizures by DHS/ICE.”

Indeed, there are plenty of legal options already available to do much of what SOPA promised. ABS-CBN has taken over dozens of pirate site domain names through the US court system, most recently through an ex parte order, meaning that the site owners had no opportunity to defend themselves before they lost their domains.

ISP and search engine blocking is also around the corner. As we reported earlier this week, a Virginia magistrate judge recently recommended an injunction which would require search engines and Internet providers to prevent users from accessing Sci-Hub.

Still, the major movie and music companies are not yet using these tools to take on The Pirate Bay or other major pirate sites. If it’s so easy, then why not? Apparently, SOPA may still be in the back of their minds.

Interestingly, the RIAA’s former top executive wasn’t a fan of SOPA when it was first announced, as it wouldn’t do much to extend the legal remedies that were already available.

“I actually didn’t like SOPA very much since it mostly reflected existing law and maintained a paradigm that didn’t involve ISP’s in creative interdiction, and simply preserved passivity. To see it characterized as ‘copyright gone wild’ was certainly jarring and incongruous,” Turkewitz says.

Ironically, it looks like a bill that failed to pass, and didn’t impress some copyright holders to begin with, is still holding them back after five years. They’re certainly not using all the legal options available to avoid SOPA comparison. The question is, for how long?

Source: TF, for the latest info on copyright, file-sharing, torrent sites and ANONYMOUS VPN services.

Judge Recommends ISP and Search Engine Blocking of Sci-Hub in the US

Post Syndicated from Ernesto original https://torrentfreak.com/judge-recommends-isp-search-engine-blocking-sci-hub-us-171003/

Earlier this year the American Chemical Society (ACS), a leading source of academic publications in the field of chemistry, filed a lawsuit against Sci-Hub and its operator Alexandra Elbakyan.

The non-profit organization publishes tens of thousands of articles a year in its peer-reviewed journals. Because many of these are available for free on Sci-Hub, ACS wants to be compensated.

Sci-Hub was made aware of the legal proceedings but did not appear in court. As a result, a default was entered against the site. In addition to millions of dollars in damages, ACS also requested third-party Internet intermediaries to take action against the site.

While the request is rather unprecedented for the US, as it includes search engine and ISP blocking, Magistrate Judge John Anderson has included these measures in his recommendations.

Judge Anderson agrees that Sci-Hub is guilty of copyright and trademark infringement. In addition to $4,800,000 in statutory damages, he recommends a broad injunction that would require search engines, ISPs, domain registrars and other services to block Sci-Hub’s domain names.

“… the undersigned recommends that it be ordered that any person or entity in privity with Sci-Hub and with notice of the injunction, including any Internet search engines, web hosting and Internet service providers, domain name registrars, and domain name registries, cease facilitating access to any or all domain names and websites through which Sci-Hub engages in unlawful access to, use, reproduction, and distribution of ACS’s trademarks or copyrighted works.”

The recommendation

In addition to the above, domain registries and registrars will also be required to suspend Sci-Hub’s domain names. This also happened previously in a different lawsuit, but Sci-Hub swiftly moved to a new domain at the time.

“Finally, the undersigned recommends that it be ordered that the domain name registries and/or registrars for Sci-Hub’s domain names and websites, or their technical administrators, shall place the domain names on registryHold/serverHold or such other status to render the names/sites non-resolving,” the recommendation adds.

If the U.S. District Court Judge adopts this recommendation, it would mean that Internet providers such as Comcast could be ordered to block users from accessing Sci-Hub. That’s a big deal since pirate site blockades are not common in the United States.

This would likely trigger a response from affected Internet services, which generally want to avoid being dragged into these cases. They certainly don’t want such a far-reaching measure to be introduced through a default order.

Sci-Hub itself doesn’t seem to be too bothered by the blocking prospect or the millions in damages it faces. The site has a Tor version which can’t be blocked by Internet providers, so determined scientists will still be able to access the site if they want.

Magistrate Judge John Anderson’s full findings of fact and recommendations are available here (pdf).

Source: TF, for the latest info on copyright, file-sharing, torrent sites and ANONYMOUS VPN services.

20th Century Fox is Looking for Anti-Piracy Interns

Post Syndicated from Ernesto original https://torrentfreak.com/20th-century-fox-is-looking-for-anti-piracy-interns-170930/

Piracy remains one of the key threats for most Hollywood movie studios.

Most companies have entire departments dedicated to spotting infringing content, understanding the changing landscape, and figuring out how to respond.

20th Century Fox, for example, has its own Content Protection group, headed by Ron Wheeler. The group keeps an eye on emerging piracy threats and is currently looking for fresh blood.

The company has listed two new internships. The first is for a Graduate JD Law Student, who will be tasked with analyzing fair use cases and finding new targets for lawsuits, among other things.

“Interns will participate in the monitoring of and enforcement against such piracy, including conducting detailed copyright infringement and fair use analyses; identifying and researching litigation targets, and searching the internet for infringing copies of Fox content.”

Fox notes that basic knowledge of the principles of Copyright Law is a plus, but apparently not required. Students who take this internship will learn how film and television piracy affects the media industry and consumers, preparing them for future work in this field.

“This is a great opportunity for students interested in pursuing practice in the fields of Intellectual Property, Entertainment, or Media Law,” the job application explains.

A second anti-piracy internship that was posted recently is a search and analytics position. This includes organizing online copyright infringement intelligence and compiling this in analytical piracy reports for Fox executives.

Undergraduate – Research & Analytics

The research job posting shows that Fox keeps an eye on a wide range of piracy avenues including search engines, forums, eBay and pirate sites.

“Anti-Piracy Internet Investigations and Analysis including, but not limited to, internet research, forum site investigation, eBay searches, video forensics analysis review, database entry, general internet searches for Fox video content, review and summarize pirate websites, piracy trend analysis, and more.”

Those who complete the internship will have a thorough understanding of how widespread piracy issues are. It will provide insight into how this affects the movie industry and consumers alike, Fox explains.

While the average torrenter and streaming pirate might not be very eager to work for ‘the other side,’ these internships are ideal positions for students who have aspirations of working in the anti-piracy field. If any TorrentFreak readers plan to apply and get the job, we’ll be eager to hear what you’ve learned in a few months.

Source: TF, for the latest info on copyright, file-sharing, torrent sites and ANONYMOUS VPN services.

Russia Blocks 4,000 Pirate Sites Plus 41,000 Innocent as Collateral Damage

Post Syndicated from Andy original https://torrentfreak.com/russia-blocks-4000-pirate-sites-plus-41000-innocent-as-collateral-damage-170905/

After years of criticism from both international and local rightsholders, in 2013 the Russian government decided to get tough on Internet piracy.

Under new legislation, sites engaged in Internet piracy could find themselves blocked by ISPs, rendering them inaccessible to local citizens and solving the piracy problem. Well, that was the theory, at least.

More than four years on, Russia is still grappling with a huge piracy problem that refuses to go away. It has been blocking thousands of sites at a steady rate, including RuTracker, the country’s largest torrent platform, but still the problem persists.

Now, a new report produced by Roskomsvoboda, the Center for the Protection of Digital Rights, and the Pirate Party of Russia, reveals a system that has not only failed to reach its stated aims but is also having a negative effect on the broader Internet.

“It’s already been four years since the creation of this ‘anti-piracy machine’ in Russia. The first amendments related to the fight against ‘piracy’ in the network came into force on August 1, 2013, and since then this mechanism has been twice revised,” Roskomsvoboda said in a statement.

“[These include] the emergence of additional responsibilities to restrict access to network resources and increase the number of subjects who are responsible for removing and blocking content. Since that time, several ‘purely Russian’ trends in ‘anti-piracy’ and trade in rights have also emerged.”

These revisions, which include the permanent blocking of persistently infringing sites and the planned blocking of mirror sites and anonymizers, have been widely documented. However, the researchers say that they want to shine a light on the effects of blocking procedures and subsequent actions that are causing significant issues for third-parties.

As part of the study, the authors collected data on the cases presented to the Moscow City Court by the most active plaintiffs in anti-piracy actions (mainly TV show distributors and music outfits including Sony Music Entertainment and Universal Music). They describe the court process and system overall as lacking.

“The court does not conduct a ‘triple test’ and ignores the position, rights and interests of respondents and third parties. It does not check the availability of illegal information on sites and appeals against decisions of the Moscow City Court do not bring any results,” the researchers write.

“Furthermore, the cancellation of the unlimited blocking of a site is simply impossible and in respect of hosting providers and security services, those web services are charged with all the legal costs of the case.”

The main reason behind this situation is that ‘pirate’ site operators rarely (if ever) turn up to defend themselves. If at some point they are found liable for infringement under the Criminal Code, they can be liable for up to six years in prison, hardly an incentive to enter into a copyright process voluntarily. As a result, hosts and other providers act as respondents.

This means that these third-party companies appear as defendants in the majority of cases, a position they find both “unfair and illogical.” They’re also said to be confused about how they are supposed to fulfill the blocking demands placed upon them by the Court.

“About 90% of court cases take place without the involvement of the site owner, since the requirements are imposed on the hosting provider, who is not responsible for the content of the site,” the report says.

Nevertheless, hosts and other providers have been ordered to block huge numbers of pirate sites.

According to the researchers, the total has now gone beyond 4,000 domains, but the knock-on effect is much more expansive. Due to the legal requirement to block sites by both IP address and other means, third-party sites that share those IP addresses get caught up as collateral damage. The report states that more than 41,000 innocent sites have been blocked as the result of supposedly targeted court orders.
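The shared-hosting problem the researchers describe can be sketched in a few lines of Python. The domain names and domain-to-IP mapping below are invented for illustration; the point is simply that an IP-level block covers every site on that address, not just the target:

```python
# Hypothetical DNS data: several unrelated sites on one shared host.
dns = {
    "pirate-site.example": "203.0.113.10",
    "innocent-blog.example": "203.0.113.10",  # same shared host
    "innocent-shop.example": "203.0.113.10",  # same shared host
    "unrelated.example": "198.51.100.7",
}

blocked_domains = {"pirate-site.example"}

# An IP-level block covers every address used by a blocked domain...
blocked_ips = {dns[d] for d in blocked_domains}

# ...so any other domain on those addresses becomes unreachable too.
collateral = [d for d, ip in dns.items()
              if ip in blocked_ips and d not in blocked_domains]

print(sorted(collateral))  # the innocent sites swept up by the block
```

With thousands of targeted domains and widespread shared hosting, this is how a 4,000-domain blocklist can knock out tens of thousands of bystander sites.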

But with collateral damage mounting, the main issue as far as copyright holders are concerned is whether piracy is decreasing as a result. The report draws few conclusions on that front but notes that blocks are a blunt instrument. While they may succeed in stopping some people from accessing ‘pirate’ domains, the underlying infringement carries on regardless.

“Blocks create restrictions only for Internet users who are denied access to sites, but do not lead to the removal of illegal information or prevent intellectual property violations,” the researchers add.

With no sign of the system being overhauled to tackle the issues raised in the study (pdf, Russian), Russia is now set to introduce yet more anti-piracy measures.

As recently reported, a new law requiring search engines to remove listings for ‘pirate’ mirror sites comes into effect October 1. Exactly a month later, on November 1, VPNs and anonymization tools will have to be removed too, if they fail to meet the standards required under state regulation.

Source: TF, for the latest info on copyright, file-sharing, torrent sites and ANONYMOUS VPN services.

Sci-Hub Faces $4.8 Million Piracy Damages and ISP Blocking

Post Syndicated from Ernesto original https://torrentfreak.com/sci-hub-faces-48-million-piracy-damages-and-isp-blocking-170905/

In June, a New York District Court handed down a default judgment against Sci-Hub.

The pirate site, operated by Alexandra Elbakyan, was ordered to pay $15 million in piracy damages to academic publisher Elsevier.

With the ink on this order barely dry, another publisher soon tagged on with a fresh complaint. The American Chemical Society (ACS), a leading source of academic publications in the field of chemistry, also accused Sci-Hub of mass copyright infringement.

Founded more than 140 years ago, the non-profit organization has around 157,000 members and researchers who publish tens of thousands of articles a year in its peer-reviewed journals. Because many of its works are available for free on Sci-Hub, ACS wants to be compensated.

Sci-Hub was made aware of the legal proceedings but did not appear in court. As a result, a default was entered against the site, and a few days ago ACS specified its demands, which include $4.8 million in piracy damages.

“Here, ACS seeks a judgment against Sci-Hub in the amount of $4,800,000—which is based on infringement of a representative sample of publications containing the ACS Copyrighted Works multiplied by the maximum statutory damages of $150,000 for each publication,” they write.

The publisher notes that the maximum statutory damages are only requested for 32 of its 9,000 registered works. This still adds up to a significant sum of money, of course, but that is needed as a deterrent, ACS claims.
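As a quick sanity check, the $4.8 million figure is simply the statutory maximum applied to each work in the representative sample:

```python
# ACS's request: the US statutory maximum for willful infringement,
# applied to each of the 32 works in its representative sample.
works_in_sample = 32
max_statutory_per_work = 150_000

total_damages = works_in_sample * max_statutory_per_work
print(f"${total_damages:,}")  # $4,800,000
```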

“Sci-Hub’s unabashed flouting of U.S. Copyright laws merits a strong deterrent. This Court has awarded a copyright holder maximum statutory damages where the defendant’s actions were ‘clearly willful’ and maximum damages were necessary to ‘deter similar actors in the future’,” they write.

Although the deterrent effect may sound plausible in most cases, another $4.8 million in debt is unlikely to worry Sci-Hub’s owner, as she can’t pay it off anyway. However, there’s also a broad injunction on the table that may be more of a concern.

The requested injunction prohibits Sci-Hub’s owner from continuing to operate the site. It also bars a wide range of other service providers from helping others access it.

Specifically, it restrains “any Internet search engines, web hosting and Internet service providers, domain name registrars, and domain name registries, to cease facilitating access to any or all domain names and websites through which Defendant Sci-Hub engages in unlawful access to [ACS’s works].”

The above suggests that search engines may have to remove the site from their indexes while ISPs could be required to block their users’ access to the site as well, which goes quite far.

Since Sci-Hub is in default, ACS is likely to get what it wants. However, if the organization intends to enforce the order in full, it’s likely that some of these third-party services, including Internet providers, will have to spring into action.

While domain name registries are regularly ordered to suspend domains, search engine removals and ISP blocking are not common in the United States. It would, therefore, be no surprise if this case lingers a little while longer.

A copy of ACS’s proposed default judgment, obtained by TorrentFreak, is available here (pdf).

Source: TF, for the latest info on copyright, file-sharing, torrent sites and ANONYMOUS VPN services.

Search Engines Will Open Systems to Prove Piracy & VPN Blocking

Post Syndicated from Andy original https://torrentfreak.com/search-engines-will-open-systems-to-prove-piracy-vpn-blocking-170901/

Over the past several years, Russia has become something of a world leader when it comes to website blocking. Tens of thousands of websites are now blocked in the country on copyright infringement and a wide range of other grounds.

With circumvention technologies such as VPNs, however, Russian citizens are able to access blocked sites, a position that has irritated Russian authorities who are determined to control what information citizens are allowed to access.

After working on new legislation for some time, in late July President Vladimir Putin signed a new law which requires local telecoms watchdog Roskomnadzor to maintain a list of banned domains while identifying sites, services, and software that provide access to them.

Roskomnadzor is required to contact the operators of such services with a request for them to block banned resources. If they do not, then they themselves will become blocked. In addition, search engines are required to remove blocked resources from their search results, in order to discourage people from accessing them.

With compliance now a matter of law, attention has turned to how search engines can implement the required mechanisms. This week Roskomnadzor hosted a meeting with representatives of the largest Russian search engines, including Yandex, Sputnik, and Search Mail.ru, where this topic was top of the agenda.

Since failure to comply can result in a fine of around $12,000 per breach, search companies have a vested interest in the systems working well against not only pirate sites, but also mirrors and anonymization tools that provide access to them.

“During the meeting, a consolidated position on the implementation of new legislative requirements was developed,” Roskomnadzor reports.

“It was determined that the list of blocked resources to be removed from search results will be transferred to the operators of search engines in an automated process.”

While sending over lists of domains directly to search engines probably isn’t that groundbreaking, Roskomnadzor wants to ensure that companies like Yandex are also responding to the removal requests properly.

So, instead of simply carrying out test searches itself, it’s been agreed that the watchdog will gain direct access to the search engines’ systems, so that direct verification can take place.
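The filtering step the search engines must implement can be sketched roughly as follows, assuming the regulator's automated feed arrives as a set of banned domains. All domain names here are hypothetical:

```python
from urllib.parse import urlparse

# Blocklist as it might arrive from the regulator's automated feed
# (hypothetical domains).
blocklist = {"blocked-mirror.example", "pirate-vpn.example"}

def allowed(url: str) -> bool:
    """Keep a search result only if its hostname is not blocklisted."""
    return urlparse(url).hostname not in blocklist

results = [
    "https://blocked-mirror.example/watch/123",
    "https://legit-news.example/article",
]

filtered = [u for u in results if allowed(u)]
print(filtered)  # only the non-blocked result survives
```

Direct regulator access to the search systems would then let Roskomnadzor verify that listings like the first one really are being dropped.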

“In addition, preliminary agreements have been reached that the verification of the enforcement of the law by the search engines will be carried out through the interaction of the information systems of Roskomnadzor and the operators of search engines,” Roskomnadzor reports.

Time for search engines to come into full compliance is ticking away. The law requiring them to remove listings for ‘pirate’ mirror sites comes into effect October 1. Exactly a month later, on November 1, VPNs and anonymization tools will have to be removed too, if they fail to meet the standards required under state regulation.

Part of that regulation requires anonymization services to disclose the identities of their owners to the government.

Source: TF, for the latest info on copyright, file-sharing, torrent sites and ANONYMOUS VPN services.

Torrent Sites Suffer DDoS Attacks and Other Trouble

Post Syndicated from Ernesto original https://torrentfreak.com/torrent-sites-suffer-ddos-attacks-and-other-trouble-170901/

It’s not uncommon for torrent sites to suffer downtime due to technical issues. That happens pretty much every day.

But when close to a dozen large sites go offline, people start to ask questions.

This is exactly what happened this week. As reported previously, The Pirate Bay was hard to reach earlier, after a surge of traffic and a subsequent DDoS attack overloaded its servers. And they were not alone.

TorrentFreak spoke to several torrent site admins who noticed an increase of suspicious traffic which slowed down or toppled their sites, at least temporarily. While most have recovered, some sites remain offline today.

TorrentProject.se, one of the most used torrent search engines, has been down for nearly three days now. The site currently shows a “403 Forbidden” error message. Whether this is a harmless technical issue, the result of a DDoS attack, or worse, is unknown.

TorrentFreak reached out to the owner of the site but we have yet to hear back.

403 error

Another site that appears to be in trouble is WorldWideTorrents. This site, which was started after the KAT shutdown last year, is a home to many comic book fans. However, over the past few days the site has become unresponsive.

Based on WHOIS data, the site’s domain name has been suspended. The name servers were changed to “suspended-domain.com,” which means that it’s unlikely to be reinstated. WorldWideTorrents will reportedly return with a new domain but which one is currently unknown.
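The suspension shows up directly in the WHOIS record's name servers. A minimal sketch of such a check, where the record below is hypothetical and "suspended-domain.com" is the marker mentioned above:

```python
# Name-server hosts that signal a registrar suspension.
SUSPENSION_MARKERS = ("suspended-domain.com",)

def looks_suspended(name_servers):
    """True if any name server ends with a known suspension host."""
    return any(ns.lower().rstrip(".").endswith(marker)
               for ns in name_servers
               for marker in SUSPENSION_MARKERS)

# Hypothetical WHOIS name-server entries for the suspended domain.
record = ["NS1.SUSPENDED-DOMAIN.COM.", "NS2.SUSPENDED-DOMAIN.COM."]
print(looks_suspended(record))  # True
```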

Popular uploaders on the site such as Nemesis43, meanwhile, are still active on other sites.

WorldWideTorrents WHOIS

Then there’s also Isohunt.to, which has been unresponsive for over a week. The search engine, which launched in 2013 less than two weeks after isoHunt.com shut down, has now vanished as well.

With no word from the operators, we can only speculate about what happened. The site has seen a sharp decline in traffic over the past year, so its operators may simply have lost interest.

Isohunt.to is not responding

Those who now search for IsoHunt on Google are instead pointed to isohunts.to, which is a scam site advising users to download a “binary client,” which is little more than an ad.

The above shows that the torrent ecosystem remains vulnerable. DDoS attacks and domain issues are nothing new, but after the shutdown of KAT, Torrentz, Extratorrent, and other giants, the remaining sites have to carry a larger burden.

Source: TF, for the latest info on copyright, file-sharing, torrent sites and ANONYMOUS VPN services.

Showtime Seeks Injunction to Stop Mayweather v McGregor Piracy

Post Syndicated from Andy original https://torrentfreak.com/showtime-seeks-injunction-to-stop-mayweather-v-mcgregor-piracy-170816/

It’s the fight that few believed would become reality but on August 26, at the T-Mobile Arena in Las Vegas, Floyd Mayweather Jr. will duke it out with UFC lightweight champion Conor McGregor.

Despite being labeled a freak show by boxing purists, it is set to become the biggest combat sports event of all time. Mayweather, undefeated in his professional career, will face brash Irishman McGregor, who has gained a reputation for accepting fights with anyone – as long as there’s a lot of money involved. Big money is definitely the theme of the Mayweather bout.

Dubbed “The Money Fight”, some predict it could pull in a billion dollars, with McGregor pocketing $100m and Mayweather almost certainly more. Many of those lucky enough to gain entrance on the night will have spent thousands on their tickets but for the millions watching around the world….iiiiiiiit’s Showtimmme….with hefty PPV prices attached.

Of course, not everyone will be handing over $89.95 to $99.99 to watch the event officially on Showtime. Large numbers will turn to the many hundreds of websites set to stream the fight for free online, which has the potential to reduce revenues for all involved. With that in mind, Showtime Networks has filed a lawsuit in California which attempts to preemptively tackle this piracy threat.

The suit targets a number of John Does said to be behind a network of dozens of sites planning to stream the fight online for free. Defendant 1, using the alias “Kopa Mayweather”, is allegedly the operator of LiveStreamHDQ, a site that Showtime has grappled with previously.

“Plaintiff has had extensive experience trying to prevent live streaming websites from engaging in the unauthorized reproduction and distribution of Plaintiff’s copyrighted works in the past,” the lawsuit reads.

“In addition to bringing litigation, this experience includes sending cease and desist demands to LiveStreamHDQ in response to its unauthorized live streaming of the record-breaking fight between Floyd Mayweather, Jr. and Manny Pacquiao.”

Showtime says that LiveStreamHDQ is involved in the operations of at least 41 other sites that have been set up to specifically target people seeking to watch the fight without paying. Each site uses a .US ccTLD domain name.

Sample of the sites targeted by the lawsuit

Showtime informs the court that the registrant email and IP addresses of the domains overlap, which provides further proof that they’re all part of the same operation. The TV network also highlights various statements on the sites in question which demonstrate intent to show the fight without permission, including the highly dubious “Watch From Here Mayweather vs Mcgregor Live with 4k Display.”
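The overlap analysis Showtime describes amounts to clustering domains by shared registration data. A toy version, with all records invented:

```python
from collections import defaultdict

# (domain, registrant_email) pairs; all values are invented.
records = [
    ("fight-stream-1.us", "operator@example.com"),
    ("fight-stream-2.us", "operator@example.com"),
    ("unrelated-blog.us", "someone-else@example.com"),
]

by_email = defaultdict(list)
for domain, email in records:
    by_email[email].append(domain)

# A registrant tied to multiple domains suggests a single operation.
clusters = {e: ds for e, ds in by_email.items() if len(ds) > 1}
print(clusters)
# {'operator@example.com': ['fight-stream-1.us', 'fight-stream-2.us']}
```

The same grouping can be run on registrant IP addresses, which is presumably how the 41 sites were tied together.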

In addition, the lawsuit is highly critical of efforts by the sites’ operator(s) to stuff the pages with fight-related keywords in order to draw in as much search engine traffic as they can.

“Plaintiff alleges that Defendants have engaged in such keyword stuffing as a form of search engine optimization in an effort to attract as much web traffic as possible in the form of Internet users searching for a way to access a live stream of the Fight,” it reads.

While site operators are expected to engage in such behavior, Showtime says that these SEO efforts have been particularly successful, obtaining high-ranking positions in major search engines for the would-be pirate sites.
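Keyword stuffing of this kind is easy to spot mechanically, since one phrase ends up dominating the page text. A rough illustration on an invented sample:

```python
import re
from collections import Counter

# Invented sample of stuffed page text.
page = ("Mayweather McGregor live stream watch Mayweather McGregor "
        "live free Mayweather McGregor live HD stream")

words = re.findall(r"[a-z]+", page.lower())
counts = Counter(words)

# Share of the page taken up by the single most repeated word;
# an unusually high ratio is a classic stuffing signal.
word, n = counts.most_common(1)[0]
print(word, n, round(n / len(words), 2))
```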

For instance, Showtime says that a Google search for “Mayweather McGregor Live” results in four of the target websites appearing in the first 100 results, i.e. the first 10 pages. Interestingly, however, to get that result searchers would need to put the search in quotes as shown above, since a plain search fails to turn anything up in hundreds of results.

At this stage, the important thing to note is that none of the sites are currently carrying links to the fight, because the fight is yet to happen. Nevertheless, Showtime is convinced that come fight night, all of the target websites will be populated with pirate links, accessible for free or after paying a fee. This needs to be stopped, it argues.

“Defendants’ anticipated unlawful distribution will impair the marketability and profitability of the Coverage, and interfere with Plaintiff’s own authorized distribution of the Coverage, because Defendants will provide consumers with an opportunity to view the Coverage in its entirety for free, rather than paying for the Coverage provided through Plaintiff’s authorized channels.

“This is especially true where, as here, the work at issue is live coverage of a one-time live sporting event whose outcome is unknown,” the network writes.

Showtime informs the court that it made efforts to contact the sites in question but received just a single response, from an individual who claimed to be a sports blogger who doesn’t offer streaming services. The undertone is one of disbelief.

In closing, Showtime demands a temporary restraining order, preliminary injunction, and permanent injunction, prohibiting the defendants from making the fight available in any way, and/or “forming new entities” in order to circumvent any subsequent court order. Compensation for suspected damages is also requested.

Showtime previously applied for and obtained a similar injunction to cover the (hugely disappointing) Mayweather v Pacquiao fight in 2015. In that case, websites were ordered to be taken down on the day before the fight.

Source: TF, for the latest info on copyright, file-sharing, torrent sites and ANONYMOUS VPN services.

Court Won’t Drop Case Against Alleged KickassTorrents Owner

Post Syndicated from Ernesto original https://torrentfreak.com/court-wont-drop-case-against-alleged-kickasstorrents-owner-170804/

Last summer, Polish law enforcement officers arrested Artem Vaulin, the alleged founder of KickassTorrents.

Polish authorities acted on a criminal complaint from the US Government, which accused Vaulin of criminal copyright infringement and money laundering.

While Vaulin is still awaiting the final decision in his extradition process in Poland, his US counsel tried to have the entire case thrown out with a motion to dismiss submitted to the Illinois District Court late last year.

One of the fundamental flaws of the case, according to the defense, is that torrent files themselves are not copyrighted content. In addition, they argued that any secondary copyright infringement claims would fail as these are non-existent under criminal law.

After a series of hearings and a long wait afterwards, US District Judge John Z. Lee has now issued his verdict (pdf).

In a 28-page memorandum and order, the motion to dismiss was denied on various grounds.

The court doesn’t contest that torrent files themselves are not protected content under copyright law. However, this argument ignores the fact that the files are used to download copyrighted material, the order reads.

“This argument, however, misunderstands the indictment. The indictment is not concerned with the mere downloading or distribution of torrent files,” Judge Lee writes.

“Granted, the indictment describes these files and charges Vaulin with operating a website dedicated to hosting and distributing them. But the protected content alleged to have been infringed in the indictment is a number of movies and other copyright protected media that users of Vaulin’s network purportedly downloaded and distributed…,” he adds.

In addition, the defense’s argument that secondary copyright infringement claims are non-existent under criminal law doesn’t hold either, according to the Judge’s decision.

Vaulin’s defense noted that the Government’s theory could expose other search engines, such as Google, to criminal liability. While this is theoretically possible, the court sees distinct differences and doesn’t aim to rule on all search engines in general.

“For present purposes, though, the Court need not decide whether and when a search engine operator might engage in conduct sufficient to constitute aiding and abetting criminal copyright infringement. The issue here is whether 18 U.S.C. § 2 applies to 17 U.S.C. § 506. The Court is persuaded that it does,” Judge Lee writes.

Based on these and other conclusions, the motion to dismiss was denied. This means that the case will move forward. The next step will be to see how the Polish court rules on the extradition request.

Vaulin’s lead counsel Ira Rothken is disappointed with the outcome. He stresses that while courts commonly construe indictments in a light most favorable to the government, this court went too far.

“Currently a person merely ‘making available’ a file on a network in California wouldn’t even be committing a civil copyright infringement under the ruling in Napster but under today’s ruling that same person doing it in Illinois could be criminally prosecuted by the United States,” Rothken informs TorrentFreak.

“If federal judges disagree on the state of the federal copyright law then people shouldn’t be criminally prosecuted absent clarification by Congress,” he adds.

The defense team is still considering the best options for appeal, and whether they want to go down that road. However, Rothken hopes that the Seventh Circuit Court of Appeals will address the issue in the future.

“We hope one day that the Seventh Circuit Court of Appeals will undo this ruling and the chilling effect it will have on internet search engines, user generated content sites, and millions of netizens globally,” Rothken notes.

For now, however, Vaulin’s legal team will likely shift its focus to preventing his extradition to the United States.

Source: TF, for the latest info on copyright, file-sharing, torrent sites and ANONYMOUS VPN services.