Tag Archives: auth

Security updates for Thursday

Post Syndicated from jake original https://lwn.net/Articles/731309/rss

Security updates have been issued by CentOS (git), Debian (firefox-esr and mariadb-10.0), Gentoo (bind and tnef), Mageia (kauth, kdelibs4, poppler, subversion, and vim), openSUSE (fossil, git, libheimdal, libxml2, minicom, nodejs4, nodejs6, openjpeg2, openldap2, potrace, subversion, and taglib), Oracle (git and kernel), Red Hat (git, groovy, httpd24-httpd, and mercurial), Scientific Linux (git), and SUSE (freeradius-server, ImageMagick, and subversion).

Showtime Seeks Injunction to Stop Mayweather v McGregor Piracy

Post Syndicated from Andy original https://torrentfreak.com/showtime-seeks-injunction-to-stop-mayweather-v-mcgregor-piracy-170816/

It’s the fight that few believed would become reality but on August 26, at the T-Mobile Arena in Las Vegas, Floyd Mayweather Jr. will duke it out with UFC lightweight champion Conor McGregor.

Despite being labeled a freak show by boxing purists, it is set to become the biggest combat sports event of all time. Mayweather, undefeated in his professional career, will face brash Irishman McGregor, who has gained a reputation for accepting fights with anyone – as long as there’s a lot of money involved. Big money is definitely the theme of the Mayweather bout.

Dubbed “The Money Fight”, it could pull in a billion dollars, some predict, with McGregor pocketing $100m and Mayweather almost certainly more. Many of those lucky enough to gain entrance on the night will have spent thousands on their tickets but for the millions watching around the world….iiiiiiiit’s Showtimmme….with hefty PPV prices attached.

Of course, not everyone will be handing over $89.95 to $99.99 to watch the event officially on Showtime. Large numbers will turn to the many hundreds of websites set to stream the fight for free online, which has the potential to reduce revenues for all involved. With that in mind, Showtime Networks has filed a lawsuit in California which attempts to preemptively tackle this piracy threat.

The suit targets a number of John Does said to be behind a network of dozens of sites planning to stream the fight online for free. Defendant 1, using the alias “Kopa Mayweather”, is allegedly the operator of LiveStreamHDQ, a site that Showtime has grappled with previously.

“Plaintiff has had extensive experience trying to prevent live streaming websites from engaging in the unauthorized reproduction and distribution of Plaintiff’s copyrighted works in the past,” the lawsuit reads.

“In addition to bringing litigation, this experience includes sending cease and desist demands to LiveStreamHDQ in response to its unauthorized live streaming of the record-breaking fight between Floyd Mayweather, Jr. and Manny Pacquiao.”

Showtime says that LiveStreamHDQ is involved in the operations of at least 41 other sites that have been set up to specifically target people seeking to watch the fight without paying. Each site uses a .US ccTLD domain name.

Sample of the sites targeted by the lawsuit

Showtime informs the court that the registrant email and IP addresses of the domains overlap, which provides further proof that they’re all part of the same operation. The TV network also highlights various statements on the sites in question which demonstrate intent to show the fight without permission, including the highly dubious “Watch From Here Mayweather vs Mcgregor Live with 4k Display.”

In addition, the lawsuit is highly critical of efforts by the sites’ operator(s) to stuff the pages with fight-related keywords in order to draw in as much search engine traffic as they can.

“Plaintiff alleges that Defendants have engaged in such keyword stuffing as a form of search engine optimization in an effort to attract as much web traffic as possible in the form of Internet users searching for a way to access a live stream of the Fight,” it reads.

While site operators are expected to engage in such behavior, Showtime says that these SEO efforts have been particularly successful, obtaining high-ranking positions in major search engines for the would-be pirate sites.

For instance, Showtime says that a Google search for “Mayweather McGregor Live” results in four of the target websites appearing in the first 100 results, i.e. the first 10 pages. Interestingly, however, to get that result searchers would need to put the search in quotes as shown above, since a plain search fails to turn anything up in hundreds of results.

At this stage, the important thing to note is that none of the sites are currently carrying links to the fight, because the fight is yet to happen. Nevertheless, Showtime is convinced that come fight night, all of the target websites will be populated with pirate links, accessible for free or after paying a fee. This needs to be stopped, it argues.

“Defendants’ anticipated unlawful distribution will impair the marketability and profitability of the Coverage, and interfere with Plaintiff’s own authorized distribution of the Coverage, because Defendants will provide consumers with an opportunity to view the Coverage in its entirety for free, rather than paying for the Coverage provided through Plaintiff’s authorized channels.

“This is especially true where, as here, the work at issue is live coverage of a one-time live sporting event whose outcome is unknown,” the network writes.

Showtime informs the court that it made efforts to contact the sites in question but had just a single response, from an individual who claimed to be a sports blogger who doesn’t offer streaming services. The undertone is one of disbelief.

In closing, Showtime demands a temporary restraining order, preliminary injunction, and permanent injunction, prohibiting the defendants from making the fight available in any way, and/or “forming new entities” in order to circumvent any subsequent court order. Compensation for suspected damages is also requested.

Showtime previously applied for and obtained a similar injunction to cover the (hugely disappointing) Mayweather v Pacquiao fight in 2015. In that case, websites were ordered to be taken down on the day before the fight.

Source: TF, for the latest info on copyright, file-sharing, torrent sites and ANONYMOUS VPN services.

Spinrilla Refuses to Share Its Source Code With the RIAA

Post Syndicated from Ernesto original https://torrentfreak.com/spinrilla-refuses-to-share-its-source-code-with-the-riaa-170815/

Earlier this year, a group of well-known labels targeted Spinrilla, a popular hip-hop mixtape site and accompanying app with millions of users.

The coalition of record labels, including Sony Music, Warner Bros. Records, and Universal Music Group, filed a lawsuit accusing the service of copyright infringement.

Both sides have started the discovery process and recently asked the court to rule on several unresolved matters. The parties begin with their statements of facts, clearly from opposite angles.

The RIAA remains confident that the mixtape site is ripping off music creators and wants its operators to be held accountable.

“Since Spinrilla launched, Defendants have facilitated millions of unauthorized downloads and streams of thousands of Plaintiffs’ sound recordings without Plaintiffs’ permission,” RIAA writes, complaining about “rampant” infringement on the site.

However, Spinrilla itself believes that the claims are overblown. The company points out that the RIAA’s complaint only lists a tiny fraction of all the songs uploaded by its users. These somehow slipped through its Audible Magic anti-piracy filter.

Where the RIAA paints a picture of rampant copyright infringement, the mixtape site stresses that the record labels are complaining about less than 0.001% of all the tracks they ever published.

“From 2013 to the present, Spinrilla users have uploaded about 1 million songs to Spinrilla’s servers and Spinrilla published about 850,000 of those. Plaintiffs are complaining that 210 of those songs are owned by them and published on Spinrilla without permission,” Spinrilla’s lawyers write.

“That means that Plaintiffs make no claim to 99.9998% of the songs on Spinrilla. Plaintiffs’ shouting of ‘rampant infringement on Spinrilla’, an accusation that Spinrilla was designed to allow easy and open access to infringing material, and assertion that ‘Defendants have facilitated millions of unauthorized downloads’ of those 210 songs is untrue – it is nothing more than a wish and a dream.”

The company reiterates that it’s a platform for independent musicians and that it doesn’t want to feature the Eminems and Biebers of this world, especially not without permission.

As for the discovery process, there are still several outstanding issues they need the Court’s advice on. Spinrilla has thus far produced 12,000 pages of documents and answered all RIAA interrogatories, but refuses to hand over certain information, including its source code.

According to Spinrilla, there is no reason for the RIAA to have access to its “crown jewel.”

“The source code is the crown jewel of any software based business, including Spinrilla. Even worse, Plaintiffs want an ‘executable’ version of Spinrilla’s source code, which would literally enable them to replicate Spinrilla’s entire website. Any Plaintiff could, in hours, delete all references to ‘Spinrilla,’ add its own brand and launch Spinrilla’s exact website.

“If we sued YouTube for hosting 210 infringing videos, would I be entitled to the source code for YouTube? There is simply no justification for Spinrilla sharing its source code with Plaintiffs,” Spinrilla adds.

The RIAA, on the other hand, argues that the source code will provide insight into several critical issues, including Spinrilla’s knowledge about infringing activity and its ability to terminate repeat copyright infringers.

In addition to the source code, the RIAA has also requested detailed information about the site’s users, including their download and streaming history. This request is too broad, the mixtape site argues, and has offered to provide information on the uploaders of the 210 infringing tracks instead.

It’s clear that the RIAA and Spinrilla disagree on various fronts and it will be up to the court to decide what information must be handed over. So far, however, the language used clearly shows that both parties are far from reaching some kind of compromise.

The first joint discovery statement is available in full here (pdf).

Source: TF, for the latest info on copyright, file-sharing, torrent sites and ANONYMOUS VPN services.

Roku Gets Tough on Pirate Channels, Warns Users

Post Syndicated from Ernesto original https://torrentfreak.com/roku-gets-tough-on-pirate-channels-warns-users-170815/

In recent years it has become much easier to stream movies and TV-shows over the Internet.

Legal services such as Netflix and HBO are flourishing, but there’s also a darker side to this streaming epidemic. Millions of people are streaming from unauthorized sources, often paired with perfectly legal streaming platforms and devices.

Hollywood insiders have dubbed this trend “Piracy 3.0” and are actively working with stakeholders to address the threat. One of the companies rightsholders are working with is Roku, known for its easy-to-use media players.

Earlier this year Roku was harshly confronted with this new piracy crackdown when a Mexican court ordered local retailers to take its media player off the shelves. While this legal battle isn’t over yet, it was clear to Roku that misuse of its platform wasn’t without consequences.

While Roku never permitted any infringing content, it appears that the company has recently made some adjustments to better deal with the problem, or at least clarify its stance.

Pirate content generally doesn’t show up in the official Roku Channel Store but is directly loaded onto the device through third-party “private” channels. A few weeks ago, Roku renamed these “private” channels to “non-certified” channels, while making it very clear that copyright infringement is not allowed.

A “WARNING!” message that pops up during the installation of these third-party channels stresses that Roku has no control over the content. In addition, the company notes that these channels may be removed if they link to copyright-infringing content.

Roku Warning

“By continuing, you acknowledge you are accessing a non-certified channel that may include content that is offensive or inappropriate for some audiences,” Roku’s warning reads.

“Moreover, if Roku determines that this channel violates copyright, contains illegal content, or otherwise violates Roku’s terms and conditions, then ROKU MAY REMOVE THIS CHANNEL WITHOUT PRIOR NOTICE.”

TorrentFreak reached out to Roku to find out how they plan to enforce this policy, but we have yet to hear back. According to Cord Cutters News, several piracy channels have already been removed recently, with other developers opting to leave the platform.

Roku’s General Counsel Steve Kay previously informed us that the company is taking the piracy problem seriously. Together with various stakeholders, they are working hard to address the problem.

“We actively work to prevent third-parties from using our platform to distribute copyright infringing content. Moreover, we have been actively working with other industry stakeholders on a wide range of anti-piracy initiatives,” Kay said.

Roku is not the only platform dealing with the piracy epidemic; the popular media player software Kodi is in the same boat. Kodi has also taken an active anti-piracy stance, but its team isn’t banning any add-ons, believing that would be pointless due to the open source nature of the software.

Source: TF, for the latest info on copyright, file-sharing, torrent sites and ANONYMOUS VPN services.

Why that "file-copy" forensics of DNC hack is wrong

Post Syndicated from Robert Graham original http://blog.erratasec.com/2017/08/why-that-file-copy-forensics-of-dnc.html

People keep asking me about this story about how forensics “experts” have found proof the DNC hack was an inside job, because files were copied at 22-megabytes-per-second, faster than is reasonable for Internet connections.

This story is bogus.

Yes, the forensics is correct that at some point, files were copied at 22 megabytes-per-second (22MB/s, or about 176 megabits-per-second: faster than typical home uplinks, but unremarkable for copies within a LAN or between servers). But there’s no evidence this was the point of Internet transfer out of the DNC.

One point might have been from one computer to another within the DNC. Indeed, as someone experienced doing this sort of hack, it’s almost certain that at some point, such a copy happened. The computers you are able to hack into are rarely the computers that have the data you want. Instead, you have to copy the data from other computers to the hacked computer, and then exfiltrate the data out of the hacked computer.

Another point might have been from one computer to another within the hacker’s own network, after the data was stolen. As a hacker, I can tell you that I frequently do this. Indeed, as this story points out, the timestamps of the files show that the 22MB/s copy happened months after the hack was detected.

If the 22MB/s copy was the one exfiltrating data, it might not have been from inside the DNC building, but from some cloud service, as this tweet points out. Hackers usually have “staging” servers in the cloud that can talk to other cloud servers at easily 10 times that rate, even around the world. I have staging servers that will do this, and indeed, have copied files at this data rate. If the DNC had that data or backups in the cloud, this would explain it.

My point is that while the forensic data-point is good, there’s just a zillion ways of explaining it. It’s silly to insist on only the one explanation that fits your pet theory.

As a side note, you can tell this already from the way the story is told. For example, rather than explain the evidence and let it stand on its own, the stories hype the credentials of those who believe the story, using the “appeal to authority” fallacy.

Game of Thrones Pirates Arrested For Leaking Episode Early

Post Syndicated from Andy original https://torrentfreak.com/game-of-thrones-pirates-arrested-for-leaking-episode-early-170814/

Over the past several years, Game of Thrones has become synonymous with fantastic drama and storytelling on the one hand, and Internet piracy on the other. It’s the most pirated TV show in history, hands down.

With the new season well underway, another GoT drama began to unfold early August when the then-unaired episode “The Spoils of War” began to circulate on various file-sharing and streaming sites. The leak only trumped the official release by a few days, but that didn’t stop people downloading in droves.

As previously reported, the leaked episode stated that it was “For Internal Viewing Only” at the top of the screen and on the bottom right sported a “Star India Pvt Ltd” watermark. The company commented shortly after.

“We take this breach very seriously and have immediately initiated forensic investigations at our and the technology partner’s end to swiftly determine the cause. This is a grave issue and we are taking appropriate legal remedial action,” a spokesperson said.

Now, just ten days later, that investigation has already netted its first victims. Four people have reportedly been arrested in India for leaking the episode before it aired.

“We investigated the case and have arrested four individuals for unauthorized publication of the fourth episode from season seven,” Deputy Commissioner of Police Akbar Pathan told AFP.

The report indicates that a complaint was filed by a Mumbai-based company that was responsible for storing and processing the TV episodes for an app. It has been named locally as Prime Focus Technologies, which markets itself as a Netflix “Preferred Vendor”.

It’s claimed that at least some of the men had access to login credentials for Game of Thrones episodes which were then abused for the purposes of leaking.

Local media identified the men as Bhaskar Joshi, Alok Sharma and Abhishek Ghadiyal, who were employed by Prime Focus, and Mohamad Suhail, a former employee, who was responsible for leaking the episode onto the Internet.

All of the men were based in Bangalore and were interrogated “throughout the night” at their workplace on August 11. Star India welcomed the arrests and thanked the authorities for their swift action.

“We are deeply grateful to the police for their swift and prompt action. We believe that valuable intellectual property is a critical part of the development of the creative industry and strict enforcement of the law is essential to protecting it,” the company said in a statement.

“We at Star India and Novi Digital Entertainment Private Limited stand committed and ready to help the law enforcement agencies with any technical assistance and help they may require in taking the investigation to its logical conclusion.”

The men will be held in custody until August 21 while investigations continue.

Source: TF, for the latest info on copyright, file-sharing, torrent sites and ANONYMOUS VPN services.

AWS CloudHSM Update – Cost Effective Hardware Key Management at Cloud Scale for Sensitive & Regulated Workloads

Post Syndicated from Jeff Barr original https://aws.amazon.com/blogs/aws/aws-cloudhsm-update-cost-effective-hardware-key-management/

Our customers run an incredible variety of mission-critical workloads on AWS, many of which process and store sensitive data. As detailed in our Overview of Security Processes document, AWS customers have access to an ever-growing set of options for encrypting and protecting this data. For example, Amazon Relational Database Service (RDS) supports encryption of data at rest and in transit, with options tailored for each supported database engine (MySQL, SQL Server, Oracle, MariaDB, PostgreSQL, and Aurora).

Many customers use AWS Key Management Service (KMS) to centralize their key management, with others taking advantage of the hardware-based key management, encryption, and decryption provided by AWS CloudHSM to meet stringent security and compliance requirements for their most sensitive data and regulated workloads (you can read my post, AWS CloudHSM – Secure Key Storage and Cryptographic Operations, to learn more about Hardware Security Modules, also known as HSMs).

Major CloudHSM Update
Today, building on what we have learned from our first-generation product, we are making a major update to CloudHSM, with a set of improvements designed to make the benefits of hardware-based key management available to a much wider audience while reducing the need for specialized operating expertise. Here’s a summary of the improvements:

Pay As You Go – CloudHSM is now offered under a pay-as-you-go model that is simpler and more cost-effective, with no up-front fees.

Fully Managed – CloudHSM is now a scalable managed service; provisioning, patching, high availability, and backups are all built-in and taken care of for you. Scheduled backups extract an encrypted image of your HSM from the hardware (using keys that only the HSM hardware itself knows) that can be restored only to identical HSM hardware owned by AWS. For durability, those backups are stored in Amazon Simple Storage Service (S3), and for an additional layer of security, encrypted again with server-side S3 encryption using an AWS KMS master key.

Open & Compatible  – CloudHSM is open and standards-compliant, with support for multiple APIs, programming languages, and cryptography extensions such as PKCS #11, Java Cryptography Extension (JCE), and Microsoft CryptoNG (CNG). The open nature of CloudHSM gives you more control and simplifies the process of moving keys (in encrypted form) from one CloudHSM to another, and also allows migration to and from other commercially available HSMs.

More Secure – CloudHSM Classic (the original model) supports the generation and use of keys that comply with FIPS 140-2 Level 2. We’re stepping that up a notch today with support for FIPS 140-2 Level 3, with security mechanisms that are designed to detect and respond to physical attempts to access or modify the HSM. Your keys are protected with exclusive, single-tenant access to tamper-resistant HSMs that appear within your Virtual Private Clouds (VPCs). CloudHSM supports quorum authentication for critical administrative and key management functions. This feature allows you to define a list of N possible identities that can access the functions, and then require at least M of them to authorize the action. It also supports multi-factor authentication using tokens that you provide.

AWS-Native – The updated CloudHSM is an integral part of AWS and plays well with other tools and services. You can create and manage a cluster of HSMs using the AWS Management Console, AWS Command Line Interface (CLI), or API calls.

Diving In
You can create CloudHSM clusters that contain 1 to 32 HSMs, each in a separate Availability Zone in a particular AWS Region. Spreading HSMs across AZs gives you high availability (including built-in load balancing); adding more HSMs gives you additional throughput. The HSMs within a cluster are kept in sync: performing a task or operation on one HSM in a cluster automatically updates the others. Each HSM in a cluster has its own Elastic Network Interface (ENI).

All interaction with an HSM takes place via the AWS CloudHSM client. It runs on an EC2 instance and uses certificate-based mutual authentication to create secure (TLS) connections to the HSMs.

At the hardware level, each HSM includes hardware-enforced isolation of crypto operations and key storage. Each customer HSM runs on dedicated processor cores.

Setting Up a Cluster
Let’s set up a cluster using the CloudHSM Console:

I click on Create cluster to get started, select my desired VPC and the subnets within it (I can also create a new VPC and/or subnets if needed):

Then I review my settings and click on Create:

After a few minutes, my cluster exists, but is uninitialized:
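
If you prefer to script this step, the same flow is available from the AWS CLI. Here’s a minimal sketch; the subnet IDs are placeholders, and hsm1.medium is the HSM type:

$ aws cloudhsmv2 create-cluster \
      --hsm-type hsm1.medium \
      --subnet-ids subnet-0abc1234 subnet-0def5678

# Poll until the new cluster reaches the UNINITIALIZED state:
$ aws cloudhsmv2 describe-clusters --query 'Clusters[0].State'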

Initialization simply means retrieving a certificate signing request (the Cluster CSR):
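
The CSR is shown in the console, but it can also be saved to a file with the CLI. A sketch, using ID as the cluster ID to match the commands below:

$ aws cloudhsmv2 describe-clusters \
      --filters clusterIds=ID \
      --output text \
      --query 'Clusters[0].Certificates.ClusterCsr' > ID_ClusterCsr.csr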

And then creating a private key and using it to sign the request (these commands were copied from the Initialize Cluster docs and I have omitted the output. Note that ID identifies the cluster):

$ openssl genrsa -out CustomerRoot.key 2048
$ openssl req -new -x509 -days 365 -key CustomerRoot.key -out CustomerRoot.crt
$ openssl x509 -req -days 365 -in ID_ClusterCsr.csr   \
                              -CA CustomerRoot.crt    \
                              -CAkey CustomerRoot.key \
                              -CAcreateserial         \
                              -out ID_CustomerHsmCertificate.crt

The next step is to apply the signed certificate to the cluster using the console or the CLI. After this has been done, the cluster can be activated by changing the password for the HSM’s administrative user, otherwise known as the Crypto Officer (CO).
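
For the CLI route, applying the certificate is a single initialize-cluster call. A sketch, where the certificate files are the ones produced by the openssl commands above:

$ aws cloudhsmv2 initialize-cluster \
      --cluster-id ID \
      --signed-cert file://ID_CustomerHsmCertificate.crt \
      --trust-anchor file://CustomerRoot.crt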

Once the cluster has been created, initialized and activated, it can be used to protect data. Applications can use the APIs in AWS CloudHSM SDKs to manage keys, encrypt & decrypt objects, and more. The SDKs provide access to the CloudHSM client (running on the same instance as the application). The client, in turn, connects to the cluster across an encrypted connection.

Available Today
The new HSM is available today in the US East (Northern Virginia), US West (Oregon), US East (Ohio), and EU (Ireland) Regions, with more in the works. Pricing starts at $1.45 per HSM per hour.

Jeff;

AWS Config Update – New Managed Rules to Secure S3 Buckets

Post Syndicated from Jeff Barr original https://aws.amazon.com/blogs/aws/aws-config-update-new-managed-rules-to-secure-s3-buckets/

AWS Config captures the state of your AWS resources and the relationships between them. Among other features, it allows you to select a resource and then view a timeline of configuration changes that affect the resource (read Track AWS Resource Relationships With AWS Config to learn more).

AWS Config Rules extends Config with a powerful rule system, with support for a “managed” collection of AWS rules as well as custom rules that you write yourself (my blog post, AWS Config Rules – Dynamic Compliance Checking for Cloud Resources, contains more info). The rules (AWS Lambda functions) represent the ideal (properly configured and compliant) state of your AWS resources. The appropriate functions are invoked when a configuration change is detected and check to ensure compliance.

You already have access to about three dozen managed rules. For example, here are some of the rules that check your EC2 instances and related resources:

Two New Rules
Today we are adding two new managed rules that will help you to secure your S3 buckets. You can enable these rules with a single click, or from the CLI as sketched after the list below. The new rules are:

s3-bucket-public-write-prohibited – Automatically identifies buckets that allow global write access. There’s rarely a reason to create this configuration intentionally since it allows unauthorized users to add malicious content to buckets and to delete (by overwriting) existing content. The rule checks all of the buckets in the account.

s3-bucket-public-read-prohibited – Automatically identifies buckets that allow global read access. This will flag content that is publicly available, including web sites and documentation. This rule also checks all buckets in the account.
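
For reference, enabling one of these from the CLI is a single call. A sketch for the write rule; the rule name is arbitrary, while the SourceIdentifier is the managed rule’s identifier:

$ aws configservice put-config-rule --config-rule '{
      "ConfigRuleName": "s3-bucket-public-write-prohibited",
      "Source": {"Owner": "AWS", "SourceIdentifier": "S3_BUCKET_PUBLIC_WRITE_PROHIBITED"}
  }'

# Check its compliance status later on:
$ aws configservice describe-compliance-by-config-rule \
      --config-rule-names s3-bucket-public-write-prohibited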

Like the existing rules, the new rules can be run on a schedule or in response to changes detected by Config. You can see the compliance status of all of your rules at a glance:

Each evaluation runs in a matter of milliseconds; scanning an account with 100 buckets will take less than a minute. Behind the scenes, the rules are evaluated by a reasoning engine that uses some leading-edge constraint solving techniques that can, in many cases, address NP-complete problems in polynomial time (we did not resolve P versus NP; that would be far bigger news). This work is part of a larger effort within AWS, some of which is described in an AWS re:Invent presentation: Automated Formal Reasoning About AWS Systems:

Now Available
The new rules are available now and you can start using them today. Like the other rules, they are priced at $2 per rule per month.

Jeff;

AWS Migration Hub – Plan & Track Enterprise Application Migration

Post Syndicated from Jeff Barr original https://aws.amazon.com/blogs/aws/aws-migration-hub-plan-track-enterprise-application-migration/

About once a week, I speak to current and potential AWS customers in our Seattle Executive Briefing Center. While I generally focus on our innovation process, we sometimes discuss other topics, including application migration. When enterprises decide to migrate their application portfolios they want to do it in a structured, orderly fashion. These portfolios typically consist of hundreds of complex Windows and Linux applications, relational databases, and more. Customers find themselves eager yet uncertain as to how to proceed. After spending time working with these customers, we have learned that their challenges generally fall into three major categories:

Discovery – They want to make sure that they have a deep and complete understanding of all of the moving parts that power each application.

Server & Database Migration – They need to transfer on-premises workloads and database tables to the cloud.

Tracking / Management – With large application portfolios and multiple migrations happening in parallel, they need to track and manage progress in an application-centric fashion.

Over the last couple of years we have launched a set of tools that address the first two challenges. The AWS Application Discovery Service automates the process of discovering and collecting system information, the AWS Server Migration Service takes care of moving workloads to the cloud, and the AWS Database Migration Service moves relational databases, NoSQL databases, and data warehouses with minimal downtime. Partners like Racemi and CloudEndure also offer migration tools of their own.

New AWS Migration Hub
Today we are bringing this collection of AWS and partner migration tools together in the AWS Migration Hub. The hub provides access to the tools that I mentioned above, guides you through the migration process, and tracks the status of each migration, all in accord with the methodology and tenets described in our Migration Acceleration Program (MAP).

Here’s the main screen. It outlines the migration process (discovery, migration, and tracking):

Clicking on Start discovery reveals the flow of the migration process:

It is also possible to skip the Discovery step and begin the migration immediately:

The Servers list is populated using data from an AWS migration service (Server Migration Service or Database Migration Service), partner tools, or using data collected by the AWS Application Discovery Service:

I can click on Group as application to create my first application:

Once I identify some applications to migrate, I can track them in the Migrations section of the Hub:

The migration tools, if authorized, automatically send status updates and results back to Migration Hub, for display on the migration status page for the application. Here you can see that Racemi DynaCenter and CloudEndure Migration have played their parts in the migration:

I can track the status of my migrations by checking the Migration Hub Dashboard:
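
If you’d rather poll status from a terminal, the Migration Hub API exposes the same data. A sketch; the output will be empty until the migration tools start reporting in:

$ aws mgh list-progress-update-streams
$ aws mgh list-migration-tasks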

Migration Hub works with migration tools from AWS and our Migration Partners; see the list of integrated partner tools to learn more:

Available Now
AWS Migration Hub can manage migrations in any AWS Region that has the necessary migration tools available; the hub itself runs in the US West (Oregon) Region. There is no charge for the Hub; you pay only for the AWS services that you consume in the course of the migration.

If you are ready to begin your migration to the cloud and are in need of some assistance, please take advantage of the services offered by our Migration Acceleration Partners. These organizations have earned their migration competency by repeatedly demonstrating their ability to deliver large-scale migration.

Jeff;

Popcorn Time Devs Help Streaming Aggregator Reelgood to ‘Fix Piracy’

Post Syndicated from Ernesto original https://torrentfreak.com/popcorn-time-devs-help-streaming-aggregator-reelgood-to-fix-piracy-170812/

During the fall of 2015, the MPAA shut down one of the most prominent pirate streaming services, Popcorn Time fork PopcornTime.io.

While the service was found to be clearly infringing, many of the developers didn’t set out to break the law. Most of all, they wanted to provide the public with easy access to their favorite movies and TV-shows.

Fast forward nearly two years and several of these Popcorn Time developers are still on the same quest. The main difference is that they now operate on the safe side of the law.

The startup they’re working with is called Reelgood, which can be best described as a streaming service aggregator. The San Francisco-based company, founded by ex-Facebook employee David Sanderson, recently raised $3.5 million and has opened its doors to the public.

The goal of Reelgood is similar to Popcorn Time in the way that it aims to be the go-to tool for people to access their entertainment. Instead of using pirate sources, however, Reelgood stitches together content from various legal platforms, both paid and free.

Reelgood

TorrentFreak spoke to former Popcorn Time developer Luigi Poole, who’s leading the charge on the development of Reelgood’s web app. He stresses that the increasing fragmentation of streaming services, which drives some people to pirate sites, is one of the problems Reelgood hopes to fix.

“There’s a misconception that torrenting is done by bad people who don’t want to pay for content. I’d say, in the vast majority of cases, torrenting is a symptom of the massive fragmentation that’s been given as the only legal option to the consumer,” Poole says.

While people have many reasons to pirate, some stick to unauthorized services because it’s simply too cumbersome to dig through all the legal options. Pirate sites have a single interface to all popular movies and TV-shows and legal platforms don’t.

“The modern TV/movie ecosystem is made up of an increasing number of different services. This makes finding content like changing channels, only more complicated. Is that movie you’re about to buy or rent on a service you already pay for? Right now there’s no way to do this other than a cumbersome search using each service’s individual search. Time to go digging,” Poole says.

“We believe this is the main reason people torrent — it’s just easier, given that the legal options presented to us are essentially a ‘go fetch’ treasure hunt,” he adds.

Flipping that channel on an old school television often beats the online streaming experience. That is, for those who want more than Netflix alone.

And the problem isn’t going away anytime soon. As we reported earlier this week, there’s a trend towards more fragmentation, instead of less. Disney is pulling some of its most popular content from the US Netflix in 2019, keeping piracy relevant.

“The untold story is that consumers are throwing up their hands with all this fragmentation, and turning to torrenting not because it’s free, but because it’s intuitive and easy,” Poole says.

“Reelgood fixes this problem by acting as a pirate site interface for every legal option, sort of like a TV guide to anything streaming, also giving you notifications anytime something is new, letting you track when certain content becomes available, and not only telling you where it’s available but taking you straight there with one click to play.”

Reelgood can be seen as a defragmentation tool, creating a uniform interface for all the legal platforms people have access to. In addition to paid services such as Netflix and HBO, it also lists free content from Fox, CBS, Crackle, and many other providers.

TorrentFreak took it for a spin and it indeed works as advertised. Simply add your streaming service accounts and all will be bundled into an elegant and uniform interface that allows you to watch and track everything with a single click.

The service is still limited to US libraries but there are already plans to expand it to other countries, which is promising. While it may not eradicate piracy anytime soon, it does a good job of trying to organize the increasingly complex streaming landscape.

Unfortunately, it’s still not cheap to use more than a handful of paid services, but that’s a problem even Reelgood can’t fix. Not even with help from seven former Popcorn Time developers.

Source: TF, for the latest info on copyright, file-sharing, torrent sites and ANONYMOUS VPN services.

Piracy Narrative Isn’t About Ethics Anymore, It’s About “Danger”

Post Syndicated from Andy original https://torrentfreak.com/piracy-narrative-isnt-about-ethics-anymore-its-about-danger-170812/

Over the years there have been almost endless attempts to stop people from accessing copyright-infringing content online. Campaigns have come and gone and almost two decades later the battle is still ongoing.

Early on, when panic enveloped the music industry, the campaigns centered around people getting sued. Grabbing music online for free could be costly, the industry warned, while parading the heads of a few victims on pikes for the world to see.

Periodically, however, the aim has been to appeal to the public’s better nature. The idea is that people essentially want to do the ‘right thing’, so once they understand that largely hard-working Americans are losing their livelihoods, people will stop downloading from The Pirate Bay. For some, this probably had the desired effect but millions of people are still getting their fixes for free, so the job isn’t finished yet.

In more recent years, notably since the MPAA and RIAA had their eyes blacked in the wake of SOPA, the tone has shifted. In addition to educating the public, torrent and streaming sites are increasingly being painted as enemies of the public they claim to serve.

Several studies, largely carried out on behalf of the Digital Citizens Alliance (DCA), have claimed that pirate sites are hotbeds of malware, baiting consumers in with tasty pirate booty only to offload trojans, viruses, and God-knows-what. These reports have been ostensibly published as independent public interest documents but this week an advisor to the DCA suggested a deeper interest for the industry.

Hemanshu Nigam is a former federal prosecutor, ex-Chief Security Officer for News Corp and Fox Interactive Media, and former VP Worldwide Internet Enforcement at the MPAA. In an interview with Deadline this week, he spoke about alleged links between pirate sites and malware distributors. He also indicated that warning people about the dangers of pirate sites has become Hollywood’s latest anti-piracy strategy.

“The industry narrative has changed. When I was at the MPAA, we would tell people that stealing content is wrong and young people would say, yeah, whatever, you guys make a lot of money, too bad,” he told the publication.

“It has gone from an ethical discussion to a dangerous one. Now, your parents’ bank account can be raided, your teenage daughter can be spied on in her bedroom and extorted with the footage, or your computer can be locked up along with everything in it and held for ransom.”

Nigam’s stance isn’t really a surprise since he’s currently working for the Digital Citizens Alliance as an advisor. In turn, the Alliance is at least partly financed by the MPAA. There’s no suggestion whatsoever that Nigam is involved in any propaganda effort, but recent signs suggest that the DCA’s work in malware awareness is more about directing people away from pirate sites than protecting them from the alleged dangers within.

That being said and despite the bias, it’s still worth giving experts like Nigam an opportunity to speak. Largely thanks to industry efforts with brands, pirate sites are increasingly being forced to display lower-tier ads, which can be problematic. On top, some sites’ policies mean they don’t deserve any visitors at all.

In the Deadline piece, however, Nigam alleges that hackers have previously reached out to pirate websites offering $200 to $5000 per day “depending on the size of the pirate website” to have the site infect users with malware. If true, that’s a serious situation and people who would ordinarily use ‘pirate’ sites would definitely appreciate the details.

For example, to which sites did hackers make this offer and, crucially, which sites turned down the offer and which ones accepted?

It’s important to remember that pirates are just another type of consumer and they would boycott sites in a heartbeat if they discovered they’d been paid to infect them with malware. But, as usual, the claims are extremely light in detail. Instead, there’s simply a blanket warning to stay away from all unauthorized sites, which isn’t particularly helpful.

In some cases, of course, operational security will prevent some details coming to light but without these, people who don’t get infected on a ‘pirate’ site (the vast majority) simply won’t believe the allegations. As the author of the Deadline piece pointed out, it’s a bit like Reefer Madness all over again.

The point here is that without hard independent evidence to back up these claims, with reports listing sites alongside the malware they’re supposed to have spread and when, few people will respond to perceived scaremongering. Free content trumps a few distant worries almost every time, whether that involves malware or the threat of a lawsuit.

It’ll be up to the DCA and their MPAA paymasters to consider whether the approach is working but thus far, not even having government heavyweights on board has helped.

Earlier this year the DCA launched a video campaign, enrolling 15 attorneys general to publish their own anti-piracy PSAs on YouTube. Thus far, interest has been minimal, to say the least.

At the time of writing the 15 PSAs have 3,986 views in total, with 2,441 of those coming from a single video by Wisconsin Attorney General Brad Schimel. Despite the relative success, even that got slammed with 2 upvotes and 127 downvotes.

A few of the other videos have a couple of hundred views each but more than half have less than 70. Perhaps most worryingly for the DCA, apart from the Schimel PSA, none have any upvotes at all, only down. It’s unclear who the viewers were but it seems reasonable to conclude they weren’t entertained.

The bottom line is nobody likes malware or having their banking details stolen but yet again, people who claim to have the public interest at heart aren’t actually making a difference on the ground. It could be argued that groups advocating online safety should be publishing guides on how to stay protected on the Internet period, not merely advising people to stay away from certain sites.

But of course, that wouldn’t achieve the goals of the MPAA-backed Digital Citizens Alliance.

Source: TF, for the latest info on copyright, file-sharing, torrent sites and ANONYMOUS VPN services.

DMCA Used to Remove Ad Server URL From Easylist Ad Blocklist

Post Syndicated from Andy original https://torrentfreak.com/dmca-used-to-remove-ad-server-url-from-easylist-ad-blocklist-170811/

The default business model on the Internet is “free” for consumers. Users largely expect websites to load without paying a dime but of course, there’s no such thing as a free lunch. To this end, millions of websites are funded by advertising revenue.

Sensible sites ensure that any advertising displayed is unobtrusive to the visitor but lots seem to think that bombarding users with endless ads, popups, and other hindrances is the best way to do business. As a result, ad blockers are now deployed by millions of people online.

In order to function, ad-blocking tools – such as uBlock Origin or Adblock – utilize lists of advertising domains compiled by third parties. One of the most popular is Easylist, which is distributed by authors fanboy, MonztA, Famlam, and Khrin, under dual Creative Commons Attribution-ShareAlike and GNU General Public Licenses.

With the freedom afforded by those licenses, copyright tends not to figure high on the agenda for Easylist. However, a legal problem that has just raised its head is causing serious concern among those in the ad-blocking community.

Two days ago a somewhat unusual commit appeared in the Easylist repo on Github. As shown in the image below, a domain URL previously added to Easylist had been removed following a DMCA takedown notice filed with Github.

Domain text taken down by DMCA?

The DMCA notice in question has not yet been published but it’s clear that it targets the domain ‘functionalclam.com’. A user called ‘ameshkov’ helpfully points out a post by a new Github user called ‘DMCAHelper’ which coincided with the start of the takedown process more than three weeks ago.

A domain in a list circumvents copyright controls?

Aside from the curious claims of a URL “circumventing copyright access controls” (domains themselves cannot be copyrighted), the big questions are (i) who filed the complaint and (ii) who operates Functionalclam.com? The domain WHOIS is hidden but according to a helpful sleuth on Github, it’s operated by anti-ad-blocking company Admiral.

Ad-blocking means money down the drain….

If that is indeed the case, we have the intriguing prospect of a startup attempting to protect its business model by using a novel interpretation of copyright law to have a domain name removed from a list. How this will pan out is unclear but a notice recently published on Functionalclam.com suggests the route the company wishes to take.

“This domain is used by digital publishers to control access to copyrighted content in accordance with the Digital Millenium Copyright Act and understand how visitors are accessing their copyrighted content,” the notice begins.

Combined with the comments by DMCAHelper on Github, this statement suggests that the complainants believe that interference with the ad display process (ads themselves could be the “copyrighted content” in question) represents a breach of section 1201 of the DMCA.

If it does, that could have huge consequences for online advertising but we will need to see the original DMCA notice to have a clearer idea of what this is all about. Thus far, Github hasn’t published it but already interest is growing. A representative from the EFF has already contacted the Easylist team, so this battle could heat up pretty quickly.

Source: TF, for the latest info on copyright, file-sharing, torrent sites and ANONYMOUS VPN services.

Automating Blue/Green Deployments of Infrastructure and Application Code using AMIs, AWS Developer Tools, & Amazon EC2 Systems Manager

Post Syndicated from Ramesh Adabala original https://aws.amazon.com/blogs/devops/bluegreen-infrastructure-application-deployment-blog/

Previous DevOps blog posts have covered the following use cases for infrastructure and application deployment automation:

An AMI provides the information required to launch an instance, which is a virtual server in the cloud. You can use one AMI to launch as many instances as you need. It is a security best practice to customize and harden your base AMI with required operating system updates. If you are using AWS native services for continuous security monitoring and operations, you are strongly encouraged to bake into the base AMI agents such as those for Amazon EC2 Systems Manager (SSM), Amazon Inspector, CodeDeploy, and CloudWatch Logs. A customized and hardened AMI is often referred to as a “golden AMI.” The use of golden AMIs to create EC2 instances in your AWS environment allows for fast and stable application deployment and scaling, secure application stack upgrades, and versioning.

In this post, using the DevOps automation capabilities of Systems Manager and the AWS developer tools (CodePipeline, CodeDeploy, CodeCommit, CodeBuild), I will show you how to use AWS CodePipeline to orchestrate end-to-end blue/green deployments of a golden AMI and application code. Systems Manager Automation is a powerful security feature for enterprises that want to mature their DevSecOps practices.

Here are the high-level phases and primary services covered in this use case:

 

You can access the source code for the sample used in this post here: https://github.com/awslabs/automating-governance-sample/tree/master/Bluegreen-AMI-Application-Deployment-blog.

This sample will create a pipeline in AWS CodePipeline with the building blocks to support the blue/green deployments of infrastructure and application. The sample includes a custom Lambda step in the pipeline to execute Systems Manager Automation to build a golden AMI and update the Auto Scaling group with the golden AMI ID for every rollout of new application code. This guarantees that every new application deployment is on a fully patched and customized AMI in a continuous integration and deployment model. This enables the automation of hardened AMI deployment with every new version of application deployment.

 

 

We will build and run this sample in three parts.

Part 1: Setting up the AWS developer tools and deploying a base web application

Part 1 of the AWS CloudFormation template creates the initial Java-based web application environment in a VPC. It also creates all the required components of Systems Manager Automation, CodeCommit, CodeBuild, and CodeDeploy to support the blue/green deployments of the infrastructure and application resulting from ongoing code releases.

Part 1 of the AWS CloudFormation stack creates these resources:

After Part 1 of the AWS CloudFormation stack creation is complete, go to the Outputs tab and click the Elastic Load Balancing link. You will see the following home page for the base web application:

Make sure you have all the outputs from the Part 1 stack handy. You need to supply them as parameters in Part 3 of the stack.

Part 2: Setting up your CodeCommit repository

In this part, you will commit and push your sample application code into the CodeCommit repository created in Part 1. To access the initial git commands for cloning the empty repository to your local machine, go to the AWS CodeCommit console and click Connect. Make sure you have the IAM permissions required to access AWS CodeCommit from the command line interface (CLI).

After you’ve cloned the repository locally, download the sample application files from the part2 folder of the Git repository and place the files directly into your local repository. Do not include the aws-codedeploy-sample-tomcat folder. Go to the local directory and type the following commands to commit and push the files to the CodeCommit repository:

git add .
git commit -a -m "add all files from the AWS Java Tomcat CodeDeploy application"
git push

After all the files are pushed successfully, the repository should look like this:

 

Part 3: Setting up CodePipeline to enable blue/green deployments     

Part 3 of the AWS CloudFormation template creates the pipeline in AWS CodePipeline and all the required components.

a) Source: The pipeline is triggered by any change to the CodeCommit repository.

b) BuildGoldenAMI: This Lambda step executes the Systems Manager Automation document to build the golden AMI. After the golden AMI is successfully created, a new launch configuration referencing the new AMI is applied to the Auto Scaling group of the application deployment group. You can watch the progress of the automation in the EC2 console from the Systems Manager –> Automations menu. (A CLI sketch for kicking off the same automation by hand appears after this list.)

c) Build: This step uses the application build spec file to build the application build artifact. Here are the CodeBuild execution steps and their status:

d) Deploy: This step clones the Auto Scaling group, launches the new instances with the new AMI, deploys the application changes, reroutes the traffic from the elastic load balancer to the new instances and terminates the old Auto Scaling group. You can see the execution steps and their status in the CodeDeploy console.
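
As promised above, here’s a sketch of driving the golden AMI automation by hand from the CLI; the document name and source AMI are placeholders (use the values created by the Part 1 stack):

$ aws ssm start-automation-execution \
      --document-name "GoldenAMIAutomationDoc" \
      --parameters "sourceAMIid=ami-12345678"

# Watch progress without the console:
$ aws ssm describe-automation-executions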

After the CodePipeline execution is complete, you can access the application by clicking the Elastic Load Balancing link. You can find it in the output of Part 1 of the AWS CloudFormation template. Any subsequent commits to the application code in the CodeCommit repository trigger the pipeline and roll out the infrastructure and application code on an updated AMI.

 

If you have feedback about this post, add it to the Comments section below. If you have questions about implementing the example used in this post, open a thread on the Developer Tools forum.


About the author

 

Ramesh Adabala is a Solutions Architect on the Southeast Enterprise Solution Architecture team at Amazon Web Services.

Internet Archive Blocked in 2,650 Site Anti-Piracy Sweep

Post Syndicated from Andy original https://torrentfreak.com/internet-archive-blocked-in-2650-site-anti-piracy-sweep-170810/

Reports of sites becoming mysteriously inaccessible in India have been a regular occurrence over the past several years. In many cases, sites simply stop functioning, leaving users wondering whether sites are actually down or whether there’s a technical issue.

Due to their increasing prevalence, fingers are often pointed at so-called ‘John Doe’ orders, which are handed down by the court to prevent Internet piracy. Often sweeping in nature (and in some cases pre-emptive rather than preventative), these injunctions have been known to block access to both file-sharing platforms and innocent bystanders.

Earlier this week (and again for no apparent reason), the world renowned Internet Archive was rendered inaccessible to millions of users in India. The platform, which is considered by many to be one of the Internet’s most valued resources, hosts more than 15 petabytes of data, a figure which grows on a daily basis. Yet despite numerous requests for information, none was forthcoming from authorities.

The ‘blocked’ message seen by users accessing Archive.org

Quoted by local news outlet Medianama, Chris Butler, Office Manager at the Internet Archive, said that their attempts to contact the Indian Department of Telecom (DoT) and the Ministry of Electronics and Information Technology (Meity) had proven fruitless.

Noting that the site had previously been blocked in India, Butler said they were no clearer on the reasons why the same kind of action had seemingly been taken this week.

“We have no information about why a block would have been implemented,” he said. “Obviously, we are disappointed and concerned by this situation and are very eager to understand why it’s happening and see full access restored to archive.org.”

Now, however, the mystery has been solved. The BBC says a local government agency provided a copy of a court order obtained by two Bollywood production companies who are attempting to slow down piracy of their films in India.

Issued by a local judge, the sweeping order compels local ISPs to block access to 2,650 mainly file-sharing websites, including The Pirate Bay, RARBG, the revived KickassTorrents, and hundreds of other ‘usual suspects’. However, it also includes the URL for the Internet Archive, hence the problems with accessibility this week.

The injunction, which appears to be another John Doe order as previously suspected, was granted by the High Court of the Judicature at Madras on August 2, 2017. Two film production companies – Prakash Jah Productions and Red Chillies Entertainment – obtained the order to protect their films Lipstick Under My Burkha and Jab Harry Met Sejal.

While India-based visitors to blocked resources are often greeted with a message saying that domains have been blocked at the orders of the Department of Telecommunications, these pages never give a reason why.

This always leads to confusion, with news outlets having to pressure local government agencies to discover the reason behind the blockades. In the interests of transparency, providing a link to a copy of a relevant court order would probably benefit all involved.

A few hours ago, the Internet Archive published a statement questioning the process undertaken before the court order was handed down.

“Is the Court aware of and did it consider the fact that the Internet Archive has a well-established and standard procedure for rights holders to submit take down requests and processes them expeditiously?” the platform said.

“We find several instances of take down requests submitted for one of the plaintiffs, Red Chillies Entertainments, throughout the past year, each of which were processed and responded to promptly.

“After a preliminary review, we find no instance of our having been contacted by anyone at all about these films. Is there a specific claim that someone posted these films to archive.org? If so, we’d be eager to address it directly with the claimant.”

But while the Internet Archive appears to be the highest profile collateral damage following the ISP blocks, it isn’t the only victim. Now that the court orders have become available (1,2), it’s clear that other non-pirate entities have also been affected, including news site WN.com, website hosting service Weebly, and French ISP Free.fr.

Also, in a sign that sites aren’t being checked to see if they host the movies in question, one of the orders demands that former torrent index BitSnoop is blocked. The site shut down earlier this year. The same is true for Shaanig.org.

This is not the first time that the Internet Archive has been blocked in India. In 2014/2015, Archive.org was rendered inaccessible after it was accused of hosting extremist material. In common with Google, the site copies and stores huge amounts of data, much of it in automated processes. This can leave it exposed to these kinds of accusations.

Source: TF, for the latest info on copyright, file-sharing, torrent sites and ANONYMOUS VPN services.